I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.
I am building it! Or, well, not it anymore but a product that is heavily based on it.
I think we as a company recognize that, like, 95% of AI products right now are shit. And that the default path from here is that power continues to concentrate at the large labs like OpenAI, which haven’t been behaving particularly well.
But we also believe that there are good ways to use it. We hope to build one.
The thing your boss is asking you to do is shitty. However, TBQH humanity doesn’t really know what LLMs are useful for yet. It’s going to be a long process to find that out, and trying it in places where it isn’t helpful is part of that.
Lastly, even if LLMs don’t turn out to be useful for real work, there is something interesting and worth exploring there. Look at researcher/writers like nostalgebraist and Janus - they’re exploring what LLMs are like as beings. Not that they’re conscious, but rather that there are interesting and complex things going on in there that we don’t understand yet. The overarching feeling in my workplace is that we’re in a frontier time, where clever thinking and exploration will be rewarded.
Asking you as it seems like you’re somebody working in the AI field: how can I avoid whatever you and others in your field are doing? Do I just have to go offline?
I’m not against that idea, but unfortunately I do have debts to pay off at least for the next 45 months or so and my career requires me to use the internet (I’m a web developer). Once the debt is paid, I’m free.
One of my managers is like that, I’ve known him for about 5 years and he’s been the biggest idiot I’ve ever met the entire time. But ever since AI came out he’s turned it up to 11.
Fortunately my other manager can’t stand him, and they have blazing arguments, so generally speaking if he tells me to do something I don’t like / want to do, I go and tattle.
You have two managers? Do you actually report to both of them?
Semi-toxic and stupid. I got the fucking AI cert they wanted me to, but instead of getting hooked up with clients to use it, I’ve been stuck doing the fucking test automation since the start of this awful career.
I hear there’s some sort of AI mandate coming but no idea what it is yet. A few coworkers poke at ChatGPT for basic coding questions. I will not use it for anything at this point. IT here is mostly useless contractors, so they can’t tell what we’re doing and they can’t make us do anything. My direct reporting chain will back me working how I want to work, so I don’t foresee any issues.
I fully recognize I’m in a highly privileged position that many others aren’t. But I’m going to take full advantage and keep my sanity.
This is all of tech right now.
We get encouraged to try out AI tools for various purposes to see where we can find value out of them, if any. There are some use-cases where the tech makes sense when wielded correctly, and in those cases I make use of it. In other cases, I don’t.
So far, I suspect we may be striking a decent balance. I have, however, noticed a concerning trend of people copy-pasting unfiltered slop as a response to various scenarios, which is obviously not helpful.
Our devs are implementing some ML for anomaly detection, which seems promising.
There’s also an LLM with MCP etc. that writes the pull requests and some documentation at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don’t mind. Stuff like “make a graph of usage on weekdays” and it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said “I can regenerate this report with actual data if it is made available to me”.
As someone who has done various kinds of anomaly detections, it always seems promising until it hits real world data and real world use cases.
There are some widely recognised papers in the field about exactly this issue.
Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess the ML or LLM would be most useful to me in finding problems that I wasn’t looking for.
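To make the distinction concrete: once the anomaly is defined, the “regular alert” can be as simple as a threshold check. A minimal sketch, assuming a made-up latency metric and a hypothetical 2000 ms limit:

```shell
# Minimal sketch of a rule-based alert for an already-defined anomaly.
# The metric ("latency") and the 2000 ms threshold are made-up examples.
threshold=2000

check_latency() {
  if [ "$1" -gt "$threshold" ]; then
    echo "ALERT: ${1}ms exceeds ${threshold}ms"
  else
    echo "ok"
  fi
}

check_latency 1500   # within the limit
check_latency 2600   # fires the alert
```

No model to train or babysit; the trade-off is exactly the one above: a rule like this only fires on problems you already knew to look for.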
Not quite that extreme where I am, but it is being thrust into any kind of strategy scenario with absolutely nothing to back it up. They are desperate to incorporate it.
Some people are using it for work purposes when there isn’t a major policy on it.
You can tell because the work is shit.
I vibe code from time to time because people sometimes demand quick results on an unachievable timeline. That said, I may use an LLM to generate the base code that provides a basic solution to what is needed, and then I go over the code and review/refactor it line by line. Sometimes, if time is severely pressed and the code is waaaay off a bare minimum, I’ll have the LLM revise the code to solve some of the problem, and then I review, adjust, amend where needed.
I treat AI as a tool and (frustrating and annoying) companion in my work, but ultimately I review and adjust and amend (and sometimes refactor) everything. It’s kind of similar to when you are reading code samples from websites, copying it if you can use it, and refactoring it for your app, except tailored a bit more to what you need already…
By the same token, I also prefer to do it all myself if I can, so if I’m not pressed for time, or I know it’s something that I can do quickly, I’ll do it myself.
My “company” is tiny, and only employs myself, one colleague, and an assistant. We’re accountants.
We self host some models from huggingface.
We don’t really use these as part of any established workflow. Thinking of some examples …
This week my colleague used a model to prep a simple contract between herself and her daughter, whereby her daughter would perform whatever chores and she would pay for cello lessons.
My assistant used an AI thing to parse some scanned bank statements, so this one is work related. The alternative is bashing out the dates, descriptions, and amounts manually. Using traditional OCR for this purpose doesn’t really save any time because hunting down all the mistakes and missed decimal places takes a lot of effort. Parsing this way takes about a third of the time, and it’s less mentally taxing. However, this isn’t a task we regularly perform because obviously in the vast majority of cases we can get the data instead of printed statements.
I was trying to think of the proper term for an English word which has evolved from some phrase or whatever, like “steering board” became “starboard”. The Gen AI suggested portmanteau, but I actually think there’s a better word I just haven’t remembered yet.
I had it create a bash one liner to extract a specific section from a README.md.
I asked it to explain the method of action of diazepam.
My feelings about AI are that it’s pretty great for specific niche tasks like this. Like the bash one liner. It took 30 seconds to ask and I got an immediate, working solution. Without Gen AI I just wouldn’t be able to grep whatever section from a README - not exactly a life changing super power, but a small improvement to whatever project I was working on.
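For context, the kind of one-liner described might look something like this awk sketch (the section name “Install” and the throwaway README are made up for the demo):

```shell
# Hypothetical sketch: print the body of one "## " section of a markdown file,
# stopping at the next "## " heading.
extract_section() {
  awk -v h="## $1" '$0 == h {flag=1; next} /^## / {flag=0} flag' "$2"
}

# Demo against a throwaway README:
printf '# Demo\n## Install\nrun make\n## Usage\nrun app\n' > README.md
extract_section Install README.md   # prints: run make
```

Exactly the sort of thing that is fiddly to write from memory but trivial to verify once generated.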
In terms of our ability to do our work and deliver results for clients, it’s a 10% bump to efficiency and productivity when used correctly. Gen AI is not going to put us out of a job.
Intolerable
I work in public education healthcare. A few people are using magic school or chatgpt to write goals and treatment notes, and then generate report cards. The only discussion has been these people doing brief demonstrations in department meetings.
Obsessive.
thankfully i’m custodial so they’re just relieved i know how to use the timeclock (not kidding) lol






