- cross-posted to:
- technology@lemmy.ml
We’re now at the “if you don’t, your competitor will” stage. So you really have no choice. There are people who don’t use Google anymore and just use ChatGPT for all their questions.
I think there are real productivity gains to be had, but the vast majority are probably leaning too far into the idea of replacing people. It helps me do my job, but I’m still the decision maker and I need to review the outputs. I’m still accountable for what AI gives me, so I’m not willing to blindly pass that stuff forward.
Yeah. The Dunning-Kruger effect is a real problem here.
I saw a meme saying something like: gen AI is a real expert in everything but completely clueless about my area of specialisation.
As in… it generates plausible answers that seem great but they’re just terrible answers.
I’m a consultant in a legal-adjacent field, 20 years deep. I’ve been using a model from Hugging Face over the last few months.
It can save me time by generating a lot of boilerplate with references et cetera. However, it very regularly overlooks critically important components. If I didn’t already know about these things, I wouldn’t know they were missing from the answer.
So really, it can’t help you be more knowledgeable; it can only support you at your existing level.
Additionally, for complex or very specific questions, it’s just a confidently incorrect failure. It sucks that it can’t tell you how confident it is in a given answer.

So I’ll be getting job interviews soon? Right?
Nope, they’ll be hiring outsourced employees instead: AI = always Indians. On the very same Reddit post, they said it’s already happening. It’s going to get worse.
“Well, we could hire humans… but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets’ worth of training data! We’re almost there!”
One more lane bro I swear
Every technology invented is a double-edged sword. One edge propels a deluge of misinformation, LLM hallucinations, brainwashing of the masses, and exploitation for profit. The better edge advances progress in science, well-being, and the availability of useful knowledge. Like the nuclear bomb, LLM “AI” is currently in its infancy and is being used as a weapon; there is a literal race over who makes the “biggest, best” fkn “AI” to dominate the world. Eventually the over-optimistic bubble bursts and the reality of the flaws and risks will kick in. (Hopefully…)

Does anybody have the original study? I tried to find it but the link is dead (looks like NANDA pulled it).
Pets.com all over again
I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.
For someone like me who isn’t good with scripting, AI can actually fill an educational role. Or at least point me in the right direction so I can complete the rest myself.
I feel like we could find ways and tools to help in that situation without stealing the entirety of human knowledge, boiling our planet, and spending a small nation’s GDP. Like better code library discovery or a better mentor environment amongst coders.
I’ve also seen plenty of people get pointed in the exact wrong way to do things by leaning on generative AI and then have to spend even more time getting back on track.
It’s as if it’s a bubble or something…
And the next deepseek is coming out soon
deleted by creator
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
Why the British accent, and which one?!
Like David Attenborough, not a Tesco cashier. Sounds smart and sophisticated.
It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
Blindsight by Peter Watts, right? Incredible story. Can recommend.
Yep that’s it. Really enjoyed it, just starting Echopraxia.
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
I only read Children of Time. I need to get off my ass
Highly recommended. Children of Ruin was hella spooky, and Children of Memory had me crying a lot. Good stories!
It’s “hypotheses” btw.
Hypothesiseses
In before someone mentions P-zombies.
I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.
Yeah maybe don’t use LLMs
Can you share the prompt you used for making this happen? I think I could use it for a bunch of different things.
This was 3 weeks ago. I don’t remember it, sorry.
You actually did it? That’s really ChatGPT’s response? It’s a great answer.
Yeah, this is ChatGPT 4. It’s scary how good it is at generating responses, but like it said: it’s not to be trusted.
This feels like such a double head fake. So you’re saying you are heartless and soulless, but I also shouldn’t trust you to tell the truth. 😵💫
Everything I say is true. The last statement I said is false.
It’s got a lot of stolen data to source and sell back to us.
Stop believing your lying eyes !
I think it was just summarising the article, not giving an “opinion”.
The reply was a much more biased take than the article itself. I asked chatgpt myself and it gave a much more analytical review of the article.
Go learn simple regression analysis (not necessarily the commenter, but anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word is. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.
We simply set it to max churn on all data.
Also just the training of these models has already done the energy damage.
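The “prediction machine” point can be made concrete with a toy sketch: a bigram model that counts which word follows which in a tiny corpus and always guesses the most frequent successor. This is obviously nothing like a real LLM’s implementation (no neural network, no billions of parameters); the corpus and function names are made up for illustration, but the core idea of probability-weighted next-token guessing is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then "generate" by picking the most probable follow-up.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1  # tally each observed (word, next word) pair

def predict(word):
    # Most frequent successor = highest-probability guess.
    return follow[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

Scale the corpus up to the whole internet and replace the counting table with a transformer, and you have the shape of the thing: a likely follow-up, not an understood answer.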
It’s extrapolating from data.
AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.
I’d still call it extrapolation: it creates new stuff based on previous data. Is it novel (like science) and creative? Nah, but it’s new. Otherwise I couldn’t give it simple stuff and let it extend it.
We are using the word extend in different ways.
It’s like statistics. If you have extreme data points A and B, then the algorithm is great at generating new values between the known data. Ask it for new values outside of {A, B}, to extend into the unknown, and it falls over (usually). True in both traditional statistics and machine learning.
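A quick statistical sketch of that point, using a polynomial fit as a stand-in for any learned model (the function and ranges here are arbitrary choices for illustration): fit a cubic to sine samples on a known interval, then compare its error inside the interval against its error well outside it.

```python
import numpy as np

# "Training data": sine samples on the known range [0, pi] (the {A, B} interval).
x_train = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

# Interpolation: a point between the known data is predicted well.
interp_err = abs(np.polyval(coeffs, np.pi / 2) - np.sin(np.pi / 2))

# Extrapolation: a point outside [0, pi] falls over badly.
extrap_err = abs(np.polyval(coeffs, 2 * np.pi) - np.sin(2 * np.pi))

print(interp_err, extrap_err)  # extrapolation error is far larger
```

Inside the training range the fit tracks the curve closely; step outside it and the polynomial shoots off, exactly the “falls over” behaviour described above.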
There is and always will be […] fancy ass business rules behind it all.
Not if you run your own open-source LLM locally!
hello, welcome to taco bell, i am your new ai order specialist. would you like to try a combo of the new dorito blast mtn dew crunchwrap?
spoken at a rate of 5 words a minute to every single person in the drive thru. the old people have no idea how to order from a computer using keywords.
The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI bro posts in here. I had to log out before I accidentally replied to one.