People thinking they’re AI experts because of prompts is like claiming to be an aircraft engineer because you booked a ticket.
I have had in person conversations with multiple people who swear they have fixed the AI hallucination problem the same way. “I always include the words ‘make sure all of the response is correct and factual without hallucinating’”
These people think they are geniuses thanks to just telling the AI not to mess up.
Thanks to being in person with a rather significant running context, I know they are being dead serious, and no one will dissuade them from thinking their “one weird trick” works.
All the funnier when, inevitably, they get a screwed-up response one day and feel betrayed because they explicitly told it not to screw up…
But yes, people take “prompt engineering” very seriously. I have seen people proudly display massively verbose prompts that often looked like more work than just doing the thing themselves without an LLM. They really think it’s a very sophisticated, hard-to-acquire skill…
Have you tried to not be depressed?
“Do not hallucinate”, lol… The best way to get a model to not hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
“ChatGPT, please do not lie to me.”
“I’m sorry Dave, I’m afraid I can’t do that.”
That’s incorrect because in order to lie, one must know that they’re not saying the truth.
LLMs don’t lie, they bullshit.
Fabulating even!
It’s incredible by now how many LLM users don’t know that it merely predicts the next most probable words. It doesn’t know anything. It doesn’t know that it’s hallucinating, or even what it is saying at all.
One thing that is enlightening is why the seahorse-emoji LLM confusion happens.
The model has one thing to predict: can it produce a specified emoji, yes or no? Well, some Reddit thread swore there was a seahorse emoji (among others), so it decides “yes”, and then easily predicts the next words to be “here it is:”. At that point, and not an instant before, it actually tries to generate the indicated emoji, and here, and only here, it fails to find anything of sufficient confidence. But the preceding words demand an emoji, so it generates the wrong emoji. Then, knowing the previous token wasn’t a match, it generates a sequence of words to try again and again…
It has no idea what it is building toward; it builds the result one token at a time. It’s wild how well that works, but it frequently lands in territory where the previously generated tokens have backed it into a corner and the best fit for the subsequent tokens is garbage.
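For what it’s worth, the mechanism the comments above describe can be sketched in a few lines. This is a toy with a hand-written probability table, not a real model; the tokens and probabilities are invented purely for illustration:

```python
# Toy sketch of autoregressive decoding. The "model" is just a hand-made
# table of next-token probabilities, standing in for a neural network.
# It shows the key property: each token is chosen only from the tokens
# already emitted, with no plan for where the sentence is heading.

NEXT_TOKEN_PROBS = {
    ("is", "there"): {"a": 0.9, "any": 0.1},
    ("there", "a"): {"seahorse": 0.8, "horse": 0.2},
    ("a", "seahorse"): {"emoji": 0.95, "?": 0.05},
    ("seahorse", "emoji"): {"?": 0.6, "yes": 0.4},
}

def greedy_decode(tokens, steps):
    for _ in range(steps):
        context = tuple(tokens[-2:])           # only a window of the past
        candidates = NEXT_TOKEN_PROBS.get(context)
        if not candidates:
            break                              # no probable continuation
        # Pick the single most probable next token. Earlier tokens fully
        # determine the choice; there is no lookahead or "goal".
        tokens.append(max(candidates, key=candidates.get))
    return tokens

print(greedy_decode(["is", "there"], 4))
```

Once “emoji” has been emitted, the context demands a continuation whether or not a good one exists, which is exactly the corner the seahorse case backs itself into.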
I didn’t think prompt engineering was a skill until I read some of the absolute garbage some of my ostensibly degree-qualified colleagues were writing.
Reminds me of the very early days of the web, when you had people with the title “webmaster”. When you looked deeper into the supposed skillset, it was people who knew a bare minimum of HTML and how to manage a tree of files.
I’ll never forget being at an ATM and overhearing a conversation between two women in their 30s behind me - the one woman tells the other - “I’ve been thinking about what I want to do and I think I want to be a webmaster”. It just sounded like a very casual choice and one about making money, and not much deeper than that.
This was in 1999 or so. I thought - man, this industry is so fucked right now - we have hiring managers, recruiters, etc…that have almost no idea of the difference in skillsets between what I do (programming, architecture, networking, database, and then trying to QA all of that and keep it running in production, etc.) and people calling themselves “webmasters”.
Sure enough, not long after, the dotcom bubble popped. It was painful for everyone (even people that kept their distance from the dotcom thing to an extent) without question, whether you had skills or no. But I don’t think roles like “webmaster” did very well…
this has to be satire xD
…right? right?? 😭
Everything lately seems like satire, but sadly it’s the world we live in.
It’s Twitter, which encourages engagement, so it can be assumed to be rage bait.

This is bait. She’s trying to lure sloppers into checking her posts. Someone is stealing her prompts? Oh boy, they must be really good then!
Anything to get them clicks.
Integrity, from an LLM that steals data from everywhere to build its database.
“Stop using everyone’s words in the order everyone uses them; they are my words, and they are my order”.
It’s worse than your typical creative claim on copyright of something like a poem, because prompts are by definition functional more than creative, and typically contain too few purely expressive elements to meet the threshold of originality. They managed to put prompts in a worse position than boilerplate code in terms of protection, lol
What assholes. They can all go fuck themselves for stealing everything.
“Make your own prompts” misses by one step. Use of AI robs you of the opportunity to learn/practice/hone your skills in a certain area. Why would someone use AI for any reason other than to get out of having to learn something? Do you expect LLMs to be the best source of how to learn [blank]? Which [youtuber/podcaster/old bridgetroll/televangelist/fascist/fishnet chat lightbulb] would you suggest explains [blank] better, because frankly at this point I’m fucking invested.
oh no.
rtfm being replaced by wyop.
wyop
write your own prompt
Use of AI robs you of the opportunity to learn/practice/hone your skills in a certain area. Why would someone use AI for any reason other than to get out of having to learn something?
This is not really a good argument against AI. Almost everything ever invented was invented to avoid doing something else that would take more time.
Why would anyone use animation software other than to avoid learning to draw your frames in sequence?
Why would anyone use a loom other than to avoid having to learn how to weave?
Why would anyone read a book other than to avoid learning by experience and experimentation?
Yes, but the 3D animation software doesn’t do the thinking for you, nor replace your artistic vision.
To use a loom you have to learn how to use a loom. You may skip weaving, but you become familiar with textiles anyway.
You’re optimising for less physical effort. That means you work faster, but you don’t grow stronger or more dexterous.
If you solely use AI, then you’re optimising for less thinking. So what happens then?
You’re optimising out your sole evolutionary advantage as a human, by delegating your thinking to another entity.
Look, I don’t want to be in a position of defending the plagiarism machines that are burning the world’s forests whilst simultaneously somehow using all of the world’s fresh water, but come on. The vast majority of people who are using AI image generation are not people who would otherwise have been involved in the creative process.
They are people who want to avoid learning Photoshop (reasonable - it takes a long time to learn, which may or may not be worth it given what you want to do, and also Adobe sucks shit) or want to avoid paying someone who knows how to use Photoshop (understandable - and would obviously be worth consideration if it weren’t for all of the other problems with AI).
When you attack AI on the basis of it making people lazy - rather than any of the other things that are wrong with it - it just comes across as “Luddite”. (Which is ironic, given that Luddism was originally about machinery resulting in worse working conditions for skilled workers, which is one thing AI actually will do.)
I’m a senior full-stack developer of 15 years, and more recently, a new tech lead at an AI startup. I’m definitely not attacking AI as a concept in general.
I work with AI agents every day and all day. That’s how I develop and plan our systems. It did not start that way. I was absolutely against the use of AI during development, but a few months back I needed the assistance because I developed carpal tunnel syndrome, so that’s what I automated: just the typing and the implementation of low-level logic, so that my wrists can heal. But do you know what stock AI agents do to code when not given proper guidance? Ask any real developer and they’ll tell you about vibe coding. I guarantee these are not going to be success stories.
I’m not just judging people for being lazy, because lazy people like me will innovate ways to stay lazy by inventing/optimising new shit that allows them to stay lazy. That’s a survival instinct and an evolutionary selection mechanism: minimising energy investment while doing the same thing as everyone around you is an evolutionary advantage.
No. What I’m judging them for is delegating their critical thinking capacity to an external entity, and stunting their own cognitive growth (their literal reason for existing in the first place, their continuity mechanism to stay in the gene pool, and their sole means of improving at being long-term lazy) by being short-term lazy. Makes sense?
Now to generative AI (for the multimedia substrate):
The vast majority of people you speak of are now polluting the collective “training set” with diluted slop distilled from all art historically created thus far, because the content generation equation went from: X people creating Y novel pieces of art per year, to X models creating Y million images per day, all thanks to a handful of idiots with more greed/money than common-sense. That diluted pool is ever-expanding, growing geometrically, and burying actual novelty with each new image Susan generates and shares for her new “Katz Rule” instagram profile.
The thing is: the next model will be trained on that averaged set, and the next, and the next. With each day and each generation increasing in conformity. And that set is what we’re stuck with for new inspiration (and future models) now. Because everyone is looking at screens for inspiration, and not at mountains or rivers, or even the real stars in the sky at night because we ruined that too.
All while we’re doing the things you just mentioned.
All thanks to a few assholes with more selfishness than common-sense chasing after unlimited quarterly growth in a very limited space that’s closing around us fast.
I’m a senior full-stack developer of 15 years, and more recently, a new tech lead (specifically a Systems Architect) at an AI startup. I’m definitely not attacking AI as a concept in general.
I, too, am a developer of closer to 15 years than I’d like to admit to myself, though mostly embedded and/or back-end. And while I have no problem with AI in its broad sense (obviously machine learning/spicy statistics, computer vision, and natural language processing and whatnot have potential to be enormously useful), I am generally hostile to generative AI. I think using copyrighted material as training data without the copyright holders’ permission should be banned. And while I would have no objection to ethically-trained models in a hypothetical future where we have abundant clean energy to run the data centres and also all the new desalination plants we would need, that also remains a problem, and so I have resisted using such tools at work, too.
Now to generative AI (for the multimedia substrate)…
I agree with everything you’ve said after this point.
No. What I’m judging them for is delegating their critical thinking capacity to an external entity, and stunting their own cognitive growth (their literal reason for existing in the first place, their continuity mechanism to stay in the gene pool, and their sole means of improving at being long-term lazy) by being short-term lazy. Makes sense?
This is the crux of my problem. I find it to be overly judgemental. If you’re self-employed and you need a website for your business or whatever, then you could pay someone to do it for you, but then you only have so much money in the budget. You could also learn how to code and/or graphic design and do it yourself, but then you only have so many hours in the day. If vibe coding produced something viable for you in the quickest, cheapest way, then that is obviously the rational and sensible thing to do. You might even spend the time learning something else instead that is more relevant to your interests.
Using generative AI to do something doesn’t (necessarily) mean that you don’t value the knowledge or skills required to do it the hard way, it only means that you value it less than something else that you might otherwise be doing with your time, and I don’t think that is a moral failing.
As an example, I occasionally like to do a bit of shitposting. Were it not for all those other things that I don’t like about generative AI, I would probably be generating AI slop memes with the best of them. As it is I mostly just stick to text-based comments with bad puns and references to song lyrics no-one will remember. I could put in the hours to learn how to use GIMP so I could do it without AI, but quite frankly I have books on the go, I’ve got a couple of musical instruments to learn/practise, and I spend all day at my software job, where I think critically (or so I claim), so I’d rather be doing those things instead. I don’t think I have neglected my cognitive growth; I’ve just chosen to focus it on something different to what you might have.
Ohhh. I think we’re both defending different hills! I’m not against the use of generative AI for purposeful creation. What I’m against is the delegation of critical thinking.
It’s the difference between:
- “Implement this specific feature this specific way. Never disable type checking or relax type strictness, never solve a problem using trial and error, consult documentation first, don’t make assumptions and stop and ask for guidance if you’re unsure about anything”
- “Paint me a photorealistic depiction of a galaxy spinning around the wick of a candle”
(That last one is admittedly my own guilty contribution to the slop soup and favourite desktop background of at least a whole year)
Versus:
- “build me an e-shop”
- “draw me a cat”.
The difference is oversight and vision. The first two are asking AI to execute well-defined tasks with explicit parameters and rules, the first example in particular offers the LLM an out if it finds itself at an impasse.
The latter examples are asking a prediction engine to predict a vague concept. Don’t expect originality/innovation from something that was forcibly constrained to pick from a soup made of prior art and then locked down, because that’s essentially what gradient descent does to the neural network during training: it reduces the error margin by restricting the possible solutions for any given problem to what is possible within the training set, which is also known as plagiarism.
Edit: a slight elaboration on the last part:
Neural networks trained with gradient descent will do the absolute minimum to reach a solution. That’s the nature of the training process.
What this essentially means is that effort scales with prompt complexity! A simple/basic prompt gets you the most generic result possible, because it allows the network to slide along the shortest path from the input tokens to a very predictable result.
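As a hedged illustration of the training process being described, here is gradient descent in its most stripped-down form. The loss function is an arbitrary one-parameter stand-in, not anything from a real network; the point is only that each step does the minimum possible: nudge the parameter downhill on the observed error.

```python
# Minimal sketch of gradient descent on a 1-D loss.
# loss(w) = (w - 3)^2 is an invented example with its minimum at w = 3;
# a real network has billions of parameters but the update rule is the same:
# follow the slope of the error downhill, nothing more.

def gradient_descent(w, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)   # analytic derivative of (w - 3)^2
        w -= lr * grad       # step in the direction that reduces the error
    return w

w = gradient_descent(w=0.0)
print(round(w, 3))  # settles at the minimum-error solution, w ≈ 3
```

Nothing in the loop rewards novelty: the update only ever moves toward whatever minimises the error on the data it was shown.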
Right, because you don’t believe art has value or requires thinking. It’s essentially worthless to you.
What people with a critical eye for art can tell you is that art also has well-defined tasks with explicit parameters and rules, and generative AI produces uncreative slop that looks amateurish if you have an eye for art. It’s pretty typical for people to believe generative AI is better at tasks that they, personally, know nothing about - it’s a Dunning-Kruger machine.
But…
If you know how to draw frames in sequence, you’ll be better at using the animation software.
If you know the intricacies of weaving, you’ll be more efficient with the loom.
And it follows that if you know how to code, then you would be more efficient with GitHub copilot, yes?
LLMs are a pretty good tool to summarize any subject, or to present it with different words or from a different angle… They are a statistical word-predictor tool, after all.
So yeah, if you understand that LLMs:
- don’t possess intelligence;
- just reproduce patterns from the training material used;
- can’t possibly contain ALL the “knowledge” from the training material;
- are directly influenced by the “context” provided;
Then, I’d say that LLMs can be used as a very good facilitator to learn about almost any subject that has already been documented in any word format in almost any language.
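A minimal sketch of what “providing context” from that last bullet can look like in practice. The facts, question, and wording are all invented for illustration, and the actual call to a model API is deliberately omitted; only the prompt construction is shown:

```python
# Hypothetical example of grounding a question in factual data, so the
# model predicts from the supplied text rather than from memory alone.

facts = "Store hours: Mon-Fri 9:00-17:00, closed weekends."
question = "Is the store open on Saturday?"

prompt = (
    "Answer using ONLY the context below. "
    "If the context does not contain the answer, say so.\n\n"
    f"Context: {facts}\n\n"
    f"Question: {question}"
)

print(prompt)
```

The escape hatch (“say so”) matters as much as the context itself: it gives the predictor a high-probability path that isn’t a fabricated answer.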
LLMs are a pretty good tool to summarize any subject
They’re not even good at that.
It feels like, as the models were iterated on, they got worse at it over time. I wonder if it’s because of all the guard-railing and internal censorship, etc.
It’s just a messy process; it’s purely based on probability. I don’t think it’ll ever get good, people were just being lied to. Don’t forget that they relied on “confirmation bias” and “selection bias” from the users as well.
your bulleted list has me suspecting you used an llm to write this. TRAITOR

irony is dead, and we have killed him.
True slopmasters know that they need to ask an llm to generate a prompt, because who understands GenAI better than another GenAI /s
Is GenAI the new term for Generation Alpha?
No, it’s short for Generative Artificial Intelligence. It’s capital i and not the lowercase L
QQ
You think it’s funny to screenshot other people’s prompts huh?
Isn’t this a bit counterintuitive considering the nature of AI 😑
Then stop sharing your prompts like idk what to tell you