Inspired by a recent talk from Richard Stallman.
From Slashdot:
Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t really understand what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”
Sometimes I think that even though we are in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give an outsized aura to LLMs that they don’t deserve.
Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.


Debating whether to call it “AI” or “sparkling humanity killer” is kind of orthogonal. Even before LLMs and the like, “AI” was loosely defined and not widely understood. You’re not going to fix that longstanding issue while also trying to stop the LLMs from stealing the work and creativity of people while cooking the planet.
I’m curious why you said “LLM” in your example? If you feel highlighting a distinction isn’t important, why not just say “AI” again?
It wasn’t an example; it was a reference to the time before the advent of LLMs, since I was discussing popular conceptions of what AI meant before we got it even more wrong recently.
But why are you continuing to refer to “AI” as “LLMs” if you believe the very act of making a distinction between “AI” and “LLMs” is orthogonal?
You seem very confused. So I can better understand what you’re asking, can you quote the part of what I wrote that reads to you as if I’m calling AI “LLMs”?