Inspired by a recent talk from Richard Stallman.

From Slashdot:

Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t understand really what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”

Sometimes I think that even though we are in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give an outsized aura to LLMs that they don’t deserve.

Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.

  • Rhaedas@fedia.io
    2 days ago

    It changes the argument away from the objective of ethics and safety, and toward the words being used. One can use the inaccurate wording while debating its characteristics and problems. It’s far too late to control what marketing and public ignorance have set. I wasn’t a fan of the “AI slop” term, as it’s morphed into a general-purpose word for dismissing anything that’s not liked or agreed with, nowhere near its original narrow meaning. But it’s a word that is now used all the time, and that’s how words are created and become authentic: by usage.

    The issue of ethics is still important, even though the chance to fix it has long passed. We still have to have the discussion. The issue of safety in general for AI is something that has been shelved by both sides, and even though it’s primarily an AGI topic, it still applies to non-intelligent LLMs and other systems. If we don’t focus on it, it’s a dead end for us. It doesn’t have to be Terminator-like to be bad for civilization, and it doesn’t even have to be aware. “Dumb” AI is maybe even worse for us, and yet it’s been embraced as something more.

    But if the argument is about what we call it rather than what’s actually happening, nothing will be solved. One can refer to it as AI in a discussion and still talk about its actual defining functions (LLMs and so forth). That might even make the point stronger instead of deflecting to what it’s called.