Inspired by a recent talk from Richard Stallman.

From Slashdot:

Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t understand really what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”

Sometimes I think that even though we are in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give an outsized aura to LLMs that they don’t deserve.

Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.

  • CodenameDarlen@lemmy.world · 2 days ago

    It’s the popular term. In the end, the meaning doesn’t really matter as long as everybody agrees on what we’re talking about.

    Don’t get too attached to the scientific meaning of things.

    • Flaqueman@sh.itjust.works · 2 days ago

      Just like “atom” means “that which cannot be cut”, but it turns out you can actually split them into protons, neutrons and electrons. We just named them that way, and although the name no longer matches reality, we stick to the term.

      • technocrit@lemmy.dbzer0.com · 2 days ago (edited)

        Yeah but reasonable people will readily agree that atoms are not “atomic” in the sense of being indivisible.

        On the other hand grifters are really out here saying that computers are “intelligent”.

        It’s also worth pointing out that people really did think atoms were indivisible but they updated their model based on new evidence. Meanwhile grifters never had any basis for their claims of “intelligence” and they will never change their grift despite overwhelming evidence.

      • James R Kirk@startrek.websiteOP · 2 days ago (edited)

        But in your example the scientists didn’t just stick to the term “atoms”. New terms (“protons”, “neutrons”, etc.) were created to describe the new things.

        They didn’t abandon the term “atom”; they kept its definition and created new words for the things that didn’t meet that definition.

        • IrateAnteater@sh.itjust.works · 2 days ago

          A proton, neutron, and orbiting electron is still referred to as a hydrogen atom. The term “atom” was never abandoned.

    • James R Kirk@startrek.websiteOP · 2 days ago

      the meaning doesn’t really matter as long as everybody has the same agreement on what we’re talking about.

      But we don’t agree. Tech companies are using the same term to describe ChatGPT and Data from Star Trek when they’re not the same thing.

      One of those things can get fucked, the other is a sentient being who (as we all know) does the fucking. Not to mention Data was an AI before OpenAI ever existed!

      • mushroommunk@lemmy.today · 2 days ago

        It’s annoying and messy but language evolves and changes.

        Hell, there’s a whole category of words that are their own opposite, called contronyms. So “AI” can mean both things, and I’d argue that makes it a contronym (meaning either slop or actual artificial intelligence, depending on context).

        If you want to fix it then you need to tackle English as a whole and fix English (which hey I’m right there with you, give me Welsh any day instead).

        • James R Kirk@startrek.websiteOP · 2 days ago

          “Language evolves and changes” describes evolving and changing language, not keeping it the same. Language evolves into more specific definitions, not less specific.

          For example: you might say “LLMs are one form of intelligence”. I don’t agree with that, but it makes logical sense. But claiming “LLMs are the same thing as intelligence” broadens the definition of “intelligence” into a much wider umbrella. If you want to change that definition then you also need to invent a new word that means “non-LLM intelligence”.

    • technocrit@lemmy.dbzer0.com · 2 days ago

      the meaning doesn’t really matter as long as everybody has the same agreement on what we’re talking about.

      There is absolutely no agreement. It’s grifters versus the truthful.

  • geekwithsoul@piefed.social · 2 days ago

    Debating whether to call it “AI” or “sparkling humanity killer” is kind of orthogonal. Even before LLMs and the like, “AI” was loosely defined and not widely understood. You’re not going to fix that longstanding issue when also trying to stop the LLMs from stealing the work and creativity of people while cooking the planet.

    • James R Kirk@startrek.websiteOP · 2 days ago

      I’m curious why you said “LLM” in your example. If you feel highlighting the distinction isn’t important, why not just say “AI” again?

      • geekwithsoul@piefed.social · 2 days ago

        It wasn’t an example, it was a pointer in time to before the advent of LLMs as I was discussing popular conceptions of what AI meant before we got it even more wrong recently.

        • James R Kirk@startrek.websiteOP · 15 hours ago (edited)

          But why do you keep referring to “AI” as “LLMs”, if you believe the very act of making a distinction between “AI” and “LLMs” is orthogonal?

          • geekwithsoul@piefed.social · 16 hours ago

            You seem very confused. So I can better understand what you’re asking, can you quote the part of what I wrote that reads to you as if I’m calling AI “LLMs”?

  • WolfLink@sh.itjust.works · 15 hours ago

    The term “Artificial Intelligence” has historically been used by computer scientists to refer to any “decision making” program of any complexity, even something extremely simple, like solving a maze by following the left wall.
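WolfLink’s example is concrete enough to write down. Here is a sketch of that left-wall maze solver (the grid format and function name are my own invention), which is rather the point: by the classical definition, even these few lines count as “AI”.

```python
# A toy "AI" in the classical sense: a maze solver that follows the left
# wall. Note the caveat: wall-following only works in simply-connected
# mazes, i.e. mazes whose walls all connect to the outer boundary.

# Headings ordered clockwise as (row, col) deltas: up, right, down, left.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def solve_maze(grid, start, goal, max_steps=10_000):
    """Walk from start to goal keeping the left hand on the wall.

    grid is a list of equal-length strings; '#' is a wall, anything
    else is open floor. Returns the list of visited cells, or None.
    """
    rows, cols = len(grid), len(grid[0])

    def is_open(r, c):
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] != "#"

    pos, heading = start, 1  # start out facing right
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            return path
        # Prefer turning left, then straight, then right, then turning
        # around: this is exactly "keep your left hand on the wall".
        for turn in (-1, 0, 1, 2):
            d = (heading + turn) % 4
            nr, nc = pos[0] + DIRS[d][0], pos[1] + DIRS[d][1]
            if is_open(nr, nc):
                heading, pos = d, (nr, nc)
                path.append(pos)
                break
    return None  # goal unreachable, or step budget exhausted
```

It is “decision making” of the most mechanical kind, yet it sits comfortably inside the historical umbrella of the term.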

  • Darkcoffee@sh.itjust.works · 2 days ago

    “Slop Constructors” is what I call them. It’s good to remember that calling them “AI” helps with the fake hype.

  • x1gma@lemmy.world · 2 days ago

    I disagree with this post and with Stallman.

    LLMs are AI. What people are actually confused about is what AI is and what the difference between AI and AGI is.

    There is no universal definition for AI, but there are multiple definitions which are mostly very similar: AI is the ability of a software system to perform tasks that typically involve human intelligence, like learning, problem solving, decision making, etc. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence — which we don’t have.

    Since this definition is rather broad, there is an additional classification. ANI, artificial narrow intelligence, or weak AI, is an intelligence inferior to human intelligence, which operates purely rule-based and for specific, narrow use cases. This is what LLMs, self-driving cars, and assistants like Siri or Alexa fall into.

    AGI, artificial general intelligence, or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously based on its perception and knowledge. It can transfer past knowledge to new situations, and learn. It’s a theoretical construct that we have not achieved yet, and no one knows when or if we will even achieve it — and unfortunately it’s also one of the first things people think about when AI is mentioned.

    ASI, artificial super intelligence, is basically an AGI with an intelligence that is superior to a human in all aspects. It’s the apex predator of all AI: better, smarter, faster at anything than a human could ever be. Even more theoretical.

    Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn’t be doing the same as the tech bros.

    • III@lemmy.world · 2 days ago

      Can you share the prompt you gave to ChatGPT to get this, I have questions and I want to cut out the middle man.

      • x1gma@lemmy.world · 1 day ago

        Feel free to ask your questions, I’ll gladly answer them. Before making stupid and smug claims, maybe you should’ve run my post through literally any AI text detector before embarrassing yourself.

    • James R Kirk@startrek.websiteOP · 15 hours ago

      If I’m reading correctly it sounds like you do agree with Stallman’s main point that a casual distinction is needed, you just disagree on the word itself (“ANI” vs “generator”).

      • x1gma@lemmy.world · 8 hours ago

        No, I think the distinction is already made and there are words for that. Adding additional terms like “generators” or “pretend intelligence” does not help in creating clarity. In my opinion, the current definitions/classifications are enough. I get Stallman’s point, and his definition of intelligence seems to be different from how I would define intelligence, which is probably the main disagreement.

        I definitely would call an LLM intelligent. Even though it does not understand the context the way a human could, it is intelligent enough to create an answer that is correct. Doing this by basically pure stochastics is pretty intelligent in my book. My car’s driving assistant, even if it’s not fully self-driving, is pretty damn intelligent and understands the situation I’m in: adapting speed, understanding signs, reacting to what other drivers do. I definitely would call that intelligent. Is it human-like intelligence? Absolutely not. But for this specific, narrow use case it works pretty damn well.

        His main point seems to be breaking the hype, but I do not think that it will or can be achieved like that. This will not convince the tech bros or investors. People who are simply uninformed will not understand an even more abstract concept.

        In my opinion, we should educate people more on where the hype is actually coming from: NVIDIA. Personally, I hate Jensen Huang, but he’s been doing a terrific job as CEO of NVIDIA, unfortunately. They’ve positioned themselves as the hardware supplier and infrastructure layer for the core components of AI, and are investing in and partnering with AI providers, hyperscalers, and other component suppliers in a circle of cashflow. Any investment they make, they get back multiplied, which also boosts all other related entities. The only thing that went “10x” as promised by AI is NVIDIA stock. They are bringing capex to a whole new level currently.

        And that’s what we should be discussing more, instead of clinging to words. Every word that any company claims about AI should automatically be assumed to be a lie, especially any AI claim from a hyperscaler, AI provider, or hardware supplier, and especially-especially from NVIDIA. Every single claim they make directly relates to revenue. Every positive claim is revenue. Every negative word is loss. In this circle of money they are running, we’re talking about thousands of billions USD. People have done way worse, for way less money.

  • myedition8@lemmy.world · 2 days ago

    This is why I call chatbots “LLMs” and refer to image and video generators as “slop generators”. It isn’t AI; software can’t be intelligent.

  • hperrin@lemmy.ca · 2 days ago

    I think it’s AI. The artificial part is key. There’s no real intelligence there, just like there’s no real grass in an artificial lawn.

    • James R Kirk@startrek.websiteOP · 2 days ago

      Historically the “artificial” part of “AI” implied the intelligence was real, but “constructed” or “not naturally evolved”.