A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • TrackinDaKraken@lemmy.world · 11 days ago

    I think it’s worse when they get it right only some of the time. It’s not a matter of opinion; it should not change its “mind”.

    The fucking things are useless for that reason: they’re all just guessing, literally.

      • m0darn@lemmy.ca · 11 days ago

        Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

        • Iconoclast@feddit.uk · 11 days ago

          It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.

          It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

          So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
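          To make “patterns and probabilities” concrete, here is a toy sketch of next-word sampling. The bigram table and its numbers are invented for illustration only; a real model uses learned weights over tens of thousands of tokens, not a hand-written dict.

```python
import random

# Toy "language model": for each context word, a probability
# distribution over possible next words. All numbers are made up.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.2, "ran": 0.8},
}

def next_word(context, rng):
    """Sample the next word from the model's distribution."""
    dist = bigram_probs[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# The same context can yield different continuations on different runs.
print(next_word("the", random.Random(0)))
print(next_word("the", random.Random(1)))
```

          Nothing in that loop knows what a “question” is; it only ever picks a likely next word.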

          • KeenFlame@feddit.nu · 11 days ago

            Yes, it guesstimates. What is wrong with you, to argue about semantics like that?

          • vii@lemmy.ml · 11 days ago

            It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

            I know some humans that applies to.

          • SuspciousCarrot78@lemmy.world · 10 days ago

            Fair point. Counterpoint:

            Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.

            You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.

            Granted, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.

            But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.

            TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV but I believe I can argue this claim (with evidence if so needed).

            • Iconoclast@feddit.uk · 10 days ago

              No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.

              I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.

              The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.

              The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.

              • SuspciousCarrot78@lemmy.world · 10 days ago

                I think we’re probably on the same page, tbh. OTOH, I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.

                I like your cruise control+ analogy. It’s not quite self-driving… but it’s not quite just cruise control, either. Something halfway.

                LLMs don’t have human understanding or metacognition, I’m almost certain.

                But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That’s weird to think about. It’s something halfway.

                With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.

                And then… I don’t know what happens after that. There’s going to come a time where we cross that point and we just can’t tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.
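                A minimal sketch of what “fail-closed” could mean here, with a hypothetical fact table and a model stub (nothing below is a real API):

```python
# Toy fail-closed wrapper: only surface the model's answer when it can
# be verified against a trusted source; otherwise refuse.
# TRUSTED_FACTS and model_answer are invented stand-ins.

TRUSTED_FACTS = {"capital of France": "Paris"}

def model_answer(question):
    # Stand-in for an LLM call: returns a plausible-sounding string.
    return "Paris" if "France" in question else "Atlantis"

def answer_fail_closed(question):
    guess = model_answer(question)
    verified = TRUSTED_FACTS.get(question)
    if verified is not None and guess == verified:
        return guess  # verified against retrieval, pass it through
    return "I can't verify that."  # fail closed instead of guessing
```

                The point is the default: when verification is unavailable, the wrapper refuses rather than passing along a fluent guess.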

        • vii@lemmy.ml · 11 days ago

          This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.

    • merc@sh.itjust.works · 10 days ago

      It’s not literally guessing, because guessing implies it understands there’s a question and is trying to answer that question. It’s not even doing that. It’s just generating words that you could expect to find nearby.

    • Iconoclast@feddit.uk · 11 days ago

      Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.

      Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.

      • Urist@leminal.space · 11 days ago

        Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.

        Which is exactly why LLMs are useless.

        • Iconoclast@feddit.uk · 11 days ago

          Which is exactly why LLMs are useless.

          800 million weekly ChatGPT users disagree with that.

          • RichardDegenne@lemmy.zip · 11 days ago

            And there are 1.3 billion smokers in the world according to the WHO.

            Does that make cigarettes useful?

            • Iconoclast@feddit.uk · 11 days ago

              Something being useful doesn’t imply it’s good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.

              A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.

              • Urist@leminal.space · 11 days ago

                A torture device is useful for extracting information.

                No it fucking isn’t! This is a great analogy, actually, thank you for bringing it up. A person being tortured will tell you literally anything that they believe will stop you from torturing them. They will confess to crimes that never happened, tell you about all their accomplices who don’t exist, and all their daily schedules that were made up on the spot. Torture is useless but morons think it is useful. Just like AI.

                • Womble@piefed.world · 10 days ago

                  Torture can be a useful way of extracting information if you have a way to instantly verify it, which actually makes it a good analogy to LLMs. If I want the password to your laptop, I torture you until you give me a password, and if I can log in with it, then it worked.

                  • [deleted]@piefed.world · 10 days ago

                    If you can instantly verify it then you don’t need the torture.

                    Getting the person to volunteer the information is proven to be far, far more successful, and being able to instantly verify means you know when you have the answers.

                  • JcbAzPx@lemmy.world · 10 days ago

                    In fact, it can never be a useful way of extracting information. Even randomly guessing is a better way to get the information you want than torture.

    • Tetragrade@leminal.space · 11 days ago

      Same takeaway as the article (everyone read the article, right?).

      Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?

      • CileTheSane@lemmy.ca · 10 days ago

        Having read the article (you read the article, right?), what gave you the impression the AI was asked the question at different points in time?

        • Tetragrade@leminal.space · 10 days ago

          The AI was asked the same question repeatedly and gave different answers, due to its randomised structure.

          People will also often do this (I have, personally), but because our actions seem to be strongly influenced by time-dependent stuff (like sense perception and short-term memory contents), I’d expect you’d need to ask at different times.
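          A toy sketch of where that randomness comes from: temperature sampling over the model’s scores. The logits and the two-answer setup are invented for illustration, not taken from any real model.

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to two answers.
logits = {"Yes": 2.0, "No": 1.6}

def sample_answer(temperature, rng):
    """Softmax over the logits, then sample. Higher temperature flattens
    the distribution, so identical prompts can get different answers."""
    scaled = {a: math.exp(score / temperature) for a, score in logits.items()}
    total = sum(scaled.values())
    answers = list(scaled)
    weights = [scaled[a] / total for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(42)
# Ten runs of the "same question" need not agree.
print([sample_answer(1.0, rng) for _ in range(10)])
```

          At a temperature near zero the highest-scoring answer wins almost every time; at higher temperatures, disagreement between runs becomes routine.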

          • CileTheSane@lemmy.ca · 10 days ago

            My answer to this question will not change if you ask me a year from now because, as OP said, this is not a matter of opinion: there is a factually correct answer.