• FishFace@piefed.social · 1 day ago

      LLMs work by picking the next word*, the most likely candidate given the model’s training and the current context. Sometimes it gets into a situation where the model’s view of the “context” effectively doesn’t change when the word is appended, so the next pick is the same word again. Then the same thing happens, and around we go. There are fail-safe mechanisms that try to prevent this, but they don’t work perfectly.

      *Token
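
      A minimal sketch of that failure mode in Python, assuming plain greedy decoding; next_token_logits is a made-up stand-in for a real model’s forward pass, not any actual API:

      ```python
      # Toy greedy decoding loop. Once "or" is the last token, the (fake)
      # model's scores stop changing, so argmax picks "or" forever.

      def next_token_logits(context):
          # Hypothetical stand-in for an LLM forward pass.
          if context and context[-1] == "or":
              return {"or": 0.9, "and": 0.05, ".": 0.05}
          return {"or": 0.5, "cats": 0.3, ".": 0.2}

      def generate(context, max_tokens=10):
          for _ in range(max_tokens):
              scores = next_token_logits(context)
              context.append(max(scores, key=scores.get))  # greedy pick
          return context

      print(generate(["dogs"]))  # ['dogs', 'or', 'or', 'or', ...]
      ```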

      • ideonek@piefed.social · 23 hours ago

        That was the answer I was looking for. So it’s similar to the “seahorse” emoji case, but this time at some point he just glitched so that the most likely next word for the sentence is “or”, and after adding that “or” the next one is also “or”, and after adding that one it’s also “or”, and after an 11th one… you may just as we’ll commit, since that’s the same context as with 10.

        Thanks!
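
        The fail-safe FishFace mentioned is often a repetition penalty that down-weights tokens already in the history. A rough sketch with invented numbers (real implementations, e.g. the repetition_penalty option in Hugging Face transformers, apply this to logits):

        ```python
        # Rough sketch of a repetition penalty with made-up scores. A mild
        # penalty leaves a dominant token dominant, which is why loops
        # still slip through.

        def pick_token(scores, history, penalty=1.2):
            adjusted = {t: s / penalty if t in history else s
                        for t, s in scores.items()}
            return max(adjusted, key=adjusted.get)

        scores = {"or": 0.9, ".": 0.05}
        print(pick_token(scores, ["or"] * 10))              # still "or" (0.75 vs 0.05)
        print(pick_token(scores, ["or"] * 10, penalty=30))  # "." finally wins
        ```

        A penalty strong enough to always break loops would also wreck legitimate repetition, which is roughly why these fail-safes don’t work perfectly.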

          • ideonek@piefed.social · 19 hours ago

            Chill dude. It’s a grammatical/translation error, not an ideological declaration. It’s an especially common mistake if your native language has “grammatical gender”. Everything has a “gender” in mine. “Spoon” is a “she”, for example, but I’m not proposing to one any time soon. Not all hills are worth nitpicking on.

            • atomicbocks@sh.itjust.works · 19 hours ago

              This one is. People need to stop anthropomorphizing AI. It’s a piece of software.

              I am chill, you shouldn’t assume emotion from text.

              • ideonek@piefed.social · 19 hours ago

                As I explained, this is a specific example; I’m no more anthropomorphizing it than if I called my toilet paper “he”. The monster you chose to charge is a windmill. So “chill” seems adequate.

                • ulterno@programming.dev · 18 hours ago

                  Yeah. It would have been much more productive to poke at the “well”, which was turned into “we’ll”.

                • atomicbocks@sh.itjust.works · 18 hours ago

                  To be clear, using gendered pronouns for inanimate objects is the literal definition of anthropomorphization. So “chill” does not seem fair at all.

              • MotoAsh@piefed.social · 7 hours ago

                Using ‘he’ in a sentence is a far cry from the parts of anthropomorphizing “AI” that actually matter…

    • Arghblarg@lemmy.ca · 1 day ago

      The LLM showed its true nature: a probabilistic bullshit generator caught in a strange attractor of some sort within its own matrix of lies.

    • Ech@lemmy.ca · 20 hours ago

      It’s like the text predictor on your phone. If you just keep hitting the next suggested word, you’ll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.
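
      A toy version of that suggestion strip, as a bigram counter over a tiny made-up corpus, always accepting the top suggestion:

      ```python
      # Toy phone-style predictor: count bigrams in a tiny corpus, then
      # always accept the top suggestion. With only one word of context,
      # it falls into a loop almost immediately.
      from collections import Counter, defaultdict

      corpus = ("i want to go to the store and i want to see you "
                "and i want to go").split()

      bigrams = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          bigrams[prev][nxt] += 1

      word, out = "i", ["i"]
      for _ in range(12):
          word = bigrams[word].most_common(1)[0][0]  # top suggestion
          out.append(word)

      print(" ".join(out))  # "i want to go to go to go to go ..."
      ```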

      • vaultdweller013@sh.itjust.works · 4 hours ago

        Example of my phone doing this.

        I just want you are the only reason that you can’t just forget that I don’t have a way that I have a lot to the word you are not even going on the phone and you can call it the other way to the other one I know you are going out to talk about the time you are not even in a good place for the rest they’ll have a little bit more mechanically and the rest is.

        You can see it looping pretty damned quickly, with me just hitting the first suggestion after the initial “I”.

        • MrScottyTay@sh.itjust.works · 1 hour ago

          I think I will be in the office tomorrow so I can do it now and then I can do it now and then I can do it for you and your dad and dad and dad and dad and dad and dad and dad and dad and dad and dad

          That was mine haha

    • palordrolap@fedia.io · 23 hours ago

      Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an “A, B and/or C” structure tend to sound more punchy, knowledgeable and authoritative.

      Yes, I did do that on purpose.

  • Alex@lemmy.ml · 24 hours ago

    If you have ever read the “thought” process of some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.
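
    One crude way to catch such loops, sketched here as a plausible approach rather than any vendor’s actual method, is to flag the stream once an n-gram repeats back-to-back (decoders expose related knobs, e.g. no_repeat_ngram_size in Hugging Face transformers):

    ```python
    # Flag a token stream once it ends in an immediately repeated n-gram.

    def looping(tokens, max_n=6):
        for n in range(1, max_n + 1):
            if len(tokens) >= 2 * n and tokens[-n:] == tokens[-2 * n:-n]:
                return True
        return False

    trace = "is 4 because 2 plus 2 is 4 because 2 plus 2 is 4".split()
    print(looping(trace))  # True: "because 2 plus 2 is 4" repeats
    ```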