• PrimeMinisterKeyes@leminal.space · 6 months ago

      Mr. Grady, you were the caretaker here. I recognize you. I saw your picture in the newspapers. You chopped your wife and daughters up into little bits. And then you blew your brains out.

      BTW, I just now realized that Shelley Duvall died last year. Such a great actress.

  • psycho_driver@lemmy.world · 6 months ago

    I searched for Arby’s Pulled Pork earlier to see if it was a limited-time deal (it is, though I didn’t see an end date, and yes, they’re pretty decent). Gemini spit out some basic and probably factual information about the sandwich, then showed a picture of a bowl of mac and cheese.

  • crank0271@lemmy.world · 6 months ago

    What do you get for the person who has everything and wishes each of those things were smaller?

  • yesman@lemmy.world · 6 months ago

    When I read that, in my head it’s spoken to the music of “Revolution 9” by the Beatles.

  • FourWaveforms@lemm.ee · 6 months ago

    I’ve seen models do this in benchmarks; it’s how they respond without reinforcement learning. Also, this is probably fake.

    • huppakee@lemm.ee · 6 months ago

      I’ve had a bunch of questionable Google AI answers already. None as weird as this, but enough to make me believe this one might not be fake either.

      • gaja@lemm.ee · 6 months ago

        I’ve used an unhealthy amount of AI, and this is nothing. There was an audio bug in ChatGPT that made the assistant scream: the volume and pitch increased and persisted even after I exited speech mode. It happened several times, and I even saved a screen recording, but I don’t have it on my phone anymore. Repetition is very common, though.

  • AnarchistArtificer@lemmy.world · 6 months ago

    Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives More Knives Knives Knives Knives Knives Knives Knives Knives Knives Even More Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives All the Knives Knives Knives Knives Knives Knives Knives Knives Knives

    • Phoenicianpirate@lemm.ee · 6 months ago

      Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers

  • skisnow@lemmy.ca · 6 months ago

    What’s frustrating to me is that there are a lot of people who fervently believe their favourite model can think and reason like a sentient being, and whenever something like this comes up, it just gets handwaved away with things like “wrong model”, “bad prompting”, “just wait for the next version”, “poisoned data”, etc.

    • uuldika@lemmy.ml · 6 months ago

      This really is a model/engine issue, though. The Google Search model is unusably weak because it’s designed to run trillions of times per day in milliseconds. Even so, repetition this egregious usually means a numerical problem happened somewhere, like the SolidGoldMagikarp incident.

      Think of it this way: language models are trained to find the most likely completion of a text. Answers like “you should eat 6-8 spiders per day for a healthy diet” are (superficially) likely; there’s a lot of text on the Internet with that pattern. Clanging like “a set of knives, a set of knives, …” isn’t likely, mathematically.
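
      As a toy sketch of that scoring (the numbers here are made up for illustration, not from any real model): an autoregressive LM assigns a completion the product of its per-token conditional probabilities, so each extra repetition of a phrase multiplies in another factor below 1 and the total probability decays geometrically.

      ```python
      import math

      def completion_logprob(step_probs):
          # log P(completion) = sum of log P(token | prefix) over the steps
          return sum(math.log(p) for p in step_probs)

      # Hypothetical numbers: suppose each extra "a set of knives," has
      # probability 0.2 under a healthy model. Fifty repeats of it are
      # then astronomically unlikely.
      print(completion_logprob([0.2]))       # about -1.6 nats
      print(completion_logprob([0.2] * 50))  # about -80.5 nats, i.e. ~1e-35
      ```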

      Last year there was an incident where ChatGPT went haywire: small numerical errors in the computations would snowball, so after a few coherent sentences the model would start sundowning, clanging and rambling and responding with word salad. The problem in that case was bad CUDA kernels. I assume this is something similar, either bad code or a consequence of whatever evaluation shortcuts they’re taking.
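
      A rough sketch of why that snowballs (the vocab size and noise scale below are assumptions for illustration, not measurements from the real incident): tiny per-step logit noise rarely moves any one distribution much, but greedy decoding only needs one flipped argmax, and once one token differs, every later conditional is computed from a different prefix.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      vocab, steps, err = 50_000, 1_000, 1e-2  # hypothetical sizes

      flips = 0
      for _ in range(steps):
          logits = rng.normal(size=vocab)                      # one decoding step
          noisy = logits + rng.normal(scale=err, size=vocab)   # buggy-kernel error
          flips += logits.argmax() != noisy.argmax()           # did the token change?

      print(f"{flips}/{steps} steps picked a different token")
      # Even a small per-step flip rate compounds over a long generation:
      # after the first divergent token, the prefixes never match again.
      ```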

    • nialv7@lemmy.world · 6 months ago

      Given how poorly defined “think”, “reason”, and “sentience” are, any of these claims have to be based purely on vibes. OTOH, it’s also kind of hard to argue that they’re wrong.