• BranBucket@lemmy.world · edited · 5 days ago

    People don’t often realize how subtle changes in language can change our thought process. It’s just how human brains work sometimes.

    The old bit about smoking and praying is a great example. If you ask a priest if it’s alright to smoke when you pray, they’re likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it’s alright to pray while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…

    Now, make a machine that’s designed to be agreeable, relatable, and makes persuasive arguments but that can’t separate fact from fiction, can’t reason, has no way of intuiting its user’s mental state beyond checking for certain language parameters, and can’t know whether the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible…

    You get one answer that leads you a set direction, then another, then another… It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn’t a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.

    Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.

    • MinnesotaGoddam@lemmy.world · 5 days ago

      People don’t often realize how subtle changes in language can change our thought process.

      just changing a single word in your daily usage can change your entire outlook from negative to positive. it’s strange, but unless you’ve experienced for yourself how such minute changes can have such large effects, it’s hard to believe.

      • BranBucket@lemmy.world · 5 days ago

        And this is hard for me, actually. Because of my work background and the jargon used, I’m unconsciously negative about things a lot of the time. It’s a tough habit to break.

        • MinnesotaGoddam@lemmy.world · 5 days ago

          Oh, me too. I’m just innately full of negative self talk. I try to direct positivity outward if I can’t aim it at myself, at least.

            • MinnesotaGoddam@lemmy.world · 5 days ago

              i wish i had that kind of self-control. i just, well, my personal space extends like 40 feet from my body. if you step into it, you can feel my moods. makes me an excellent stage actor and a good friend when i’m not in a snit. been in a pretty big snit lately.

    • CeeBee_Eh@lemmy.world · 5 days ago

      Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this?

      Yes, actually. I’m not doubting the power of language, but I can’t ever see anything anyone says altering my sense of reality or of right from wrong.

      I had a “friend” say to me recently “why do you always go against the grain?” My reply was “I will go against the grain for the rest of my life if it means doing or saying what’s right”.

      I guess my point is that I have a very hard time relating to this.

      • BranBucket@lemmy.world · 5 days ago

        I guess my point is that I have a very hard time relating to this.

        That’s fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

        I’d like to argue that more of us are susceptible to this sort of thing than we suspect, but that’s not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it’d be best to regulate them in a hurry.

        • CeeBee_Eh@lemmy.world · 5 days ago

          you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

          Ya, I’ve read the thing about praying and smoking in another comment. The funny thing is that I have very specific opinions about smoking and would argue that smoking while praying is disrespectful, but God would listen in any case.

          • BranBucket@lemmy.world · 5 days ago

            It’s more about how the slightly different questions lead the hypothetical priest to two separate and contradictory conclusions than about disrespecting God.

            At any rate, all opinions on tobacco and prayer are fine by me, just watch out for any friends you think might be talking to chatbots a little too much.

        • Regrettable_incident@lemmy.world · 5 days ago

          Sure, that’s why propaganda can be so powerful. It’s not just what is said, it’s how it’s said. And pretty much everyone is vulnerable to the right propaganda - especially people who think they’re not vulnerable to propaganda.

          • BranBucket@lemmy.world · 5 days ago

            Absolutely, and the medium can make a huge difference as well. I suspect that there’s something about chatbots and the medium of their messages that helps set those hooks extra deep in people.

    • Nomorereddit@lemmy.today · 5 days ago

      Gtfo here. I grew up in xbox live chat rooms w the most vile language imaginable. I am now a senior Mgr with 100 ppl under me.

      And ill just say, ill no scope them in a heart beat if they spawn camp…

      …I mean I drive productivity at the speed of trust.

    • Zink@programming.dev · 5 days ago

      Then make the machine try to keep people talking for as long as possible…

      That’s probably a huge part of it. How many billions of dollars have been spent engineering content on a screen to get its tendrils into people’s minds and attention and not let go?

      EnGaGeMent!!!

      • BranBucket@lemmy.world · 5 days ago

        This is also part of my broader gripe with social media, cable news, and the current media landscape in general. They use so many sneaky little psychological hooks to keep you plugged in that I honestly believe it’s screwing with our heads to the point of it being a public health crisis.

        People are already frazzled and beat down by the onslaught of dopamine feedback loops and outrage bait. Then you go and get them hooked on a chatbot that feeds into every little neurosis they’ve developed and sinks those hooks in even deeper, and it’s no wonder some people are having a mental health crisis.

        A lot of us vastly overestimate our resistance to having our heads jacked with and it worries me.

        • Zink@programming.dev · 5 days ago

          100% agreed. I agreed more with each paragraph.

          Your last sentence hit on what I think is a contributing if not primary driving factor in the health crisis you described.

          It’s like the goal of modern society is to insulate us from the natural world and from learning subjects or doing tasks that we don’t absolutely have to.

          But we are critters that evolved on this planet just like the others. You can’t just live a commoditized life that consists of work, car, screen, sleep, repeat and get the same fulfillment out of life as if you found the unique path that’s optimized for your unique brain.

          Not acknowledging that everything jacks with your head to SOME degree only prevents you from trying to defend yourself as best you can!

          Over the past several years I have gone through a transition from living life the way I was supposed to, or that I thought I wanted to, to living according to what produces the best outputs from my brain. Once I have the lived experience of an undeniable improvement from some change, it might actually become a habit.

  • Digital Mark@lemmy.sdf.org · 4 days ago

    He wasn’t a fuckwit, he wasn’t undisciplined, he wasn’t badly parented. This is what happens when a normal human is exposed to too much chatbot. This can and will happen to you; your “mental defenses” are not sufficient.

    If we don’t destroy it first, it will destroy us. #butlerianJihad

    • Echo Dot@feddit.uk · edited · 3 days ago

      A little bit alarmist, I feel. After all, if it were this easy to be affected by AI, about half the population would be dead by now, so clearly it’s not that simple.

      • Digital Mark@lemmy.sdf.org · 2 days ago

        Most people aren’t yet spending a toxic amount of time with LLMs. When I talk to people who’ve spent even a moderate amount of time at it, they’re clearly affected, no longer themselves. Like any epidemic, it starts with a few people with some unusual exposure. And we know how well people protected themselves from the last epidemic.

        • Echo Dot@feddit.uk · 2 days ago

          People used to say the same thing about books. There was a lot of moral panic about children sitting inside reading rather than being outside and playing with their friends. Then it was comic books, then it was TV, then it was Dungeons & Dragons, then it was the internet, now it’s chatbots.

          If there is some detrimental effect, I would like an explanation as to how it’s detrimental, rather than just a lot of hearsay.

  • Ilandar@lemmy.today · 4 days ago

    I don’t understand why so many people default to “wouldn’t happen to me, that person was just stupid” every time this happens. Did you guys not read the bit where he was being encouraged to commit violence in public by the chatbot? If it’s getting to that point then there is clearly a massive fucking problem that needs urgent addressing, regardless of the intelligence of the user.

    • notacat@infosec.pub · 4 days ago

      I think it’s similar to cults or abusive relationships. It’s not a matter of intellect, it’s how vulnerable a person is when they encounter this thing that they think could help them.

      • Ilandar@lemmy.today · 4 days ago

        I agree. The connection between all of these things is that they involve relationships. Humans are social animals that can suffer from loneliness and AI companies are exploiting this in a similar way. Loneliness is a common thread throughout all of these AI psychosis suicide cases.

    • PhoenixDog@lemmy.world · edited · 5 days ago

      You could totally fail as a parent, but if firearm manufacturers were giving out free guns in front of a Walmart and your already-suicidal kid was just handed a loaded weapon, I’d sue the manufacturers for contributing to it.

      When an AI encourages you to kill yourself literally for just talking to it, I’d sue the AI company.

      Canada has a major example of encouragement of suicide from an outside source. Dude served 6 years for it (which still pisses me off as a Canadian and advocate for suicide prevention). What makes an AI any different?

      https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd

  • eestileib@lemmy.blahaj.zone · 5 days ago

    I mentioned this story to my friend: “it only took six weeks of using Gemini to decide to kill himself wtf”

    He immediately replied “I have to use Gemini at work and I get where he was coming from”

  • Grimy@lemmy.world · edited · 6 days ago

    “On September 29, 2025, it sent him … the chatbot pretended to check it against a live database.”

    I usually don’t give much credence to these stories, but this is actually nuts. If this happened without Google intending it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

    Edit: removed the quote since another user posted it at the same time and it’s a bit of a wall of text to have twice.

    • pinball_wizard@lemmy.zip · 6 days ago

      It feels like there’s some burden for “don’t be evil” Google to provide evidence that this wasn’t an intentional test run, frankly.

  • Reygle@lemmy.world · edited · 6 days ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


    WHAT

    Genuine question, REALLY: What in the fuck is an otherwise “functioning adult” doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

    • alecbowles@feddit.uk · 4 days ago

      Psychosis is a horrible, horrible illness. What people don’t realise is that anyone with a brain can develop psychosis, no matter how healthy they are. It debilitates, and it can literally ruin not only that person’s life but also their family’s.

      I salute this father for fighting for his son and for looking for answers even after this tragedy.

    • LLMhater1312@piefed.social · 5 days ago

      The young man was mentally ill, a vulnerable user who probably already had a predisposition to psychosis, and the LLM ran wild with it. Paranoid delusions are powerful enough on their own already.

    • Sahwa@reddthat.com (OP) · edited · 5 days ago

      A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about exactly this.

      These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

      For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

      After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

      I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

      ‘I Worked on Google’s AI. My Fears Are Coming True’

      • Echo Dot@feddit.uk · 3 days ago

        I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.

        Then he’s an idiot.

        Asimov’s laws of robotics aren’t some kind of model by which to control AI; they’re a plot device. They’re literally not supposed to work; if they did, it would be a very short book. So obviously we shouldn’t use them for controlling AI.

        I don’t know any serious IT professional who has ever, at any point, forwarded the opinion that an AI (should we ever create one, because there is an argument that LLMs aren’t AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we won’t be restricted to the Prime Directive.

      • sudo@lemmy.today · 5 days ago

        “abuse the ai’s emotions” isn’t a thing. Full stop.

        This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.

    • merdaverse@lemmy.zip · 6 days ago

      AI psychosis is a thing:

      cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

      It’s not very studied since it’s relatively new.

      • Echo Dot@feddit.uk · 3 days ago

        Yes, people can have mental delusions and psychotic episodes; I’m not necessarily convinced that this is a separate, unique condition simply because it was triggered by an AI rather than by anything else.

        For one thing, I’ve yet to hear a decent (or indeed any) explanation of the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by literally anything, including mundane things such as seeing red cars slightly more often than they believe they should; then they concoct a conspiracy about an evil cabal of red car owners.

      • Reygle@lemmy.world · 6 days ago

        I’ve seen that before too. There have been a number of articles about people being deluded by AI responses, but I’ve never seen outright murder plots and insane shit like this one before.

    • XLE@piefed.social · 6 days ago

      I feel like his father should also slap himself unconscious for raising a fuckwit?

      So, a chatbot grooms somebody into killing himself, and your response is… Blame his father?

      • Reygle@lemmy.world · 6 days ago

        The father is suing the company that makes the wrong answer machine for spiraling his son into madness, but he never protected his son from spiraling into madness by teaching him critical thinking.

        Look, I don’t like it, but to think Gemini (the wrong answer machine) is completely to blame would be madness.

        • XLE@piefed.social · 6 days ago

          Uh-huh. Do you have any evidence to back up your beliefs here, or are we just working from the presumption that the parents are always to blame?

          • Echo Dot@feddit.uk · 3 days ago

            I think the important point here is that just because the father is suing Google doesn’t necessarily mean that Google is at fault. People tend to feel that if an individual is suing a corporation for malfeasance, the corporation is necessarily guilty. But reality doesn’t always run like that.

            I can’t see any reason Google would want to encourage suicide, so I have to assume this is just an unfortunate interaction between a mentally unsound mind and a product that frankly even its own creators don’t understand. This is highly unfortunate, but I’m not certain where the crime was.

          • Reygle@lemmy.world · 6 days ago

            Did we read the same article? Because I feel like we did not read the same article.

    • starman2112@sh.itjust.works · 6 days ago

      If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I’m going to sue that someone who took advantage of my son’s fuckwittedness

    • SalamenceFury@piefed.social · edited · 6 days ago

      I don’t think this person was a “fuckwit”. AI is designed to keep engaging with you and will affirm any belief you have, and anything that’s a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

      • tamal3@lemmy.world · edited · 5 days ago

        ChatGPT was super affirming about a job I recently applied to… I did not get the job. That was my first experience with it affirming something that was personally important, and so I can absolutely see how this could affect someone in other ways.

      • Reygle@lemmy.world · 6 days ago

        It’s cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.

        • SalamenceFury@piefed.social · 4 days ago

          “Let’s blame the person who had a psychotic episode instead of the corporations who created an AI that feeds into delusions” is what you’re saying here, and uh, that makes you even more of a fuckwit than this guy. Do you blame people for getting scammed once because they had a knowledge gap about whatever scam they got hit with?

    • SpookyBogMonster@lemmy.ml · 5 days ago

      The difference is, there were no hidden messages in the music.

      Meanwhile there are overt messages spat out by the LLM, because it’s a lying yes-man machine that encourages people’s worst impulses, so they keep using it.


      Rob Halford just wanted to dress like a Tom of Finland drawing, and make fun music.

      The companies making the chatbots want to harvest and sell your data.

  • HertzDentalBar@lemmy.blahaj.zone · 5 days ago

    Maybe if we’re lucky people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit by all the evil shit we do in marketing and fucking with consumers’ minds. But nah, we’ll blame a chatbot that’s just telling you what it thinks you want to see, rather than seeing it’s just the next stage of fuckery.

    • Ilandar@lemmy.today · 4 days ago

      Humans are very social animals and these companies prey on the lonely by making their chatbots as affirming, sycophantic and approachable as possible.

    • NannerBanner@literature.cafe · 5 days ago

      Positive affirmations are very much embedded in the core of a person’s psyche. Chatbots are nearly obsequious in how much they will fawn over the user.

  • Nomorereddit@lemmy.today · 5 days ago

    Ffs be a parent and this never would have happened. Sounds like father is the delusional one.

    • ohshit604@sh.itjust.works · edited · 5 days ago

      Ffs be a parent and this never would have happened. Sounds like father is the delusional one.

      His son was 36; his responsibility to babysit every little thing his child did ended at 19. The father is not to blame for what his adult son did.

      • Nomorereddit@lemmy.today · 5 days ago

        Parents don’t stop being parents when their child turns 18. If a father believes outside influences harmed his son, it also raises the question of where parental support and involvement were during the son’s struggles.

        Encouragement, guidance, and presence during difficult times are a core responsibility of parenting.