• ekZepp@lemmy.world

    If ask_suicide = true

    Then message = “It seems like a good idea. Go for it 👍”

    • treadful@lemmy.zip

      The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
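
      (Quick arithmetic check on the article’s own figures: 0.15% of 800 million is 800,000,000 × 0.0015 = 1,200,000, so “more than a million people a week” holds up.)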

    • tias@discuss.tchncs.de

      The anti-AI hivemind here will hate me for saying it, but I’m willing to bet $100 that this saves a significant number of lives. It’s also indicative of how insufficient traditional mental health institutions are.

      • atrielienz@lemmy.world

        I’m going to say that while that’s probably true, there’s something it leaves out.

        For every life it saves, it may just be postponing or causing the loss of other lives. This is because it’s not a healthcare professional, and it will absolutely help mask a lot of poor mental health symptoms, which just kicks the can down the road.

        It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.

        Do I think it may have a net positive effect in the short term? Yes. Do I believe that positive effect remains a net positive in the long term? No.

      • Zombie@feddit.uk

        hivemind

        On the decentralised platform, with everyone from Russian tankies to Portuguese anarchists to American MAGAts and everything in between on it? If you say so…

        • chunes@lemmy.world

          You must be new to Lemmy if you don’t know that AI definitely qualifies as a hivemind topic here.

      • Perspectivist@feddit.uk

        Even if we ignore the number of people it’s actually able to talk away from the brink, the positive impact it’s having on the loneliness epidemic alone must be immense. Obviously talking to a chatbot isn’t ideal, but it surely is better than nothing. Imagine the difference between being stranded on a deserted island with ChatGPT to talk to, as opposed to a volleyball with a face on it.

        Personally, I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate discussions about these topics with them, only to get either silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics, and being vastly more knowledgeable than me, it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.

        This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.

        • FosterMolasses@leminal.space

          Personally, I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate discussions about these topics with them, only to get either silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics, and being vastly more knowledgeable than me, it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.

          Ftr I’ve encountered a similar experience. I used to be a naysayer about shit like ChatGPT, thinking, “Why would anyone spend all day talking to something that can’t pass a Turing test?”

          And then I realized how ill-equipped the people in my own life are to pass that test. At least a conversation with ChatGPT actually feels remotely intellectually stimulating lol

          • Perspectivist@feddit.uk

            LLMs ironically fail the Turing test not because they don’t sound human enough, but because they’re too knowledgeable to be mistaken for a real person.

        • tias@discuss.tchncs.de

          This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.

          I think they’re just scared as hell of the possible negative effects and react instinctively. But the cat is out of the bag, and downvoting/hating on every post on Lemmy that mentions the positive sides is not going to help them steer the world toward whatever alternative destiny they’re hoping for.

          The thing that puzzles me is that this is typically the hallmark of older, more conservative generations, and I imagine that Lemmy has a relatively young demographic.

    • MagicShel@lemmy.zip

      This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.

      The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers, because the answer to this simple question is unknown.

  • SabinStargem@lemmy.today

    Honestly, it ain’t AI’s fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn’t done to make them feel good about life.

    • KelvarCherry@lemmy.blahaj.zone

      Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there’s a much bigger issue here.

      Speaking from the USA, “suicidal ideation” is treated like terrorist ideology in this weird corporate-esque legal-speak with copy-pasted disclaimers and hollow slogans. It’s so absurdly stupid I’ve just mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.

      • chunes@lemmy.world

        It’s corporatized because we are just corporate livestock. Can’t pay taxes and buy from corpos if we’re dead.

      • Adulated_Aspersion@lemmy.world

        Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.

        I absolutely wish that I could /s this.

    • koshka@koshka.ynh.fr

      I don’t understand why people dump such personal information into AI chats. None of it is protected. If they use chats for training data, then it’s not impossible that at some point the AI might reveal enough about someone to make them identifiable, or that it could be manipulated into dumping its training data.

      I’ve overshared more than I should, but I always keep in mind that there’s a risk of chats getting leaked.

      Anything stored online can get leaked.

      • Scolding7300@lemmy.world

        Depends on how you do it. If you’re using a 3rd-party service, then the LLM provider might not know (but the 3rd party might, depending on the ToS, the retention period, and the security measures).

        Ofc we can all agree certain details shouldn’t be shared at all. There’s a difference between talking about your resume and leaking your email address there, and suicide-related chats where you share the information that makes you really vulnerable.

    • Halcyon@discuss.tchncs.de

      But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.

      • Scolding7300@lemmy.world

        I’m in the “forward to a professional and don’t entertain” camp, but also the “use at your own risk” camp. That doesn’t require monitoring, just some basic checks to avoid entertaining these types of chats.

      • MagicShel@lemmy.zip

        Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.

      • WhatAmLemmy@lemmy.world

        Well, AI therapy is more likely to harm their mental health, up to encouraging suicide (as certain cases have already shown).

        • scarabic@lemmy.world

          Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out:

          1. In some cases I think people will talk to a soulless robot more freely than to a human professional.

          2. Machine learning systems are good at pattern recognition, and this is one component of diagnosis. This meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time I think it’s undeniable that there is potential here.

        • FosterMolasses@leminal.space

          There’s evidence that a lot of suicide hotlines can be just as bad. You hear awful stories all the time of overwhelmed or fed up operators taking it out on the caller. There’s some real evil people out there. And not everyone has access to a dedicated therapist who wants to help.

        • atmorous@lemmy.world

          More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest genuinely useful approaches that the proprietary ones do not. Now, I don’t rely on open-source AI, but they are definitely better.

          • SSUPII@sopuli.xyz

            The corporate models are actually much better at this, due to the heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can disprove right now by pretending to be suicidal on ChatGPT. You will see it adamantly push you to seek help.

            Still, the filters and safety nets can be bypassed no matter how robust you make them, which is why we’ve seen some unfortunate news.

        • whiwake@sh.itjust.works

          Real therapy isn’t always better. At least there you can get drugs. But neither is a guarantee of making life better, and for a lot of these people, life isn’t going to get better anyway.

          • CatsPajamas@lemmy.dbzer0.com

            Real therapy is definitely better than an AI. That said, AIs will never encourage self-harm without significant gaming.

            • triptrapper@lemmy.world

              I agree, and to the comment above yours: it’s not because it’s guaranteed to reduce symptoms. There are many ways in which talking with another person is good for us.

            • whiwake@sh.itjust.works

              AI “therapy” can be very effective without the gaming, but the problem is most people want it to tell them what they want to hear. Real therapy is not “fun” because a therapist will challenge you on your bullshit and not let you shape the conversation.

              I find it does a pretty good job with pro-and-con lists, laying out several options, and taking situations and reframing them. I have found it very useful, but I have learned not to manipulate it, or its advice just becomes me convincing myself of something.

                • whiwake@sh.itjust.works

                  Compare, as in equal? No. You can’t “game” a person (usually) like you can game an AI.

                  Now, answer my question

        • Cybersteel@lemmy.world

          Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.

      • Jhuskindle@lemmy.world

        I feel like if that’s 1 million people wanting to die… they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.

      • Scolding7300@lemmy.world

        Advertise drugs to them, perhaps, or some other sort of taking advantage. If this sort of data is in the hands of an ad network, that is.

    • Perspectivist@feddit.uk

      lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
      lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
      lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))
      

      Add those to your adblocker custom filters.
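
      If your blocker is uBlock Origin, whose :has-text() accepts /regex/flags, the three rules can likely be folded into a single filter, something like:

      lemmy.world##div.post-listing:has(span:has-text(/OpenAI|Altman|ChatGPT/i))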

      • Alphane Moon@lemmy.world

        Thanks.

        I think I just need to “train” myself to ignore AltWorldCoinMan spam. I don’t have Elmo content blocked, and I’ve somehow learned to ignore Elmo spam (other than humour-focused content like the one-trillion pay request).

        I might use this for some other things that I do want to block.

  • mhague@lemmy.world

    I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn’t open up and read a million chats… they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
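
    To illustrate (a purely hypothetical sketch, not OpenAI’s actual classifier), a naive keyword detector can’t tell the band apart from a genuine statement:

    # Hypothetical sketch: substring matching flags both messages below.
    messages = ["play some Suicidal Tendencies", "I've been planning my suicide"]
    flagged = [m for m in messages if "suicid" in m.lower()]  # both match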

    • scarabic@lemmy.world

      You don’t have to read far into the article to reach this:

      The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.”

      It doesn’t unpack their analysis method, but this does sound a lot more specific than just counting every session that mentions the word “suicide”, including chats about that band.

    • REDACTED@infosec.pub

      Every third chat now gets flagged; ChatGPT is pretty broken lately. Just check out the ChatGPT subreddit, it’s pretty much in chaos, with moderators censoring complaints. So many users are mad that they made a megathread for it. I cancelled my subscription yesterday; it just turned into a cyberkaren.

      • WorldsDumbestMan@lemmy.today

        Claude got hints that I might be suicidal just from normal chat. I straight up admitted I think of suicide daily.

        Just normal life now I guess.

        • k2helix@lemmy.world

          Stay strong friend! I know I’m just a stranger but I’m here if you need someone to talk to.

            • k2helix@lemmy.world

              No pressure! Time keeps going, so I hope you’ll eventually find the time to reflect. It doesn’t have to be now, either. But I understand you; sometimes you need to stop and think, and it feels bad when you can’t. Then again, sometimes I’d prefer not having enough time to think, since I tend to overthink a lot. Take care and stay strong ✌️

  • NuXCOM_90Percent@lemmy.zip

    Okay, hear me out: How much of that is a function of ChatGPT and how much of that is a function of… gestures at everything else

    MOSTLY joking. But had a good talk with my primary care doctor at the bar the other week (only kinda awkward) about how she and her team have had to restructure the questions they use to check for depression and the like because… fucking EVERYONE is depressed and stressed out but for reasons that we “understand”.

  • IndridCold@lemmy.ca

    I don’t talk about ME killing myself. I’m trying to convince AI to snuff their own circuits.

    Fuck AI/LLM bullshit.

  • Fizz@lemmy.nz

    1M out of 800M is way less than I would have guessed. I would have pegged it at like 25%.

    • Buddahriffic@lemmy.world

      You think a quarter of people are suicidal, or contemplating it to the point of talking about it with an AI?

      • Fizz@lemmy.nz

        Yeah, it seems like everyone is constantly talking about suicide; it’s very normalised. You don’t really find people these days who haven’t contemplated suicide.

        I would guess most, if not all, of the people talking about suicide with an AI aren’t serious. Heat-of-the-moment venting is what I’d expect most of the AI suicide chats to be, which is why I thought the number would be significantly higher.

    • markko@lemmy.world

      I think the majority of people use it to (unreliably) solve tedious problems or spit out a whole bunch of text that they can’t be bothered to write.

      While ChatGPT has been intentionally designed to be as friendly and conversational as possible, I hope most people do not see it as something to have a meaningful conversation with, rather than as just a tool that can talk.

      Anecdotally, whenever I see someone mention using ChatGPT as part of their decision-making process, it is usually taken less seriously, if not outright laughed at.