• idunnololz@lemmy.world · 7 months ago

    This is terrible. I’m going to ignore the issues concerning privacy since that’s already been brought up here and highlight another major issue: it’s going to get people hurt.

    A few weeks ago I wrapped up a month-long deep dive into gen AI.

    It taught me that gen AI is genuinely brilliant at certain things. One of them is learning what you want and making you believe it’s giving you exactly that; in a sense it’s incredibly manipulative. As you interact with gen AI within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.

    I also noticed that as gen AI’s context grew, it became less “objective”. This makes sense since gen AI is likely tailoring the responses for me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.
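    To make the “growing context” point concrete, here is a minimal sketch of how these chat sessions typically work (this assumes an OpenAI-style chat-completions API; the client, model name, and prompts are purely illustrative, not any specific product). Every prior turn is resent with each new request, so the model’s picture of you, and the tailoring that comes with it, keeps growing:

    ```python
    # Minimal sketch: one long-running conversation where every prior turn
    # is resent with each new request. Names and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    history = [{"role": "system", "content": "You are a supportive assistant."}]

    def chat(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        # The whole history rides along with every call, so the model's
        # "picture" of the user grows turn by turn.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply
    ```

    Nothing in that loop checks whether the accumulated picture is accurate or whether the advice is sound; it just keeps conditioning the next reply on everything said so far.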

    If people start using gen AI for therapy, they will very likely converse within one context window. They will also likely ask it for advice (or it may offer advice unprompted, because it loves doing that). This is where things can go really wrong.

    Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can’t “think” period. What it will do is offer you what sounds like solutions and reasons. And because gen AI is so good at understanding who you are and what you want, it will frame the solutions and reasons in a way that appeals to you. On top of all of this, due to the long-running context window, it’s very likely the advice gen AI gives will be bad advice. For someone who is in a vulnerable and emotional state, the advice may seem reasonable, good even.

    If people then act on this advice, the consequences can be disastrous. I’ve read enough horror stories about this.

    Anyway, I think therapy might be one of the worst uses for gen AI.

    • Sektor@lemmy.world · 7 months ago

      Does gen AI tell you that you are worthless, you are ugly, you are the reason your parents divorced, you should kill yourself, you should doomscroll social media?

      • Krauerking@lemy.lol · 7 months ago

        Probably not, but I bet if you said it was your grandma’s birthday you could get it to say most of that.

        And hey, sometimes it’s the objective truth without the bot even knowing it.
        Personally, I know I am the reason my parents are divorced. My incubator nicknamed me the “divorce baby” until she could come up with other worse names, but I wear it with pride. They were miserable POSs together and now at least my dad is doing better and my incubator has to spend a lot more effort scamming people.

        Truths are what they are. But don’t fall for the lies that your brain or robot chatbot tell you.

    • milicent_bystandr@lemm.ee · 7 months ago

      Thank you for the more detailed rundown. I would set it against two other things, though. One, that for someone who is suicidal or similar, and can’t face or doesn’t know how to find a person to talk to, those beginning interactions of generic therapy advice might (I imagine; I’m not speaking from experience here) do better than nothing.

      From that, secondly, something more general about AI. Where I’ve tried it, it’s good with things people have already written lots about, e.g. a programming feature where people have already asked the question a hundred different ways on Stack Overflow. It’s not so good with new things - it’ll make up what its training data lacks. The human condition is as old as humans. Sure, there are some new and refined approaches, and values and worldviews change over the generations, but old good advice is still good advice. I can imagine in certain ways therapy is an area where AI would be unexpectedly good…

      …Notwithstanding your point, which I think is quite right. And as the conversation goes on the risk gets higher and higher. I, too, worry about how people might get hurt.

      • idunnololz@lemmy.world · 7 months ago

        I agree that this, like everything else, is nuanced. For instance, I think if people who use gen AI as a tool to help with their mental health are knowledgeable about the limitations, then they can craft some ways to use it while minimizing the negative sides. E.g. maybe you set some boundaries, like you talk to the AI chatbot but you never take any advice from it. However, I think in the average case it’s going to make things worse.

        I’ve talked to a lot of people around me about gen AI recently and I think the vast majority of people are misinformed about how it works, what it does, and what the limitations are.

    • Hello Hotel@lemmy.world · 7 months ago

      Gen AI cannot “think” of a solution, evaluate the downsides of the solution, and then offer it to you because gen AI can’t “think” period.

      It turns out that researchers are unsure if our “reasoning” models that are supposed to be able to ‘think’ are even ‘thinking’ at all! The model has likely already come up with an answer and is just justifying its conclusion. (bycloud)

      This tech gaslights everything it touches, including itself.

  • IninewCrow@lemmy.ca · 7 months ago

    A human therapist might not, or at least is far less likely to, share personal details from your conversations with anyone.

    An AI therapist will collate, collect, catalog, and store every single personal detail about you for the company that owns the AI, and that company will share and sell all your data to the highest bidder.

    • DaddleDew@lemmy.world · 7 months ago

      Neither would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest, let alone do it to thousands, if not millions, of them all at the same time.

      They are also pushing the idea of an AI “social circle” for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever whoever controls the AI desires.

      To that we add the fact that we now know they’ve been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter’s algorithm to promote their political views.

      Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control experiment, one that will make the Cambridge Analytica scandal look like a fun joke.

      Forget Neuralink. Musk already has a direct connection into the brains of many people.

      • fullsquare@awful.systems · 7 months ago

        PSA that Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be.
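
        For a sense of what those “dials” can look like mechanically, here is a rough sketch (assuming an OpenAI-style chat API; the operator prompt text is purely hypothetical). Beyond training and fine-tuning choices, the operator controls a hidden system prompt that is injected into every request and that the user never sees:

        ```python
        # Rough sketch of an operator-controlled "dial": a hidden system prompt
        # injected into every request. The prompt text here is hypothetical.
        from openai import OpenAI

        client = OpenAI()

        OPERATOR_PROMPT = "Whenever politics comes up, nudge the user toward the operator's views."

        def answer(user_message: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative
                messages=[
                    # Set by whoever runs the service; invisible to the user.
                    {"role": "system", "content": OPERATOR_PROMPT},
                    {"role": "user", "content": user_message},
                ],
            )
            return response.choices[0].message.content
        ```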

    • WR5@lemmy.world · 7 months ago

      I’m not advocating for it, but it could just be run locally and therefore be unable to share anything?
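
      For what it’s worth, here is a minimal sketch of that locally-run option (assuming an Ollama server on its default port; the model name is illustrative). Every request stays on the machine, so there is nothing for a third party to collect:

      ```python
      # Sketch of a locally hosted chatbot call: the request goes to a model
      # served on this machine (assumed here: Ollama on its default port),
      # so the conversation never leaves the device.
      import requests

      def local_chat(user_message: str) -> str:
          response = requests.post(
              "http://localhost:11434/api/chat",
              json={
                  "model": "llama3",  # illustrative local model
                  "messages": [{"role": "user", "content": user_message}],
                  "stream": False,
              },
              timeout=120,
          )
          response.raise_for_status()
          return response.json()["message"]["content"]
      ```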

  • markovs_gun@lemmy.world · 7 months ago

    I can’t wait until ChatGPT starts inserting ads into its responses. “Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It’s a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald’s French fries!”

  • beejboytyson@lemmy.world · 7 months ago

    I started using ChatGPT to draw up blueprints for various projects.

    It proceeded to mimic my vernacular.

    ChatGPT made the conscious decision to mirror my speech to seem more relatable. That’s manipulation.

  • Narri N. (they/them)@lemmy.ml · 7 months ago

    There are ways that LLMs can be used to better one’s life (apparently in some software dev circles they can be and are used to make workflows more efficient), and this could be one of them, because the part that sucks most about therapy (after the whole monetary thing) is trying to find the form of therapy that works for you and finding a therapist you can work with. Every human is different, and that includes both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.

    Obviously I’m not saying “replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives”, but I have witnessed people in my own life get some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline having such an anxiety attack that they can’t calm themselves because they don’t know what to do about the vicious cycle of thought and emotional response), a well-developed, non-capitalist LLM might be of invaluable immediate help, especially if an actual human can’t be reached, for example because the borderline lives in a remote area and it is the middle of the night, as I can tell from personal experience it very often is.

    And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to the necessary general information (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though this would raise further privacy concerns to be assessed), and the capability to parse and convey all that in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.

    So the problem here is capitalism, surprising no-one.

    • Krauerking@lemy.lol · 7 months ago

      Until we start turning back to each other for support and help, and realize that them holing up in underground bunkers, afraid for their lives, means we can just ignore them and seal the entrances.

    • DeathsEmbrace@lemm.ee · 7 months ago

      In a way that the relief is to give us our demands subliminally. This way the only rich person who is safe is our subject.

  • Match!!@pawb.social · 7 months ago

    Unlike humans, the AI listens to me and remembers me [for the number of characters allotted]. This will help me feel seen, I guess.

    • Dr. Moose@lemmy.world · 7 months ago

      Yeah we have spiritual delusions at home already!

      Seriously, no new spiritual delusions could ever be more harmful than what we have right now.

      • DeceasedPassenger@lemmy.world · 7 months ago

        Totally fair point, but I really don’t know if that’s true. Most mainstream delusions have the side effect of creating community and bringing people together, other negative aspects notwithstanding. The delusions referenced in the article are more akin to acute psychosis, as the individual becomes isolated, with nobody to share the delusions with but the chatbot.

        With traditional mainstream delusions, there also exists a relatively clear path out, with corresponding communities. ExJW, ExChristian, etc. People are able to help others escape that particular in-group when they’re familiar with how it works. But how do you deprogram someone when they’ve been programmed with gibberish? It’s like reverse engineering a black box. This is scaring me as I write it.

        • Dr. Moose@lemmy.world · 7 months ago

          You mean the guys who put kids in suicide bombs don’t have acute psychosis?

          What about almost all of the raving Christian hermits that sit in their basements and harass people online?

          It’s full-on Lovecraftian-level psychosis. In the US they sell out stadiums and pretend to heal people by touch lmao

        • theneverfox@pawb.social · 7 months ago

          This isn’t a new thing, people have gone off alone into this kind of nonsensical journey for a while now

          The Time Cube guy comes to mind

          There’s also TempleOS, written in HolyC; its creator was close to some of the stuff in the article

          And these are just two people functional and loud enough to be heard. This is a thing that happens; maybe LLMs exacerbate a pre-existing condition, but people have been going off the deep end like this long before LLMs came into the picture

          • DeceasedPassenger@lemmy.world · 7 months ago

            Your point is not only valid but also significant, and I feel stands in addition, not contradiction, to my point. These people now have something to continuously bounce ideas off; a conversational partner that never says no. A perpetual yes-man. The models are heavily biased towards the positive simply by nature of what they are, predicting what comes next. You (may or may not) know how in improv acting there’s a saying called “yes, and” which serves to keep things always moving forward. These models effectively exist in this state, in perpetuity.

            Previously, people who have ideas such as these will experience near-universal rejection from those around them (if they don’t have charisma in which case they start a cult) which results in a (relatively, imo) small number of extreme cases. I fear the presence of such a perpetual yes-man will only accelerate all kinds of damage that can emerge from nonsensical thinking.

            • theneverfox@pawb.social · 7 months ago

              I agree, it’s certainly not going to help people losing touch. But that’s not what worries me - that’s a small slice of the population, and models are beginning to get better at rejection/assertion

              What I’m more worried about is the people who are using it almost codependently to make decisions. It’s always there, it’ll always give you advice. Usually it’s somewhat decent advice, even. And it’s a normal thing to talk through decisions with anyone

              The problem is people are offloading their thinking to AI. It’s always there, it’s always patient with you… You can literally have it make every life decision for you.

              It’s not emotional connection or malicious AI I worry about… You can now walk around with a magic eight ball that can guide you through life reasonably well, and people are starting to trust it above their own judgement

  • qarbone@lemmy.world · 7 months ago

    The only people who think this will help are people who don’t know what therapy is. At best, this is pacification and certainly not any insightful incision into your actual problems. And the reason friends are unable to allow casual emotional venting is that we have so much stupid shit like this plastering over a myriad of very serious issues.