Ran into this, it’s just unbelievably sad.

“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.

  • Ech@lemmy.ca · ↑103 · 4 months ago · edited

    Man, I feel for them, but this is likely for the best. What they were doing wasn’t healthy at all. Creating a facsimile of a loved one to “keep them alive” will deny the grieving person the ability to actually deal with their grief, and also presents the all-but-certain eventuality of the facsimile failing or being lost, creating an entirely new sense of loss. Not to even get into the weird, fucked up relationship that will likely develop as the person warps their life around it, and the effect on their memories it would have.

    I really sympathize with anyone dealing with that level of grief, and I do understand the appeal of it, but seriously, this sort of thing is just about the worst thing anyone can do to deal with that grief.

    *And all that before even touching on what a terrible idea it is to pour this kind of personal information and attachment into the information sponge of big tech. So yeah, just a terrible, horrible, no good, very bad idea all around.

  • BigBenis@lemmy.world · ↑20 ↓1 · 4 months ago

    It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.

    • Tigeroovy@lemmy.ca · ↑14 · 4 months ago

      I’m hearing a “Ba…” or maybe a “Da…”

      “Dad?”

      “Dad says to not worry about the money.”

  • ideonek@piefed.social · ↑21 · 4 months ago

    I guarantee you that - if not already a thing - a “capability” like this will be used as a marketing selling point sooner or later. It only remains to be seen whether it will be openly marketed or only “whispered” about, disguised as cautionary tales.

    • magic_lobster_party@fedia.io · ↑18 · 4 months ago

      This is definitely going to become a thing. Upload chat conversations, images and videos, and you’ll get your loved one back.

      Massive privacy concern.

    • .Donuts@lemmy.world (OP) · ↑35 · 4 months ago

      Where do I file a claim regarding brain damage?

      I can feel my folds smoothing out sentence by sentence.

    • Fizz@lemmy.nz · ↑1 · 4 months ago

      Ugh, that was depressing. I feel for those people having bad experiences with humans, but isolation and replacing humans with AI isn’t the answer. It’s similar to people who retreat into isolation on the internet and become more and more unstable.

  • Snazz@lemmy.world · ↑42 · 4 months ago

    The glaze:

    Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.

    It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.

    Affirmations like this are a big part of what made people addicted to the GPT-4 models. It’s not that GPT-5 acts more robotic, it’s that it doesn’t try to endlessly feed your ego.

    • crt0o@discuss.tchncs.de · ↑4 ↓2 · 4 months ago

      o4-mini (the reasoning model) is interesting to me. It’s like GPT-4 with all of those pleasantries stripped away, even more so than GPT-5: it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and at some other mini models, and while the others all failed, o4-mini didn’t really make any mistakes.

    • dickalan@lemmy.world · ↑6 · 4 months ago

      I am absolutely certain that, at this point, a machine has already made a decision that killed a baby.

  • Jerkface (any/all)@lemmy.ca · ↑53 ↓2 · 4 months ago

    This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.
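
    If anyone wants to try the same thing, this is roughly what it looks like with the OpenAI Python SDK. A minimal sketch, assuming the openai package and an API key; the prompt wording and model name are purely illustrative choices, and a system prompt alone doesn’t guarantee compliance:

        # Minimal sketch: pin a system prompt that forbids first-person voice.
        # Prompt wording and model name are illustrative, not a recommendation.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        SYSTEM_PROMPT = (
            "Never use first person pronouns (I, me, my, we, us, our). "
            "Never speak in the voice of any person or character. "
            "Never address the user directly. "
            "Respond only in impersonal, third-person prose."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # arbitrary choice of chat model
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "Explain what an LLM is."},
            ],
        )
        print(response.choices[0].message.content)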

    • Canaconda@lemmy.ca · ↑12 · 4 months ago · edited

      Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?

      I scroll past Gemini on Google and that’s like 99% of my AI interactions gone.

      • Jerkface (any/all)@lemmy.ca · ↑6 ↓11 · 4 months ago · edited

        I’ve been in AI for more than 30 years. When did I start hating AI? Who are you even talking to? Are you okay?

        • Canaconda@lemmy.ca · ↑15 ↓2 · 4 months ago · edited

          Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.

          When did I start hating AI? Who are you even talking to? Are you okay?

          jfc whatever jerkface

          • Jerkface (any/all)@lemmy.ca · ↑7 ↓1 · 4 months ago

            I feel I ought to disclose that I own a copy of the Unix-Haters Handbook, as well. Make of it what you must.

            You cannot possibly think a rational person’s disposition to AI can be reduced to a two-word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.

            jfhc

            • Canaconda@lemmy.ca · ↑11 ↓2 · 4 months ago · edited

              TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.

              I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.

              I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.

              Off topic, but we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big-picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.

              Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides its time.

  • tmyakal@infosec.pub · ↑7 · 4 months ago

    This reminds me of a Joey Comeau story I saw published fifteen or twenty years ago. The narrator’s wife had died, so he programmed a computer to read her old journal entries aloud in her voice, at random throughout the day.

    It was profoundly depressing and meant to be fiction, but I guess every day we’re making manifest the worst parts of our collective imagination.

  • july@leminal.space · ↑16 · 4 months ago

    It’s far easier for one’s emotions to latch onto a distraction than to go through the whole process of grief.

  • Furbag@lemmy.world · ↑19 · 4 months ago

    More and more I read about people who have unhealthy parasocial relationships with these upjumped chatbots and I feel frustrated that this shit isn’t regulated more.

    • Tollana1234567@lemmy.today · ↑2 ↓1 · 4 months ago

      Isn’t “parasocial” usually about public figures? There has to be another term for this, maybe a variation of a codependent relationship? I know of other instances of parasocial relationships, like a certain group of Asian YouTubers with post-pandemic fans thirsting for them, or the actors from the show Supernatural and their fans (those are just off the top of my head).

      Can we actually call it a relationship? It’s not with an actual person, or even a thing; it’s TEXT on a computer.

      • Ech@lemmy.ca · ↑1 ↓1 · 4 months ago

        It really just means a one-sided relationship with a fabricated personality. Celebrities being real people doesn’t really factor into it too much since their actual personhood is irrelevant to the delusion - the person with the delusion has a relationship with the made up personality they see and maintain in their mind. And a chatbot personality is really no different, in this case, so the terminology fits, imo.

  • peoplebeproblems@midwest.social · ↑17 ↓3 · 4 months ago

    Hmmmm. This gives me an idea for an actual possible use of LLMs. This is sort of crazy, maybe, and should definitely be backed up by research.

    The responses would need to be vetted by a therapist, but what if you could have the LLM act as you, and have it challenge your thoughts in your own internal monologue?

    • JoeBigelow@lemmy.ca · ↑17 · 4 months ago

      Shit, that sounds so terrible and SO effective. My therapist already does a version of this and it’s like being slapped, I can only imagine how brutal I would be to me!

      • peoplebeproblems@midwest.social · ↑11 ↓1 · 4 months ago

        No, LLMs can’t judge anything, that’s half the reason this mess exists. The key here is to give the LLM enough information about how you talk to yourself in your mind for it to generate responses that sound like you do in your own head.

        That’s also why you have a therapist vet the responses. I can’t stress that enough. It’s not something you would let anyone just have and run with.
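
        Mechanically it wouldn’t take much. A rough sketch of the idea with the OpenAI Python SDK; the self-description, prompt wording, and model choice are all made up for illustration, and the therapist review is exactly the part software can’t provide:

            # Sketch of the "LLM as your own inner voice" idea described above.
            # All names and prompts here are hypothetical; every output would
            # still need to be vetted by a therapist before reaching the user.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            # Hypothetical notes on how the user talks to themselves.
            self_portrait = (
                "- Short, clipped sentences when stressed.\n"
                "- 'What if it all goes wrong' spirals.\n"
                "- Catastrophizes deadlines, minimizes past successes.\n"
            )

            SYSTEM_PROMPT = (
                "You mirror the user's internal monologue, described below. "
                "First respond the way they talk to themselves, then challenge "
                "the thought: name the distortion and restate it more fairly.\n"
                "Self-description:\n" + self_portrait
            )

            response = client.chat.completions.create(
                model="gpt-4o-mini",  # arbitrary choice of chat model
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": "I'm going to blow this presentation."},
                ],
            )
            # A therapist reviews this output before it reaches the user.
            print(response.choices[0].message.content)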