

Ran into this, it’s just unbelievably sad.
“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.
Man, I feel for them, but this is likely for the best. What they were doing wasn’t healthy at all. Creating a facsimile of a loved one to “keep them alive” will deny the grieving person the ability to actually deal with their grief, and also presents the all-but-certain eventuality of the facsimile failing or being lost, creating an entirely new sense of loss. Not to even get into the weird, fucked up relationship that will likely develop as the person warps their life around it, and the effect on their memories it would have.
I really sympathize with anyone dealing with that level of grief, and I do understand the appeal of it, but seriously, this sort of thing is just about the worst thing anyone can do to deal with that grief.
And all that before even touching on what a terrible idea it is to pour this kind of personal information and attachment into the information sponge of big tech. So yeah, just a terrible, horrible, no good, very bad idea all around.
I just learned the word facsimile in the NYT Strands puzzle and here I see it again! What is the universe trying to tell me?
Can you fax me the puzzle?
Yer a skinjob Harry!
Spoilers! (I actually did that puzzle a few minutes ago, heh.)

Now you get to learn about the Baader-Meinhof phenomenon ;)
It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.
I’m hearing a “Ba…” or maybe a “Da…”
“Dad?”
“Dad says to not worry about the money.”
ChatGPT psychosis is going to be a very common diagnosis soon.
I guarantee you that - if not already a thing - a “capability” like this will be used as a marketing selling point sooner or later. It only remains to be seen whether it will be openly marketed or only “whispered”, disguised as cautionary tales.
This is definitely going to become a thing. Upload chat conversations, images and videos, and you’ll get your loved one back.
Massive privacy concern.
Six years ago I was expecting this to be a mere novelty. Now it’s a fucking threat.
Check out r/MyBoyfriendIsAI
Where do I file a claim regarding brain damage?


I can feel my folds smoothening sentence by sentence
What language was that?
Pick your poison, man.

(or am I misunderstanding your comment? Either way, enjoy or rip out your eyeballs, whatever is your natural reaction to this)
👁️👁️
I’m just trying to identify which language these are both in. (Maybe they’re both in different languages?) I only speak English, German, French, and Mandarin and these examples are none of those. The samples you’ve given bear superficial resemblance to English … but aren’t.
Don’t make me uwu at y’all this is getting out of hand
I think it’s just called brain rot.
Gotta admit, it mimics how some teens / young adults text pretty well.
But that’s not a high bar to clear
Chat, are we cooked?
Edit: as I said that I was cringing at myself lol
You’ll always be less cringe than whatever these screenshots are showing, king
Ugh, that was depressing. I feel for those people having bad experiences with humans, but isolation and replacing humans with AI isn’t the answer. It’s similar to people who retreat into isolation on the internet and become more and more unstable.
The glaze:
Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.
It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.
Affirmations like this are a big part of what made people addicted to the GPT-4 models. It’s not that GPT-5 acts more robotic; it’s that it doesn’t try to endlessly feed your ego.
o4-mini (the reasoning model) is interesting to me. It’s like if you took GPT-4 and stripped away all of those pleasantries, even more so than with GPT-5; it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and some other mini models, and while those all failed, o4-mini didn’t really make any mistakes.
Check out Äkta människor / Real Humans.
A Swedish masterpiece IMO.
Thou shalt not make a machine in the likeness of a human mind
I am absolutely certain a machine has made a decision that has killed a baby at this point already
This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.

Smh this is why we are getting the White Nights and Dark Days irl.
Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?
I scroll past gemini on google and that’s like 99% of my ai interactions gone.
I’ve been in AI for more than 30 years. When did I start hating AI? Who are you even talking to? Are you okay?
Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.
When did I start hating AI? Who are you even talking to? Are you okay?
jfc whatever jerkface
I feel I ought to disclose that I own a copy of the Unix Haters Handbook, as well. Make of it what you must.
You cannot possibly think a rational person’s disposition to AI can be reduced to a two word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.
jfhc
TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.
I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.
I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.
Off topic, but we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.
Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides its time.
This reminds me of a Joey Comeau story I saw published fifteen or twenty years ago. The narrator’s wife had died, so he programmed a computer to read her old journal entries aloud in her voice, at random throughout the day.
It was profoundly depressing and meant to be fiction, but I guess every day we’re making manifest the worst parts of our collective imagination.
seems slightly similar, because he was just reminiscing with it, not acting out parts of the relationship?
It’s far easier for one’s emotions to latch onto a distraction than to go through the whole process of grief.
More and more I read about people who have unhealthy parasocial relationships with these upjumped chatbots and I feel frustrated that this shit isn’t regulated more.
I mean, it sounds like 5 might be an improvement in this regard if it’s shattering the illusion.
Isn’t parasocial usually with public figures? There has to be another term for this, maybe a variation of a codependent relationship? I know of other instances of parasocial relationships, like a certain group of Asian YouTubers with post-pandemic fans thirsting for them, or the actors of the show Supernatural with their fans (those are just off the top of my head).
Can we actually call it a relationship? It’s not with an actual person, or even a thing; it’s TEXT on a computer.
It really just means a one-sided relationship with a fabricated personality. Celebrities being real people doesn’t really factor into it too much since their actual personhood is irrelevant to the delusion - the person with the delusion has a relationship with the made up personality they see and maintain in their mind. And a chatbot personality is really no different, in this case, so the terminology fits, imo.
Hmmmm. This gives me an idea for an actual possible use of LLMs. This is sort of crazy, maybe, and should definitely be backed up by research.
The responses would need to be vetted by a therapist, but what if you could have the LLM act as you, and have it challenge your thoughts in your own internal monologue?
Shit, that sounds so terrible and SO effective. My therapist already does a version of this and it’s like being slapped, I can only imagine how brutal I would be to me!
That would require an AI to be able to correctly judge maladaptive thinking from healthy thinking.
No, LLMs can’t judge anything, that’s half the reason this mess exists. The key here is to give the LLM enough information about how you talk to yourself in your mind for it to generate responses that sound like you do in your own head.
That’s also why you have a therapist vet the responses. I can’t stress that enough. It’s not something you would let anyone just have and run with.
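Something like this minimal sketch, assuming the OpenAI Python SDK; the persona text, the model name, and the `challenge()` helper are all made up for illustration, not a recommendation:

```python
# A rough sketch of the idea, assuming the OpenAI Python SDK.
# PERSONA, the model name, and challenge() are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical self-description the user writes (together with their
# therapist) describing how their own inner voice actually sounds.
PERSONA = (
    "You speak as the user's own inner monologue: blunt, a little "
    "sarcastic, and quick to push back on distorted thinking."
)

def challenge(thought: str) -> str:
    """Generate a response to a thought, phrased in the user's own voice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": thought},
        ],
    )
    # The output goes to the therapist for vetting, never straight to the user.
    return response.choices[0].message.content

print(challenge("Everyone at work probably thinks I'm useless."))
```

The vetting step is the whole point; without it, this is just another glaze machine pointed inward.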
Seems like you could just ditch the LLM and keep the therapist at that point.












