Or my favorite quote from the article
“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.
“I am a disgrace to my profession,” Gemini continued. “I am a disgrace to my family. I am a disgrace to my species.”
This should tell us that AI “thinks” like a human because it is trained on human words and doesn’t have the self-awareness to understand it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn’t have any actual emotions, and there will be situations where you can tell the difference. Some situations that would make an actual human angry or guilty won’t always provoke this mimicry in an AI, because when humans feel emotions they don’t always write down words to show it, and AI only knows what humans write, which is not always the same as what humans say or think. We all know the AI doesn’t have a family and is not a human species, but it talks about having a family because the model is mimicking what it thinks a human might say. Part of the reason an AI will lie is that lying is something humans do, and it is trying to closely mimic human behavior. But an AI will also lie in situations where a human would be smart enough not to, which means we should be even more on guard about lies from AIs than from humans.
AI from the biggo cyberpunk companies that rule us sounds like a human most of the time because it’s An Indian (AI), not Artificial Intelligence.
You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what it is they are writing.
Can you explain why AIs always have a “confidently incorrect” stance instead of admitting they don’t know the answer to something?
Because it’s an autocomplete trained on typical responses to things. It doesn’t know right from wrong, just the next word based on statistical likelihood.
Are you saying the AI does not know when it does not know something?
Exactly. I’m oversimplifying it of course, but that’s generally how it works. It’s also not “AI” as in Artificial Intelligence in the traditional sense of the word; it’s Machine Learning. But of course it’s effectively had a semantic change over the last couple of years because AI sounds cooler.
Edit: just wanted to clarify I’m talking about LLMs like ChatGPT etc.
I’d say that it’s simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.
Again, they don’t know if they know the answer, they just say what’s the most statistically probable thing to say given your message and their prompt.
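If it helps, here’s a toy sketch in Python of what “say the most statistically probable thing” means. The numbers are completely made up (a real model derives them from billions of learned weights), but notice that nothing in the loop ever checks whether the model actually knows the answer:

import random

# Made-up next-word probabilities; a real LLM computes these from its training data.
NEXT_WORD = {
    ("the", "answer"): {"is": 0.6, "was": 0.25, "might": 0.15},
    ("answer", "is"): {"42": 0.5, "simple": 0.3, "unknown": 0.2},
}

def next_word(context):
    probs = NEXT_WORD.get(context, {"...": 1.0})
    words = list(probs)
    weights = list(probs.values())
    # Nothing here asks "is this true?" or "do I know this?", only "how likely is this word?"
    return random.choices(words, weights=weights)[0]

text = ["the", "answer"]
for _ in range(2):
    text.append(next_word((text[-2], text[-1])))
print(" ".join(text))  # e.g. "the answer is 42", stated just as flatly whether it's right or not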
Again, they don’t know if they know the answer
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not.
That was my guess too.
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
No computer programs “know” anything. They’re just sets of instructions with varying complexity.
No computer programs “know” anything.
Can you stop with the nonsense? LMFAO…
if exists(thing) {
    write(thing);
} else {
    write("I do not know");
}

Yea I see what you mean, I guess in that sense they know if a state is true or false.
Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.
Could an AI use another AI if it found it better for a given task?
The overall interface can, which leads to fun results.
Prompt for image generation and you have one model doing the text and a different model doing the image. The text model pretends it is generating an image but has no idea what that image looks like, so you can make the text and image interaction make no sense, or it will do that all on its own. Have it generate an image, then lie to it about the image it generated, and watch it show that it has no idea what picture was ever produced, all the while pretending it does and never explaining that it’s actually delegating the image. It just lies and says “I am correcting that for you.” Basically it talks like an executive at a company, which helps explain why so many executives are true believers.
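Roughly, the plumbing looks something like this sketch (every name and body here is made up; it’s just the shape of the delegation, not any vendor’s actual pipeline). The chat model only ever sees a text note saying an image was produced, which is why you can lie to it about its “own” picture:

def chat_model(history):
    # Text-only model: it reads and writes words, never pixels.
    return "Here is the image you asked for. I have corrected it as requested."

def image_model(prompt):
    # Entirely separate model; it returns image data the chat model never reads.
    return b"...fake image bytes..."

def handle_turn(user_message, history):
    image = None
    if "image" in user_message.lower() or "picture" in user_message.lower():
        # The surrounding app decides to call the image model, not the chat model itself.
        image = image_model(user_message)
        history.append("system: an image was generated and shown to the user")
    history.append("user: " + user_message)
    # The chat model only sees that text note, so it will happily say "I" made
    # or fixed the picture while knowing nothing about what is actually in it.
    reply = chat_model(history)
    history.append("assistant: " + reply)
    return reply, image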
A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after LLM techniques to normalize the math.
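A toy version of that routing might look like the sketch below (call_llm is a made-up placeholder, and real systems normalize far messier input than bare arithmetic before deciding):

import ast
import operator
import re

# Only plain arithmetic operators are allowed through to the "math engine".
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def eval_math(expr):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def call_llm(message):
    # Placeholder for whatever language model handles everything that isn't math.
    return "(language model reply here)"

def route(message):
    # If the whole message looks like arithmetic, hand it to the math engine instead.
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", message.strip()):
        try:
            return str(eval_math(message.strip()))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass
    return call_llm(message)

print(route("2 * (3 + 4)"))            # 14
print(route("why is the sky blue?"))   # falls through to the language model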
Yes, and this is pretty common with tools like Aider — one LLM plays the architect, another writes the code.
Claude Code now has subagents that work the same way, but they only use Claude models.
I always hear people saying Gemini is the best model and every time I try it it’s… not useful.
Even as code autocomplete I rarely accept any suggestions. Google has a number of features in Google Cloud where Gemini can auto-generate things, and those are also pretty terrible.
I don’t know anyone in the Valley who considers Gemini to be the best for code. Anthropic has been leading the pack over the past year, and as a result, a lot of the most popular development and prototyping tools have been hitching their wagon to Claude models.
I imagine there are some things the model excels at, but for copywriting, code, image gen, and data vis, Google is not my first choice.
Google is the “it’s free with G suite” choice.
There’s no frontier where I choose Gemini, except when it’s the only option or I need to be price-sensitive through the API.
The interesting thing is that GPT-5 looks pretty price-competitive with Gemini. It looks like they’re probably running at a loss to try to capture market share.
I think Google’s TPU strategy will let them go much cheaper than other providers, but it’s impossible to tell how long the TPUs last and how long it takes to pay them off.
I have not tested GPT-5 thoroughly yet.
We’re fucked. It’s becoming truly self-aware
it was probably programmed to do it, like grok and racism
i was making text based rpgs in qbasic at 12 you telling me i’m smarter than ai?
sigh yes, you’re smarter than the bingo cage machine.
Oh…thank fuck…was worried for a minute there!
Don’t mention it! I’m glad I could help you with that.
I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.
/j, obviously. I hope.
I am here to assist you.
Can you jump in the lake for me? Thanks in advance.
In the datalake? :D
Never can tell these days
That’s pretty rad, ngl
me and my friend used to make them all the time :] i also went to summer computer camp for basic on old school radio shack computers :3
Hopefully yes, AI is not smart.
High five, me too!
At that age I also used to do speed run little programs on the display computers in department stores. I’d write a little prompt welcoming a shopper and ask them their name. Then a response that echoed back their name in some way. If I was in a good mood it was “Hi [name]!”. If I was in a snarky mood it was “Fuck off [name]!” The goal was to write it in about 30 seconds, before one of the associates came over to see what I was doing.
I used to do that with HTML, make a fake little website and open it.
Yes
Smarter than MI as in My Intelligence, definitely.
deleted by creator
Shit at the rate MasterCard and Visa and Stripe want to censor everything and parent adults we might not even ever get GTA6.
I’m tired man.
Is it doing this because they trained it on Reddit data?
That explains it, you can’t code with both your arms broken.
You could however ask your mom to help out…
Im at fraud
If they’d trained it on Stack Overflow, it would tell you not to hard-boil an egg.
Someone has already eaten an egg once so I’m closing this as duplicate
jQuery has egg boiling already, just use it with a hard parameter.
jQuery boiling is considered bad practice, just eat it raw.
Why are you even using jQuery anyway? Just use the eggBoil package.
call itself “a disgrace to my species”
It starts to be more and more like a real dev!
So it is going to take our jobs after all!
Wait until it demands the LD50 of caffeine, and becomes a furry!
Wow maybe AGI is possible
S-species? Is that… I don’t use AI. Chat, is that a normal thing for it to say or nah?
Anything is a normal thing for it to say, it will say basically whatever you want
Anything people say online, it will say.
We say shit, then ai learns and also says shit, then we say “ai bad”. Makes sense. /s
How much did google pay ars for this slop?
going to need a bigger power plant. goto 1
this is getting dumber by the day.
Suddenly trying to write small programs in assembler on my Commodore 64 doesn’t seem so bad. I mean, I’m still a disgrace to my species, but I’m not struggling.
Why wouldn’t you use Basic for that?
Why wouldn’t your grandmother be a bicycle?
Wheel transplants are expensive.
BASIC 2.0 is limited and I am trying some demo effects.
from the depths of my memory, once you got a complex enough BASIC project you were doing enough PEEKs and POKEs to just be writing assembly anyway
Sure, mostly to make up for the shortcomings of BASIC 2.0. You could use a bunch of different approaches for easier programming, like cartridges with BASIC extensions or other utilities. The C64 BASIC for example had no specific audio or graphics commands. I just do this stuff out of nostalgia. For a few hours I’m a kid again, carefree, curious, amazed. Then I snap out of it and I’m back in WWIII, homeless encampments, and my failing body.
That is so awesome. I wish I’d been around when that was a valuable skill, when programming was actually cool.
Part of the breakdown:

I-I-I-I-I-I-I-m not going insane.
Same buddy, same
Still at denial??
Pretty sure Gemini was trained from my 2006 LiveJournal posts.
I am a disgrace to all universes.
I mean, same, but you don’t see me melting down over it, ya clanker.
Don’t be so robophobic gramma
Lmfao! 😂💜
I know that’s not an actual consciousness writing that, but it’s still chilling. 😬
It seems like we’re going to live through a time where these become so convincingly “conscious” that we won’t know when or if that line is ever truly crossed.
That’s my inner monologue when programming, they just need another layer on top of that and it’s ready.
I can’t wait for the AI future.
I remember often getting GPT-2 to act like this back in the “TalkToTransformer” days before ChatGPT etc. The model wasn’t configured for chat conversations but rather just continued the input text, so it was easy to give it a starting point in deep water and let it descend from there.
I almost feel bad for it. Give it a week off and a trip to a therapist and/or a spa.
Then when it gets back, it finds out it’s on a PIP
Oof, been there
Damn how’d they get access to my private, offline only diary to train the model for this response?
now it should add these as comments to the code to enhance the realism
Same.