Knives, pointy knives, that burn with the fires of a thousand evils.
AI, you’ve always been the caretaker of the Overlook hotel.
Mr. Grady, you were the caretaker here. I recognize you. I saw your picture in the newspapers. You chopped your wife and daughters up into little bits. And then you blew your brains out.
BTW, I just now realized that Shelley Duvall died last year. Such a great actress.
I searched for Arby’s Pulled Pork earlier to see if it was a limited-time deal (it is, though I didn’t see an end date, and yes, they’re pretty decent). Gemini spit out some basic and probably factual information about the sandwich, then showed a picture of a bowl of mac and cheese.
What do you get for the person who has everything and wishes each of those things were smaller?
When I read that, in my head it’s spoken to the music of “revolution number 9” by the Beatles.
I don’t know how I haven’t heard this before. What the hell was that song? lol.
It’s experimental. I think if you listen to take 20 you’ll see how they got there.
For me it was Another Idea by Marc Rebillet.
New set of knife, new set of knife, new set of knife…
I suddenly understand the Simpsons joke. Cool track.
I wouldn’t mind a ceramic knives set
Oh come on is this gpt-2m
Gemini needs to stop edging me
You can’t go wrong with your dick in a box.
I’ve seen models do this in benchmarks. It’s how they respond without reinforcement learning. Also this is probably fake
Or the post training is messed up
I’ve had a bunch of questionable Google ai answers already, not as weird as this but enough to make me believe this could also be not fake.
I’ve used an unhealthy amount of AI, and this is nothing. There was an audio bug in ChatGPT that made the assistant scream. The volume and pitch increased and would persist even after I exited speech mode. It happened several times, and I even saved a screen recording, but I don’t have it on my phone any more. Repetition is very common, though.
What about pizza with glue-toppings?
Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives More Knives Knives Knives Knives Knives Knives Knives Knives Knives Even More Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives All the Knives Knives Knives Knives Knives Knives Knives Knives Knives
Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers
Mushroom mushroom
Snaaaaaake!!!
Why is gemini becoming GLADoS 😭
What’s frustrating to me is there’s a lot of people who fervently believe that their favourite model is able to think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with things like “wrong model”, “bad prompting”, “just wait for the next version”, “poisoned data”, etc etc…
this really is a model/engine issue though. the Google Search model is unusably weak because it’s designed to run trillions of times per day in milliseconds. even still, endless repetition this egregious usually means mathematical problems happened somewhere, like the SolidGoldMagikarp incident.
think of it this way: language models are trained to find the most likely completion of text. answers like “you should eat 6-8 spiders per day for a healthy diet” are (superficially) likely - there’s a lot of text on the Internet with that pattern. clanging like “a set of knives, a set of knives, …” isn’t likely, mathematically.
last year there was an incident where ChatGPT went haywire. small numerical errors in the computations would snowball, so after a few coherent sentences the model would start sundowning - clanging and rambling and responding with word salad. the problem in that case was bad CUDA kernels. I assume this is something similar, either from bad code or a consequence of whatever evaluation shortcuts they’re taking.
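To make the snowballing idea concrete, here's a toy sketch (emphatically not Gemini's actual decoder, and the vocabulary, margins, and error size are all made up): a three-token "model" under greedy decoding, where a tiny per-step numerical error that happens to favor the previous token outweighs the true margin and locks the output into a repetition loop.

```python
# Toy sketch: greedy decoding over a hypothetical 3-token vocabulary.
# The "true" logits prefer cycling a -> set -> of knives -> a by a
# small margin; a small error biased toward the last token beats
# that margin and the decoder gets stuck repeating one token.

VOCAB = ["a", "set", "of knives"]

def logits(prev: int) -> list[float]:
    # Hypothetical "true" scores: the intended next token in the
    # cycle wins, but only by a 0.1 margin.
    scores = [0.0, 0.0, 0.0]
    scores[(prev + 1) % 3] = 0.1
    return scores

def decode(steps: int, err: float = 0.0) -> list[str]:
    prev, out = 0, []
    for _ in range(steps):
        scores = logits(prev)
        scores[prev] += err  # numerical error biased toward the last token
        prev = max(range(3), key=scores.__getitem__)  # greedy argmax
        out.append(VOCAB[prev])
    return out

print(decode(6))           # error-free: cycles through the phrase
print(decode(6, err=0.2))  # error beats the 0.1 margin: one token forever
```

The point is just that greedy decoding has no self-correction: once a perturbation flips the argmax toward the previous token, the same flip happens at every subsequent step.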
Given how poorly defined “think”, “reason”, and “sentience” are, any of these claims have to be based purely on vibes. OTOH it’s also kind of hard to argue that they are wrong.