Ashley Judd looks nothing like Angelina Jolie.
In short: BONK
It probably thought you were Elon Musk.
A statistical model predicted that “in heat”, with no upper-case H and no quotes, was more likely to refer to the biological condition. Don’t get me wrong: I think these things are dumb, but that was a fully predictable result. (‘…the movie “Heat”’ would probably get you there.)
While I get your point of the capital H thing, Google’s AI itself decided to put “heat” in quotes all on its own…
I tried the search myself and the non-AI results that aren’t this Bluesky post are pretty useless, but at least they’re useless without using two small towns’ worth of electricity
Non-AI results are generally not going to include sites explaining that something isn’t true unless it’s a common misconception.
google strips capitalization from searches
Slut
They love it.
As a comparison I ran the same all-lowercase query in Bing and got the answer about the movie, because asking about a movie is statistically more likely than asking if a human is in heat. Google’s AI is worse than fucking Bing, while Google’s old search algorithm consistently had the right answers.
Google made itself worse by replacing a working system with ai.
It might be the way Bing is tokenizing and/or how far back it’s looking to connect things when compared to Google.
Kagi quick answers for comparison gets this tweet, but now it thinks that heat is not the movie kind lol
The AI ouroboros in action
It’s hilarious: I got the same results with Charlize Theron and the exact same movie. I guess neither of us knows who actresses are, apparently.
Heat is an excellent movie, and one of my top five. Coincidentally, I just watched it last night. For a film released in 1995, it has aged well. OOP is in the ballpark, too - a young Natalie Portman is in it, not Jolie.
Yeah it’s a movie that nails “then suddenly… all hell breaks loose.”
Why is the search query in the top and bottom different?
Google’s correction of the query doesn’t carry over to the tab name; this genuinely happens.
It’s not helpful for OOP since they’re on iOS, but there’s a Firefox extension that works on desktop and Android that hides the AI overview in searches: https://addons.mozilla.org/en-US/android/addon/hide-google-ai-overviews/
This is why no one can find anything on Google anymore, they don’t know how to google shit.
How can she be fertile if her ovaries are removed?
Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.
A sentence saying she had her ovaries removed and one saying she is fertile don’t statistically belong together, so you’re not even getting that.
You think that because you understand the meaning of words. An LLM doesn’t. It uses math, and math doesn’t care that it’s contradictory; it cares that those words individually usually came next in its training data.
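A toy sketch of that point, using a word-level “most likely next word” table as a crude stand-in for what an LLM does at scale (the training sentences here are invented for illustration):

```python
from collections import Counter, defaultdict

# Tiny made-up "training data"; a real model sees trillions of words.
training = [
    "she had her ovaries removed",
    "she is no longer fertile",
    "dogs in heat are fertile",
]

# For each word, count which word followed it.
nxt = defaultdict(Counter)
for sentence in training:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training data."""
    return nxt[word].most_common(1)[0][0]

# The table has no notion that "ovaries removed" contradicts "fertile";
# it only knows which word most often came next.
print(predict("is"))  # -> "no" (from "she is no longer fertile")
print(predict("in"))  # -> "heat" (from "dogs in heat are fertile")
```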
It’s not even words, it “thinks” in “word parts” called tokens.
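A minimal sketch of that idea: a greedy longest-match subword tokenizer over a made-up vocabulary (real models learn their vocabularies with algorithms like BPE; this vocab and splitter are invented for illustration):

```python
def tokenize(text, vocab):
    """Greedily split text into the longest subword tokens available."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            # Fall back to a single character if nothing longer matches.
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

vocab = {"in", "her", "it", "ed", "heat"}
# A whole word gets carved into "word parts", not kept as one unit.
print(tokenize("inherited", vocab))  # -> ['in', 'her', 'it', 'ed']
```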
It has nothing to do with the meaning. If your training set consists of one subset of strings made of A’s and B’s and another subset made of C’s and D’s (i.e. [AB]+ and [CD]+ in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences is, nor do I need to know, to see that it’s statistically unlikely. In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written this is low.
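A toy sketch of that claim, scoring sequences with a smoothed bigram model trained on a made-up [AB]+ / [CD]+ corpus (everything here is invented for illustration):

```python
import random
from collections import Counter

random.seed(0)

# Toy corpus: one subset of strings over {A, B}, another over {C, D}.
corpus = ["".join(random.choice("AB") for _ in range(10)) for _ in range(500)]
corpus += ["".join(random.choice("CD") for _ in range(10)) for _ in range(500)]

# Count character-to-character transitions.
alphabet = "ABCD"
bigrams = Counter()
unigrams = Counter()
for s in corpus:
    for a, b in zip(s, s[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

def prob(seq):
    """Add-one-smoothed bigram probability of a sequence."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + len(alphabet))
    return p

# The "BD" and "DA" transitions never occur in training, so the mixed
# string scores far lower than a pure A/B string of the same length.
print(prob("ABBABBBDA") < prob("ABBABBBBA"))  # -> True
```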
Honestly this isn’t really all that accurate. Like, a common example when introducing the Word2Vec mapping is that if you take the vector for “king,” subtract the vector for “man,” and add the vector for “woman,” the closest vector to the result is “queen.” So there are elements of “meaning” being captured there. The Deep Learning networks can capture a lot more abstraction than that, and the Attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.
You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
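A minimal sketch of the king/queen analogy, with made-up 3-dimensional vectors (real Word2Vec embeddings are learned and have hundreds of dimensions, and real implementations usually exclude the query words from the candidates; the numbers here are invented for illustration):

```python
import numpy as np

# Toy "word vectors"; axes loosely encode royalty, maleness, and a filler.
vecs = {
    "king":  np.array([0.9, 0.9, 0.1]),
    "queen": np.array([0.9, 0.1, 0.1]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# king - man + woman lands closest to "queen" in this toy space.
target = vecs["king"] - vecs["man"] + vecs["woman"]
nearest = max(vecs, key=lambda w: cosine(vecs[w], target))
print(nearest)  # -> "queen"
```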
Unless they grabbed discussion forums that happened to include examples from multiple people. It’s pretty common that when fertility comes up, problems in that area get brought up too.
People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it, through much slower QC by real people (something Google hates to do).
And the text even ends with a mention of her being in early menopause…
Deepseek also gets this wrong.
So she is in heat …
Wouldn’t removing your ovaries and fallopian tubes make you not “fertile” by definition?
Yes, it contradicts itself within the next couple of sentences.
As per form for these “AIs”.
Hey, be fair, the “I” in “LLM” stands for “intelligent”. Please continue consuming the slop.
I never heard of the movie and was enjoying the content you created that I thought was supposed to be funny.
Leaving aside the fact that this looks like AI slop/trash bait: who the fudge is so clueless as to think Ashley Judd (assuming that’s who they’re confusing her with) looked anything like Angelina Jolie back then?
First, it’s the internet, you can cuss. Either structure the sentence not to include it at all or just cuss for fuck’s sake. Second, not everyone knows every actor/actress or is familiar, especially one that’s definitely not in the limelight anymore like Ashley Judd. Hell even when she was popular she wasn’t in a lot.
How do you know that OP even saw Heat? Maybe they were just curious to see if she was in it.
I am watching the movie Heat and I wanted to check if the actress…
People Google questions like that? I would have looked up “Heat” on either Wikipedia or IMDb and checked the cast list. Or gone to Jolie’s Wikipedia or IMDb pages to see if Heat is listed.
doesn’t matter, this is “AI” and it should know the difference from context. not to mention you can have gemini as an assistant, which is supposed to respond to natural language input. and it does this.
best thing about it is that it doesn’t remember previous questions most of the time so after listening to your “assistant” being patronizing about the term “in heat” not applying to humans you can try to explain saying “dude I meant the movie heat”, it will go “oh you mean the 1995 movie? of course… what do you want to know about it?”