Dear Americans. I use Gemini as a reading assistant for non-fiction books. Currently I am reading The Origins of Totalitarianism by Hannah Arendt, and I was asking about parallels in modern US society. I noticed something odd. While questions about historical facts are answered definitively and accurately, questions relating to January 6th are answered with some sort of "oh well, people don't agree" garbage.
After a long dialogue about it and asking what the scholarly consensus was, I finally got the correct answer.
I just wanted to raise this because it looks sus to me in a big way, but maybe I am overreacting? It felt like gaslighting. Also, practically nobody in the free world is under any illusion about what January 6 was.



Knowing what I know about LLMs and Google, using the Google LLM as a reading assistant seems like possibly the worst idea.
Why is that? The book was written in 1951. It’s a real struggle to find the translations, the relevant information and the context by yourself.
LLMs don’t have any historical knowledge; they produce reasonably coherent blocks of text. There’s a reason people call them slop machines: their primary purpose is to produce a lot of text, not any kind of correct information. The fact that they’re sometimes correct is a statistical accident, which is also why they’re incorrect up to 60% of the time. It doesn’t matter if a book was written in the 1600s or the 1950s or 2026, because the LLM is not going to have any cogent information about it that isn’t a statistical accident from smashing different sources together indiscriminately.
https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/
Google is evil. From the propagation of surveillance capitalism, to selling tech space to the highest bidder and deteriorating the global knowledge base, to suppressing information that US oligarchs find distasteful, they can’t reasonably be trusted to provide accurate information.
https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait/
So you’re using autocorrect made by dishonest Larry to help you read about the rise and anatomy of fascism? You’re likely poisoning your own well and not even realizing it.
Dishonest Larry? That sounds rather Trumpian and it doesn’t sit well with me. What did Larry Page do to deserve a label from you?
The articles you shared are both interesting and I appreciate the share. However, I’ve found the tool useful. More so than wading through Wikipedia pages, which I do love to do, just not while I am reading a book.
These tools certainly aren’t correct coincidentally. Ask Gemini to define 100 words and get back to me on the success rate. Is it coincidence? No. It’s statistics. ANNs are based on pattern recognition. They have significant deficiencies and downsides, but the unfortunate reality is that their margin of usefulness means they aren’t going away.
And I’d just like to bring something to your attention: poisoning wells is a well-worn antisemitic conspiracy theory, and I can’t help but find it a bit self-defeating that you are claiming that a Jew (Larry Page) is poisoning the well of my reading of Hannah Arendt.
If anything, I find your statements much more typical of people with an unhealthy media diet. And I am not accusing you; it’s just my observation from afar.
“I’ve found the tool useful.”
Cool, have fun breaking your brain with technology you don’t understand, while ignoring every new study that comes out saying it’s bad for you, and supporting billionaires whose stated goal is to sell knowledge back to you as a resource.