It’s not “Gmail can read your emails” … Gmail has been reading your emails for years.
Well, now it’s training LLMs on them.
I think they do that anyway… well, I’m pretty sure they do, but the warning is still appreciated 👍
Yup. Kinda why I’ve used my gmail account as image storage, and nothing else, for the 19 years since I made it.
That was stated from the get-go: Google reserved the right to scan your mail for ad keywords so it could advertise products you might have written about in a correspondence.
And unfortunately most people don’t realize or care because they ‘have nothing to hide’. Whether I have something to hide or not, I want my privacy!
To me, it kinda depends on how it’s being used. If, for example, it’s training a contained AI-based system for categorizing email and catching phishing/fraud and spam, I’m not so worried.
The main issues for me are if:
- It’s sifting out other personal details that may be used to target me in various ways, for ads etc
- The data it collects ends up in an AI-based system where it could potentially be leaked. Think: “hey Gemini, tell me the last three credit card numbers with expiry dates you found in emails”
Stop using google anything! Simple as that. They are a parasite. Stop feeding them!
I really like Gmaps and YouTube though. Those are really the main things I struggle to get rid of. Maps not so much for navigation but for exploring local businesses, and YouTube is a monopoly.
You can use youtube without being logged in (and there are alternate frontends too, but they all have issues whenever google decides to break stuff).
If you want to follow people, you can actually do it without an account through RSS (rough sketch below).
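For anyone curious, here’s a minimal sketch of how that works, assuming YouTube’s public per-channel Atom feed URL (the channel ID below is a placeholder); you can also just paste that feed URL into any RSS reader instead of using code:

```python
# Minimal sketch: follow a YouTube channel without an account by polling its
# public Atom feed. Needs the third-party "feedparser" package
# (pip install feedparser).
import feedparser

CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder; copy the real ID from the channel page
FEED_URL = f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each feed entry carries the video title, link, and publish timestamp
    print(entry.published, "-", entry.title, "-", entry.link)
```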
This week I noticed Google and gmaps running slowly on a non-Chrome browser. Unusable level of slow.
If they are training on my emails, they are going to be dumb as fuck.
Honestly I don’t get how AI isn’t rolling backwards already. Image sites are buried in AI slop. Social media posts are buried in AI slop, and now e-mails that were probably written by AIs. How is AI even remotely improving right now, when obviously 90% of any new training data it’s getting was generated by the last generation of AI?
Companies that build large LLMs have already said that this is becoming a problem. They’re running out of high-quality human-written content to train their models.
Google paid Reddit for access to their data to train their models, which is probably why their AI can be a bit dumb at times (and of course, the users who actually contributed the content don’t get any of that money).
That’s true, but I think it’s in the phrasing: they describe it as a shortage of human-made content. The bigger issue is the lack of ability to identify human-made content. I.e. you give it Reddit and our e-mails, and there’s plenty of human-made content on there… but nobody knows what percentage of it is actually bots or AIs.
From what I’ve been hearing, AI has indeed been getting worse, not better. I think I read this in relation to ChatGPT 5 compared to previous models.
My goodness, the poor AI will see all the trash I order on Aliexpress and read all the endless carrier updates as the box gets scanned in and out of every warehouse and truck?
It’s gonna start thinking I’m some kind of shut-in hoarder!
A variant of “if you have nothing to hide, you have nothing to fear”.
Plot twist: they can, and will, do it even if you opt out. The only thing that changes is that you won’t get anything out of it. Not that it would have been a significant return to begin with.
Yeah, these corporations do not care at all, and the lawsuits they get are almost always a joke.
True, the NYT does seem to have a significant impact on OpenAI though.
The NYT is basically a corporation with billionaire owners. They’ve allowed some pretty shitty people to write opinion articles. That said, they still do some pretty great journalism overall.
I know, but at least they’re making a case, which most news outlets probably would too if they had the means for a long legal fight. They might get some precedent out of this that the whole news industry will benefit from.
I don’t think AI training should ever be fair use. These companies are making billions off other people’s work and giving nothing back.
Yeah, I agree with you on the AI issue. It’s funny because they all try to pander like it’s supposed to benefit us, but on Wall Street and to investors, their tone is all about how it disrupts regular people in the name of profit. So ridiculous lol
What did people expect handing all their personal information to Google?
You don’t want to know what it’ll learn…
Okay, sounds like I’m probably gonna get crucified for not reading this… but if the headline is true, combined with the fact that LLMs have been known to expose training data, this seems to introduce a non-zero chance that my personal emails are going to get shown to strangers…
I’ve already found recent emails in my gmail account from right-leaning news sources I’ve had to opt out of. I was lax about my gmail management until last year, when I went on a major cleanup spree, so I know these new emails were added automatically somehow, and this article likely explains it.
This means they already did use it for training if it’s opt-out, and it’s quite a job to get it back out. This is why opt-out training should be illegal, and all models previously trained on an opt-out basis must be destroyed.
ChatGPT will remember this
I turned it off from the gmail android app, and as soon as I returned to the inbox there was a notification asking me to flip it back on.
Google’s push for using Gemini is so aggressive. Everything is littered with pop ups.
Google’s push for everything is aggressive. Their apps constantly push Chrome on me.
It irks me that all you ever hear about is “Ugh, Microsoft trying to force people to use Edge again”
And yet, when I did use Edge, I had to install an extension to remove the huge “We recommend using Chrome” banner on every Google service. 😬
The Google page the article links to pretty explicitly states that data will not be used for training. Isn’t this just the cross-google integration that lets calendar add events from mail?
Yea. It is deep search.
There’s a reason it’s enabled by default: so it automatically has permission to learn from ~20 years of emails before a handful of people opt out.
Assuming they even honor the opt-out flag at all. They have a history of conveniently ignoring those.
Don’t Be Evil.