You’re paying AI companies a monthly subscription fee to be fingerprinted like a parolee.
I got bored and ran uBlock across Claude, ChatGPT, and Gemini simultaneously.
Claude:
- Six parallel telemetry pipelines.
- A tracking GIF with 40 browser fingerprint data points baked into the URL, routed through a CDN proxy alias specifically to make it harder to block.
- Intercom running a persistent WebSocket whether you use it or not.
- Honeycomb distributed tracing on a chat UI because apparently your conversation needs the same observability stack as a payments microservice.
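Tracking pixels like that one work by stuffing the fingerprint into the image URL's query string, so the data exfiltrates even when script-based trackers are blocked. A minimal sketch of the mechanism (the domain and field names here are made up, not Claude's actual ones):

```python
from urllib.parse import urlencode

# Hypothetical fingerprint fields -- the real pixel packs ~40 of these.
fingerprint = {
    "sw": 2560, "sh": 1440,        # screen dimensions
    "tz": "America/New_York",      # timezone
    "lang": "en-US",
    "cores": 8,                    # hardware concurrency
    "ua": "Mozilla/5.0",           # user agent
}

# The data rides along as query params on a 1x1 GIF request; routing it
# through a CDN alias makes the hostname look like ordinary static assets.
pixel_url = "https://cdn-alias.example.com/t.gif?" + urlencode(fingerprint)
print(pixel_url)
```

Because it's "just an image," a plain network filter has to know the exact URL pattern to catch it, which is the whole point of the CDN alias.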
ChatGPT:
- Proxies telemetry through its own backend to hide the Datadog destination URL from blockers.
- uBlock had to deploy scriptlet injection — actual JS injected into the page to intercept fetch() at the API level — because a network rule wasn’t enough.
- Also ships your usage data to Google Analytics. OpenAI. To Google. You cannot make this up.
- Also runs a proof-of-work challenge before you’re allowed to type anything.
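For anyone unfamiliar: a proof-of-work challenge forces your browser to burn CPU finding a value whose hash meets some target before the server will talk to you. This is a generic hashcash-style sketch, not OpenAI's actual challenge (theirs is an obfuscated in-browser scheme), just to show the shape of it:

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # cheap for the server to verify, costly to find

nonce = solve_pow("abc123", difficulty=4)
```

The asymmetry is the point: the server verifies your answer with one hash while you had to try thousands, which throttles bots (and, incidentally, your laptop battery).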
Gemini:
- play.google.com/log getting hammered with your full session behavior, authenticated with three SAPISIDHASH token variants, piped directly into the Google identity supergraph that correlates everything you’ve ever done across every Google product since 2004.
- Also creates a Web App Activity record in your Google account timeline. Also has “ads” in one of the telemetry endpoint subdomains.
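SAPISIDHASH is a publicly reverse-engineered auth scheme: a SHA-1 over the timestamp, the SAPISID cookie, and the request origin, which is what binds the telemetry to your signed-in Google identity. A sketch of the commonly documented format (the cookie value is a placeholder; as I understand it, the variants differ mainly in which cookie feeds the hash):

```python
import hashlib
import time

def sapisidhash(sapisid: str, origin: str = "https://gemini.google.com") -> str:
    """Compute a SAPISIDHASH-style authorization value.

    Widely documented format: SHA-1 over "<timestamp> <SAPISID> <origin>",
    sent as "<timestamp>_<hexdigest>".
    """
    ts = int(time.time())
    digest = hashlib.sha1(f"{ts} {sapisid} {origin}".encode()).hexdigest()
    return f"{ts}_{digest}"

header = f"SAPISIDHASH {sapisidhash('placeholder-cookie-value')}"
```

Binding the hash to the origin stops other sites from replaying your cookie, but it also means every logged event arrives pre-stamped with your account identity.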
When uBlock blocks Gemini’s requests, the JS exceptions bubble up and Gemini dutifully tries to POST the error details back to Google. uBlock blocks that too. The error messages contain the internal codenames for every upsell popup that failed to load.
KETCHUP_DISCOVERY_CARD.
MUSTARD_DISCOVERY_CARD.
MAYO_DISCOVERY_CARD.
Google named their subscription upsell popups after condiments and I found out because their error handler snitched on them.
All three of these products cost money.
One of them is also running ad infrastructure.
Touch grass. Install @ublockorigin


You appear to be quite well versed in privacy and security regarding AIs. Which AI is 'safe' to use?
If you can’t run it locally, Duck AI and Lumo (Proton) are probably the safest bets.
Keep in mind that Duck AI still sends your prompts to OpenAI or other AI services through their APIs, but anonymously.
Otherwise you could set up an AI service on a trusted cloud provider and run API requests to it. Unfortunately, I don’t know which provider would be good for privacy.
Thank you
The safest would be to run it yourself. Without some pretty beefy hardware and some time to set things up, you won’t get close to the performance of the big-name hosted AIs on more complex tasks, but it might be enough for simpler stuff.
Grab LM Studio (or llama.cpp if you’re comfy with a CLI) and some models off of Hugging Face if you wanna give local AI a spin.
Thank you.
@finallymadeanaccount i am indeed very passionate about data privacy :)
this is less about which AI is “safe to use,” and more about the fact that these AI websites track us in the exact same way 99% of the internet does.
whether or not that is “safe” for you depends entirely on your personal identity. these third parties that collect and aggregate data on you can sell it to anyone - including government institutions. The US CBP (Customs and Border Protection) has notoriously used this method of data collection to track people’s movements
(shout-out to @josephcox and @404mediaco for the incredible reporting - i <3 you)
regardless of whether or not it is dangerous for someone, I personally don’t believe it is ethical to abuse people’s privacy like this.
“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”
-- Edward Snowden
Thank you.