One would think they’d be extra careful not to piss the users off at this point… but no.
There’s a master “kill switch” for all AI features in Firefox now. I suggest everyone who’s concerned about this kind of thing just go and turn it off, and then we need never bother each other over this again.
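For anyone worried about that switch flipping back after an update, the same settings can be pinned in a `user.js` file in your Firefox profile folder, which Firefox re-applies on every startup. A minimal sketch — the pref names below are assumptions based on current Firefox builds and may change between versions:

```js
// user.js — placed in your Firefox profile folder.
// Prefs listed here are re-applied at every startup, so an update
// can't silently flip them back on.
// NOTE: pref names are assumptions and may differ by Firefox version.
user_pref("browser.ml.enable", false);        // assumed master switch for local ML features
user_pref("browser.ml.chat.enabled", false);  // assumed switch for the AI chatbot sidebar
```

Check about:config for the actual `browser.ml.*` prefs your version ships before relying on these names.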
“When it comes to privacy, defaults matter.”
- Mozilla
Why not remove the AI and offer them as a separate extension? That way you’re happy, and everybody else doesn’t have crap shoved down their throats.
Or pick a Firefox fork that doesn’t have the AI bullshit. LibreWolf is great for people who take security very seriously. I hear Waterfox is a much closer equivalent to Firefox without AI, and it also has a focus on privacy. I’ve also been using IronFox on my Android with basically no issues.
With Mozilla’s current track record I don’t trust them to not fuck with the AI “killswitch”.
The only thing stopping me from switching is the unreliability of updates to uBlock on forked versions of Firefox.
I’ve been using Waterfox for months and not had a single issue with ublock origin or any other extension.
Good to know! The last time I looked into it was early last year — glad it’s working well for you.
It’s still opt-out, not opt-in, because on first install that LLM garbage is enabled by default. The kill switch should’ve been for people who chose to try the LLM garbage, found it lacking, and needed an easy way to disable it all.
I won’t stop complaining until Firefox makes their LLM nonsense opt-in, letting a user choose at first boot if they want that shit or not. That would be the most ethical and user respecting way to handle their LLM shit.
LLM bullshit should have been an extension
Exactly, it should never be bundled into software without explicit user consent.
My master AI killswitch was just to switch to Waterfox.
Until those options get turned back on by a browser update, like has already happened to some people.
Much like Adobe’s Acrobat, which I also have to use for work. At least from what I can tell when it suddenly summarizes a PDF, there’s no way in hell that happens locally. But the fact that it seemingly processes potentially sensitive customer data automatically didn’t so much as raise eyebrows when I brought it up.
I use Firefox as a PDF reader at work. It’s better than Adobe Reader or Chrome, and it’s good enough, so I haven’t bothered finding something else.
I deal with secure information sometimes, in PDF form. I haven’t even considered that this information might not remain local.
I use KDE’s PDF reader Okular
Sumatra is the notepad++ of pdf readers (minus the state sponsored hackers https://notepad-plus-plus.org/news/hijacked-incident-info-update/)
I’ve been using it for ages whenever I need to open PDFs on Windows. Though nowadays browsers handle them too. And I avoid Windows.
It is genuinely excellent software.
Looks interesting! I can’t see it in their docs, can it display the font type and size of a selected text? I’ve been using PDF-XChange Viewer mostly for that feature. (Lol I just noticed it has been discontinued since 2018 😬)
If your company has an enterprise/privacy agreement with Adobe, it might be considered addressed, similar to the millions of companies using Microsoft 365 and Sharepoint.
If, OTOH, it’s a “free” feature of Adobe, it could be eating your company’s data without constraints.
If the latter, let us know your company’s name so that we can avoid it.
No way in hell? My understanding is that an NPU could perform that type of processing locally. I welcome info & correction!
(I know other types of local ai processors could too, but there’s little chance Acrobat would be geared to look for them - even GPUs - unlike NPUs.)
Now if we switch to talking about policy instead of capability, I don’t think Adobe would miss a chance to be evil. So yeah they’re probably stealing all the data they possibly can.
few computers have an embedded NPU
True! More all the time, unfortunately. (Unfortunately because we’re paying for tech we don’t want.)
Also doesn’t negate my argument. He said no way in hell, yet… not only is there a way but it’s already out there.
oh I mostly agree, but forgot to include the quote. I wanted to say, all that AI processing probably does not happen locally
Guess I need to self host sync now as I don’t trust Mozilla with any of my data at this point.
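Self-hosting sync is doable: Mozilla’s sync storage server is open source (mozilla-services/syncstorage-rs), and Firefox can be pointed at your own instance with a single pref. A sketch, assuming a self-hosted server at `sync.example.com` (hypothetical host; the exact URL path depends on how your tokenserver is set up):

```js
// about:config or user.js: point Firefox Sync at a self-hosted server.
// The pref name is real; the URL is a placeholder for your own deployment.
user_pref("identity.sync.tokenserver.uri", "https://sync.example.com/1.0/sync/1.5");
```

With that set, sync data goes to your own box instead of Mozilla’s servers (account authentication may still involve Mozilla unless you also self-host that part).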
AI in browsers is stupid.
Companies at every layer are competing, from the OS, to the browser, to the website so you can experience triple the spam.
Soon your OS will get into a never ending loop with the browser and the website.
I used to enjoy AI a lot, and I still think the technology is really cool, but lately I’m beginning to despise it. It spreads and nestles itself into every corner of our life, and it rots whatever it touches, be it the humans that rely on it or the projects in which it’s used. I see so many open source projects that are tainted with it, it’s almost impossible to avoid it. It’s sad. The generations that will grow up with AI will be fucked.
The generations that will grow up with AI will be fucked.
Eh. That’s something every single generation before us in at least the past 150 years has been saying about other new society-changing stuff. They’ll be fine, society just changes.
Generations that will grow up with social media will be fucked.
Generations that will grow up with internet will be fucked.
Generations that will grow up with video games will be fucked.
Generations that will grow up with computers will be fucked.
Generations that will grow up with morning-after pills will be fucked.
…
…
What about Cambridge Analytica, the mental health impacts, the addiction, … we’re still learning the social impact of social media — especially capitalistic social media. To pretend we aren’t is just plain ignorant, no? You can’t say people are fine when you don’t even know how they’ve been affected.
I’m not saying everything is lovely and we are at the peak of civilization. I’m saying that every form of progress comes with challenges and downsides, and this saying of “Next generation will be fucked” is a cognitive bias every generation has had for a pretty long time.
They also have positive sides.
I don’t know if I expressed myself that poorly (I was pretty tired after all), but I did not mean at all that there are no downsides to any of these. I meant that despite these sayings, every generation so far has ended up as fine as the previous ones.
*Looks around at how fucked the world is.* Maybe they were right?
Yeah, sorry but I have to disagree with you pretty hard there. Generations that grew up with social media, internet, video games, … they are fucked. We’ve been watching the fuckening for a long time now. Saying that they haven’t been fucked is reminiscent of my grandparents saying ADHD and Anxiety aren’t real.
Are they really more fucked than generations who didn’t have access to social media, internet, and video games? It seems to me that you are biased by the negative effects these had, and ignoring the positive ones.
Saying that they haven’t been fucked is reminiscent of my grandparents saying ADHD and Anxiety aren’t real.
How is that in any way comparable? I’m not saying the downsides of social media, internet, video games are not real, I’m saying “People growing up with X will be fucked” is a saying that every generation has been saying, ignoring the positive impacts. This is a cognitive bias in the likes of the rosy retrospection.
I don’t think so. First off, all the examples mentioned were computers, social media, video games, … these are all still pretty darn novel in the grand scheme of things. We still don’t know what happens to a society that doesn’t need to contend with boredom because they have algorithmically optimized content feeds jacking up their prefrontal cortex at all times of day and night. We still don’t understand the full extent to which walled-garden social media ecosystems can influence politics and cultural bias, undermine democratic processes, or empower individuals. We still aren’t taking privacy seriously as a society, having relied for hundreds of years on the fact that complete and total surveillance was infeasible for a government. It’s far too early to say whether anyone is or isn’t fucked by any of this, which is why I draw comparisons to grandmama saying “back in my day, the boy was just excited. He didn’t have ‘ADHD.’ He was fine.” It’s about ignorance when new information comes to light.
We have a lot of reason to believe there are serious consequences to technology that unfortunately aren’t obvious from the get-go. More unfortunately, we have a culture of not caring. Innovation first, policy second, right? Except that only works while policy can still catch up. We’ve been slow-walking into a situation where, yeah, one of these generations is definitely getting fucked. Probably, though, it’s each generation getting a little more fucked as we continue having them.
I’m not ignoring the benefits. The benefits are part of how we justify not impeding the innovation process — it’s literally part of the problem. The root of the issue is that we ignore the consequences, pretend that’s just the way things are, and think in weird metaphors like “the market will self correct” and “the market is never wrong.” If the future generations aren’t fucked, it’ll be because they solved the problems that were created here. It won’t be “they weren’t fucked because they were never fucked.” No, they were fucked. Hopefully they figure it out.
Funny you say that. In ways not everyone (obviously) sees, each generation was right about that.
No. Each generation was fucked in their own way, regardless of the two edges of the progress that they grew up with.
Most pointed questions you type will start a Google search. This loads a regular search results page, and Firefox’s AI chatbot shifts to a sidebar on the right. The AI reads the top results (including any AI overview) and produces a response based on them.
AI reads AI reading AI reading AI reading AI reading…
Exactly what I thought when I got to that bit of the article.
bUt ThErE’s A kiLL SwItCh!
Yeah, it’s called uninstalling Firefox.
And instead install a chromium based browser, right?
No thanks.
You can opt out in the settings
Glad I turned that shit off right away.
Just let them shoot themselves in the foot. Get familiar with the forks. Someone at Mozilla has no idea what they’re doing. You would think they would have learned their lesson after the last AI garbage, but I guess not.
Removed by mod
deleted by creator
The problem with Firefox doing AI is they’re always one foot out. The features they add are always undercooked compared to the rest of the market. This looks really shit and useless in its current state, like a worse version of the Perplexity browser.
All AI is undercooked: errors are baked into LLMs, and there is no viable solution to prevent the mistakes and outright bullshit they produce other than to assume it fucked up and pay an actual expert to manually check literally everything it does.
Errors are baked in but I don’t agree with the “no viable solution” part. One research team actually was able to identify the “neurons” responsible for hallucinations and adjust the contribution to negligible amounts.
https://www.youtube.com/watch?v=1ONwQzauqkc (Linking a youtuber instead of the actual study because he summarizes it pretty well and the research itself is not geared for laypersons.)
If this was implemented industry wide would it completely solve the problem? I don’t know, but I do know it would be a massive improvement.
I remain deeply skeptical.
Either way, it uses a ridiculous amount of power and comes at great environmental cost.
Fuck me, you and people in general jump to conclusions so easily. My post was meant to educate, to shore up knowledge. To help out.
In no way was I saying “AI is good and the tech bros are right about it.” 🤦♂️
I never took what you wrote to mean that, but I am deeply skeptical that they can successfully eliminate hallucinations to the point that “AI” can be trusted to give correct results.
Why bring up power and environmental cost? What did that have to do with anything?
Also if you’ll re-read what I wrote I used careful language to indicate I didn’t think this method would completely eliminate errors. Nevermind bridge the gap to “trusted.” (🤮 I will never trust AI.)
(Yeah I know the YouTuber used a sensational title; in their defense they kind of have to in order to get clicks. imho blame the algorithm and people’s reinforcement of that algorithm.)
Why wouldn’t I? It’s pretty fucking important! Why would you take exception to that? I also think it’s weird you assumed what conclusion I was jumping to.
Not quoting the primary source wouldn’t happen to have anything to do with the source being a non-peer-reviewed preprint archive run by Cornell University, would it? I wonder, is that normal in the field of AI research?
Here’s the source https://arxiv.org/abs/2512.01797
What does Cornell have to do with it? Genuinely curious as that seems completely out of the blue to me. Source was clearly Chinese.
Errors are baked into everything; the tasks LLMs can do don’t require perfect output and don’t require an expert to manually review everything. It doesn’t need to be perfect to be useful.