• mimavox@piefed.social · 2 days ago

    One would think they’d be extra careful not to piss the users off at this point… but no.

  • FaceDeer@fedia.io · 2 days ago

    There’s a master “kill switch” for all AI features in Firefox now. I suggest everyone who’s concerned about this kind of thing just go and turn it off, and then we need never bother each other over this again.
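
    For anyone hunting for it: the toggle lives in Firefox's AI settings, and the underlying prefs can also be set via about:config or a user.js file. A hedged sketch: the pref names below are current as of recent releases and may change, and the link-preview pref in particular is an assumption.

```javascript
// user.js -- sketch for disabling Firefox's AI features via prefs.
// Pref names as of recent Firefox releases; verify them in about:config,
// since they may change between versions.

// Master switch for Firefox's on-device ML features.
user_pref("browser.ml.enable", false);

// AI chatbot sidebar.
user_pref("browser.ml.chat.enabled", false);

// AI link previews (assumed pref name; check before relying on it).
user_pref("browser.ml.linkPreview.enabled", false);
```

    Flipping the same names in about:config works too; the advantage of a user.js is that it reapplies the values at every startup, which guards against a settings reset.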

    • XLE@piefed.social · 2 days ago

      “When it comes to privacy, defaults matter.”

      - Mozilla

      Why not remove the AI and offer them as a separate extension? That way you’re happy, and everybody else doesn’t have crap shoved down their throats.

    • chickenf622@sh.itjust.works · 2 days ago

      Or pick a Firefox fork that doesn’t have the AI bullshit. LibreWolf is great for people who take security very seriously. I hear Waterfox is a much closer equivalent to Firefox without AI, and also has a focus on privacy. I’ve also been using Ironfox on my Android with basically no issues.

      With Mozilla’s current track record I don’t trust them to not fuck with the AI “killswitch”.

      • AnchoriteMagus@lemmy.world · 2 days ago

        The only thing stopping me from switching is the unreliability of updates to uBlock on forked versions of Firefox.

        • Dultas@lemmy.world · 2 days ago

          I’ve been using Waterfox for months and not had a single issue with ublock origin or any other extension.

          • AnchoriteMagus@lemmy.world · 2 days ago

            Good to know! The last time I looked into it was early last year; glad to hear it’s working well for you.

    • LostWanderer@fedia.io · 2 days ago

      It’s still opt-out, not opt-in, because on first install that LLM garbage is enabled by default. The kill switch should’ve been for people who chose to try the LLM garbage and found it lacking, giving them an easy way to disable it all.

      I won’t stop complaining until Firefox makes their LLM nonsense opt-in, letting users choose at first boot whether they want that shit or not. That would be the most ethical and user-respecting way to handle it.

    • KiwiTB@lemmy.world · 2 days ago

      Until a browser update resets those options, as has already happened to some people.

  • CosmoNova@lemmy.world · 2 days ago

    Much like Adobe’s Acrobat, which I also have to use for work. At least from what I can tell when it suddenly summarizes a PDF, there’s no way in hell that happens locally. But the fact that it seemingly auto-processes potentially sensitive customer data didn’t even raise eyebrows when I brought it up.

    • mrgoosmoos@lemmy.ca · 1 day ago

      I use Firefox as a PDF reader at work. It’s better than Adobe Reader or Chrome, and it’s good enough, so I haven’t bothered finding something else.

      I deal with secure information sometimes, in PDF form. I haven’t even considered that this information might not remain local.

    • Joe@discuss.tchncs.de · 2 days ago

      If your company has an enterprise/privacy agreement with Adobe, it might be considered addressed, similar to the millions of companies using Microsoft 365 and Sharepoint.

      If, OTOH, it’s a “free” feature of Adobe, it could be eating your company’s data without constraints.

      If the latter, let us know your company’s name so that we can avoid it.

    • Analog@lemmy.ml · 1 day ago

      No way in hell? My understanding is that an NPU could perform that type of processing locally. I welcome info & correction!

      (I know other types of local AI processors could too, but there’s little chance Acrobat would be geared to look for them, even GPUs, unlike NPUs.)

      Now if we switch to talking about policy instead of capability, I don’t think Adobe would miss a chance to be evil. So yeah they’re probably stealing all the data they possibly can.

        • Analog@lemmy.ml · 1 day ago

          True! More all the time, unfortunately. (Unfortunately because we’re paying for tech we don’t want.)

          Also doesn’t negate my argument. He said no way in hell, yet… not only is there a way but it’s already out there.

          • WhyJiffie@sh.itjust.works · 19 hours ago (edited)

            Oh, I mostly agree, but I forgot to include the quote. I wanted to say that all that AI processing probably does not happen locally.

  • Dultas@lemmy.world · 2 days ago

    Guess I need to self-host Sync now, as I don’t trust Mozilla with any of my data at this point.
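
    For what it’s worth, Mozilla’s Sync storage server (syncstorage-rs) can be self-hosted, and Firefox pointed at it with a single pref. A sketch, assuming a hypothetical server address; the URL path follows the usual tokenserver convention and may differ per setup:

```javascript
// user.js -- point Firefox Sync at a self-hosted syncstorage-rs instance.
// "sync.example.com" is a placeholder for wherever you host it; adjust
// the path to match your server's tokenserver endpoint.
user_pref("identity.sync.tokenserver.uri",
          "https://sync.example.com/token/1.0/sync/1.5");
```

    Sync payloads are end-to-end encrypted either way; self-hosting additionally keeps even the ciphertext off Mozilla’s servers.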

  • versionc@lemmy.world · 22 hours ago

    I used to enjoy AI a lot, and I still think the technology is really cool, but lately I’m beginning to despise it. It spreads and nestles itself into every corner of our lives, and it rots whatever it touches, be it the humans that rely on it or the projects in which it’s used. I see so many open source projects tainted with it that it’s almost impossible to avoid. It’s sad. The generations that will grow up with AI will be fucked.

    • iglou@programming.dev · 16 hours ago (edited)

      The generations that will grow up with AI will be fucked.

      Eh. That’s something every single generation before us in at least the past 150 years has been saying about other new society-changing stuff. They’ll be fine, society just changes.

      Generations that will grow up with social media will be fucked.

      Generations that will grow up with internet will be fucked.

      Generations that will grow up with video games will be fucked.

      Generations that will grow up with computers will be fucked.

      Generations that will grow up with morning-after pills will be fucked.

      • partofthevoice@lemmy.zip · 13 hours ago

        What about Cambridge Analytica, the mental health impacts, the addiction, … we’re still learning the social impact of social media — especially capitalistic social media. To pretend we aren’t is just plain ignorant, no? You can’t say people are fine when you don’t even know how they’ve been affected.

        • iglou@programming.dev · 9 hours ago

          I’m not saying everything is lovely and we are at the peak of civilization. I’m saying that every form of progress comes with challenges and downsides, and this saying of “Next generation will be fucked” is a cognitive bias every generation has had for a pretty long time.

          They also have positive sides.

          I don’t know if I expressed myself that poorly (I was pretty tired after all), but I did not mean at all that there are no downsides to any of these. I meant that despite these sayings, every generation so far has ended up as fine as the previous ones.

      • partofthevoice@lemmy.zip · 14 hours ago

        Yeah, sorry but I have to disagree with you pretty hard there. Generations that grew up with social media, internet, video games, … they are fucked. We’ve been watching the fuckening for a long time now. Saying that they haven’t been fucked is reminiscent of my grandparents saying ADHD and Anxiety aren’t real.

        • iglou@programming.dev · 9 hours ago

          Are they really more fucked than generations who didn’t have access to social media, internet, and video games? It seems to me that you are biased by the negative effects these had, and ignoring the positive ones.

          Saying that they haven’t been fucked is reminiscent of my grandparents saying ADHD and Anxiety aren’t real.

          How is that in any way comparable? I’m not saying the downsides of social media, internet, video games are not real, I’m saying “People growing up with X will be fucked” is a saying that every generation has been saying, ignoring the positive impacts. This is a cognitive bias in the likes of the rosy retrospection.

          • partofthevoice@lemmy.zip · 9 hours ago (edited)

            I don’t think so. First off, all the examples mentioned were computers, social media, video games, … these are all still pretty darn novel in the grand scheme of things. We still don’t know what happens to a society that doesn’t need to contend with boredom because it has algorithmically optimized content feeds jacking up its prefrontal cortex at all times of day and night. We still don’t understand the full extent to which walled-garden social media ecosystems can influence politics and cultural bias, undermine democratic processes, or empower individuals. We still aren’t taking privacy seriously as a society, having relied for hundreds of years on the fact that complete and total surveillance was infeasible for a government. It’s far too early to say whether anyone is or isn’t fucked by any of this, which is why I draw comparisons to grandmama saying “back in my day, the boy was just excited. He didn’t have ‘ADHD.’ He was fine.” It’s about ignorance when new information comes to light.

            We have a lot of reason to believe there are serious consequences to technology that unfortunately aren’t obvious from the get-go. More unfortunately, we have a culture of not caring. Innovation first, policy second, right? Except that only works while policy can still catch up. We’ve been slow-walking into a situation where, yeah, one of these generations is definitely getting fucked. Probably, though, it’s each generation getting a little more fucked as we continue having them.

            I’m not ignoring the benefits. The benefits are part of how we justify not impeding the innovation process — it’s literally part of the problem. The root of the issue is that we ignore the consequences, pretend that’s just the way things are, and think in weird metaphors like “the market will self correct” and “the market is never wrong.” If the future generations aren’t fucked, it’ll be because they solved the problems that were created here. It won’t be “they weren’t fucked because they were never fucked.” No, they were fucked. Hopefully they figure it out.

      • Reygle@lemmy.world · 13 hours ago (edited)

        Funny you say that. In ways not everyone (obviously) sees, each generation was right about that.

        • iglou@programming.dev · 9 hours ago

          No. Each generation was fucked in its own way, regardless of the double-edged progress it grew up with.

  • TheSeveralJourneysOfReemus@lemmy.world · 1 day ago

    Most pointed questions you type will start a Google search. This loads a regular search results page, and sees Firefox’s AI chatbot shift to a sidebar on the right. The AI reads the top results (including any AI overview) and produces a response based on them.

    AI reads AI reading AI reading AI reading AI reading…

  • senna@lemmy.world · 2 days ago

    Just let them shoot themselves in the foot. Get familiar with the forks. Someone at Mozilla has no idea what they’re doing. You’d think they’d have learned their lesson from the last AI garbage, but I guess not.

  • Fizz@lemmy.nz · 1 day ago

    The problem with Firefox doing AI is that they’re always one foot out the door. The features they add are always undercooked compared to the rest of the market. This looks really shit and useless in its current state, like a worse version of Perplexity’s browser.

    • DudeImMacGyver@kbin.earth · 1 day ago

      All AI is undercooked: errors are baked into LLMs, and there is no viable solution to prevent the mistakes and outright bullshit they produce other than to assume it fucked up and pay an actual expert to manually check literally everything it does.

      • Analog@lemmy.ml · 1 day ago

        Errors are baked in but I don’t agree with the “no viable solution” part. One research team actually was able to identify the “neurons” responsible for hallucinations and adjust the contribution to negligible amounts.

        https://www.youtube.com/watch?v=1ONwQzauqkc (Linking a youtuber instead of the actual study because he summarizes it pretty well and the research itself is not geared for laypersons.)

        If this were implemented industry-wide, would it completely solve the problem? I don’t know, but I do know it would be a massive improvement.

        • DudeImMacGyver@kbin.earth · 1 day ago

          I remain deeply skeptical.

          Either way, it uses a ridiculous amount of power and comes at great environmental cost.

          • Analog@lemmy.ml · 1 day ago

            Fuck me, you and people in general jump to conclusions so easily. My post was meant to educate, to shore up knowledge. To help out.

            In no way was I saying “AI is good and the tech bros are right about it.” 🤦‍♂️

            • DudeImMacGyver@kbin.earth · 1 day ago

              I never took what you wrote to mean that, but I am deeply skeptical that they can successfully eliminate hallucinations to the point that “AI” can be trusted to give correct results.

              • Analog@lemmy.ml · 1 day ago

                Why bring up power and environmental cost? What did that have to do with anything?

                Also, if you re-read what I wrote, I used careful language to indicate I didn’t think this method would completely eliminate errors, never mind bridge the gap to “trusted.” (🤮 I will never trust AI.)

                (Yeah I know the YouTuber used a sensational title; in their defense they kind of have to in order to get clicks. imho blame the algorithm and people’s reinforcement of that algorithm.)

                • DudeImMacGyver@kbin.earth · 1 day ago

                  Why wouldn’t I? It’s pretty fucking important! Why would you take exception to that? I also think it’s weird you assumed what conclusion I was jumping to.

        • Jiral@lemmy.org · 1 day ago

          Not quoting the primary source wouldn’t per chance have anything to do with it being a non-peer-reviewed preprint archive hosted by Cornell University, would it? I wonder, is that normal in the field of AI research?

      • Fizz@lemmy.nz · 16 hours ago

        Errors are baked into everything. The tasks LLMs can do don’t require perfect output and don’t require an expert to manually review everything. It doesn’t need to be perfect to be useful.