EDIT: I misunderstood exactly what role “generative” AI is having in the Linux space, I’m sorry for misleading people. I got quite upset upon learning it was even within arms reach and failed to get a good enough grasp on it before writing this post. I’m still pissed, so I’m leaving the post up, but hopefully it’s not factually inaccurate anymore. Let me know if I’m still missing something, please.

Sorry for my frequent posts here recently, I don’t mean to spam at all, but I’m so mad about this. I heard a few rumors about AI being used for something involving the Linux kernel, but never enough about it to think it was actively happening, for some reason.
I switched to Linux last year once I realized Windows 11 would have revolting LLM shit built in. I had finally had enough of Microsoft and their exploitation and spyware.
I went cold turkey: started using open source programs for everything, got involved in the Fediverse (as you can see), and altered all my creative workflows to use only FLOSS software, down to the individual plugins in my Digital Audio Workstation. It’s been a big effort, but I’m happier with my life technologically than ever before. Now it appears that no matter what distro we choose, the developers of the kernel itself are attempting to incorporate LLM reviews into their workflows, for some damn reason.

I can’t say this in a way that doesn’t sound overly dramatic so I guess I’ll just lean into it: how fucking dare they even consider something so verifiably harmful to society and the planet as a whole? It’s total hypocrisy and goes against the reasons Linux exists. I genuinely feel betrayed. Why haven’t I seen more outrage over this? istfg it’s like nothing is safe anymore. Even most FLOSS projects are too cowardly to say they don’t support “gen” AI explicitly, and I often have to ask them myself. Fuck.

  • atrielienz@lemmy.world
    19 hours ago

    Could you link to an article or a write up on the stuff you’re referencing so I can look into it more, please?

      • cecilkorik@lemmy.ca
        19 hours ago

It is for most people. You are welcome to create your own issues in life if you wish. But Linus Torvalds is infamous for his meticulous, detailed, thorough, and often expletive-laden code reviews, and if he is willing to review AI-generated or AI-assisted code, that’s entirely up to him; there is no indication he is willing to lower his coding standards one iota. He trusts his maintainers not to bring him any shitty AI code, but he’s giving them the freedom to make that choice themselves. If they abuse it, he will punish them. There’s no doubt about that if you know anything about how the Linux kernel development process works.

      • breadsmasher@lemmy.world
        19 hours ago

        Did you read Linus’ response on the mailing list? From that linked article

        https://lwn.net/ml/all/CAHk-=wj3fQVEcAqy82JnrX2KKi4NjnEGGSH2Pf_ztnLCcveWkQ@mail.gmail.com/

        Given his sole control over what gets merged, that’s all that matters.

        also note he’s discussing ai as a tool for reviewing patches. nothing about ai actually writing code.

        make of that what you will

        I have not yet seen any evidence of actual ai slop code being merged into the kernel

        • cloudskater@piefed.blahaj.zoneOP
          19 hours ago

          Linux, of all projects, should be opposed to this kind of thing as a whole. Sure, we could argue it’s not as bad, but I’m not comforted by that. The fact that those in charge don’t see or care about the obvious problems is shocking.

          • breadsmasher@lemmy.world
            19 hours ago

            Im not trying to say one way or the other, or take anyone’s side here.

            just putting into context “ai generated code in the linux kernel” isnt whats currently happening.

            unless you have evidence otherwise.

            edit - KDEs response maybe clarifies “ai” to me in this discussion.

            We agree and we agree with many of your objections. AI has become a synonym of tech irresponsibility, greed and exploitation, like crypto was before it. The difference is AI existed before the current craze and pursued legitimate goals. That is still happening in some areas of AI research and ignoring all uses of AI would be throwing the baby out with the bath water.

            LLM providers like OpenAI are scum. But the general technology around “ai” isnt as bad as that

            • cloudskater@piefed.blahaj.zoneOP
              19 hours ago

              Right. Sorry, I’ll try to make the post clearer bc I’m not trying to mislead anyone, I’m just so upset that this is even being entertained by Linux devs, it’s boiling my blood.

              • breadsmasher@lemmy.world
                18 hours ago

                Just edited my reply for context. Hopefully it explains at least my personal view.

                LLMs provided by billionaire techbros? Burn that shit to the ground.

                The scientific idea and application of ai when its helpful and relevant I dont see a problem.

                The difference being no vibe coded ai generated bullshit ends up in the kernel. The use of this technology elsewhere can be completely fine, if its treated correctly.

                But Im totally on your side with “openai llm vibe coded slop should never ever land in the kernel”. And I trust linus on that. given his history with regular 100% human maintainers, he wouldnt let that garbage slide.

                “ai” as a term has become synonymous with openai, anthropic, gemini. theyre just LLM products sold by companies. They should never be near real critical production systems. But the wider scientific/technology side could be applied ethically - without using those LLM products

                • cloudskater@piefed.blahaj.zoneOP
                  18 hours ago

                  I like your thought process, but I struggle to agree with you, because I don’t believe there is a way for this kind of technology to be ethical without being rebooted (haha) from the ground up. Even so, about the only things I think it should be used for are double-checking what humans have already done, or perhaps transcribing human speech for quick and dirty subtitles. However, all of that assumes the data it is working off of was not stolen, and that it is not able to “generate” anything “new,” because that’s just theft and exploitation.