A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way but it was AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • atrielienz@lemmy.world
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    4
    ·
    edit-2
    3 days ago

    I think the simple fact is that some of the people in this thread don’t understand is that the people they’re asking to vet the code don’t know how.

    They may mean that the people who can vet code should do so before making a fuss about the AI written portions of it, but I don’t know that most of the people in opposition to their comments understand that context.

    I haven’t coded anything since the 90’s. I know HTML and basic CSS and that’s it. I wouldn’t have known where to start without guides to explain what commands in Linux do and how they work together. Growing up with various versions of Windows and DOS, I’d still consider myself a novice computer user. I absolutely do know how to go into command line and make things happen. But I wouldn’t know where to start to make a program. It’s not part of my skill set.

    Most users are like that. They engage with only parts of a thing. It’s why so many people these days are computer illiterate due to the rise of smartphone usage and apps for everything.

    It’d be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn’t retract. A lot of people wouldn’t know where to start.

    I fully agree that other coders on the internet who frequent places like GitHub and make it a point to vet the code of other devs who provide their code for free probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed up behavior.

    But the way I see it there’s two different groups and they have very different views of this situation.

    The people who can’t code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.

    If those people choose to boycott, it’ll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.

    The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there’s something wrong.

    I suppose there’s a third subset of people in the case of FOSS work who can and often do help with projects and I wonder if that is better or worse for the reasons listed in the thread like poorly human written code and simple mistakes.

    Humans certainly aren’t infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren’t going to make something up unless they have an ulterior motive.

    Perhaps breaking things down into tiny chunks makes AI better or it’s outputs more usable. Maybe there’s a 'sweet spot".

    But I think people also get worried that what happens a lot is people who use AI often start to offload their own thinking onto it and that’s dangerous for many reasons.

    This person also admits to having depression. Depression can affect how you respond to information, how well you actually understand the information in front of you. It can make you forget things you know, or make things that much harder to recall.

    I know that from experience. So in this case does the AI have more potential to help or do harm?

    There’s a lot to this. I have not personally used Lutris, but before this happened I wouldn’t have thought twice about saying that I’ve heard good things about it if someone asked me for a Heroic launcher style software for Linux.

    But just like the Ladybird fork of Firefox I don’t know that I feel comfortable suggesting it if this is the state of things. For the same reason I don’t currently feel comfortable recommending Windows 11 or Chrome.

    There are so many sensitive things that OS’s, and web browsers handle that people take for granted. If nobody was sounding the alarm about those, I feel like nothing would get better. By contrast, Lutris isn’t swimming in a big pond of sensitive information but it is running on people’s hardware and they should have both the right to be informed and the right to choose.

  • adeoxymus@lemmy.world
    link
    fedilink
    English
    arrow-up
    134
    arrow-down
    39
    ·
    3 days ago

    Tbh I agree, if the code is appropriate why care if it’s generated by an LLM

    • Kowowow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      4
      ·
      3 days ago

      I want to one day make a game and there is no way I’m not prototyping it with llm code, though I would want to get things finalized by a real coder if I ever got the game finished but I’ve never made real progress on learning code even in school

    • drolex@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      5
      ·
      3 days ago
      • Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
      • Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
      • Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
    • The_Blinding_Eyes@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      While I know there is more nuance than this, but why should I spend any of my time on something, when you spent no time creating it? I know that applies more to the slop, but that’s where I am with most LLM generated stuff.

          • wholookshere@piefed.blahaj.zone
            link
            fedilink
            English
            arrow-up
            25
            arrow-down
            4
            ·
            3 days ago

            LLMs have stolen works from more than just artists.

            ALL of public repositories at a minimum have been used as training, regardless of licence. including licneses that require all dirivitive work be under the same license.

            so there’s more than just lutris stollen.

            • Lung@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              26
              ·
              3 days ago

              So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?

              • wholookshere@piefed.blahaj.zone
                link
                fedilink
                English
                arrow-up
                8
                ·
                3 days ago

                The fuck you talking about.

                Using a tool with billions of dollars behind it robinhood?

                How is stealing open source prihcets code regardless of license stealing fr corporation’s?

                • Lung@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  edit-2
                  3 days ago
                  • he’s not anthropic, and doesn’t have billions of dollars
                  • stealing from open source is not stealing, that’s the point of open source
                  • the argument above is that these models are allegedly trained “regardless of license” i.e. implying they are trained on non-oss code
          • prole@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            4
            ·
            3 days ago

            No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)

        • adeoxymus@lemmy.world
          link
          fedilink
          English
          arrow-up
          25
          arrow-down
          3
          ·
          3 days ago

          Tbh all programmers have been copy pasting from each other forever. The middle step of searching stack overflow or GitHub for the code you want is simply removed

          • galaxy_nova@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            3 days ago

            Exactly. If someone has already come up with an optimal solution why the hell would I reimplement it. My real problems are not with LLMs themselves but rather the sourcing of the training data and the power usage. If I could use an “ethically sourced” llm locally I’d be mostly happy. Ultimately LLMs are also only good for code specifically. Architecture or things that require a lot of thought like data pipelines I’ve found AI to be pretty garbage at when experimenting

        • Dremor@lemmy.worldM
          link
          fedilink
          English
          arrow-up
          38
          arrow-down
          4
          ·
          edit-2
          3 days ago

          Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value, the real value lies on how it is used.
          If I find an optimal way to do something, my only wish is to make it available to as much people as possible. For those who comes after.

            • Dremor@lemmy.worldM
              link
              fedilink
              English
              arrow-up
              1
              ·
              2 days ago

              That’s not how LLMs work either.

              An LLM had no knowledge, but has the statically probability of a token to follow another token, and given an overall context it create the statically most likely text.
              To calculate such probability as accurently as possible you need as much examples as possible, to determine how often word A follow word B. Thus the immense datasets required.
              Luckily for us programmers, computer programs are inherently statically similar, which makes LLMs quite good at it.
              Now, the programs it create aren’t perfect, but it allows to write long, boring code fast, and even explain it if you require it to. This way I’ve learned a lot of new things that I wouldn’t have unless I had the time and energy to screw around with my programs (which I wished I had, but don’t), or looked around Open Source programs source code, which would take years to an average human.

              Now there is the problem of the ethic use of AI, which is a whole other aspect. I use only local models, which I run on my own hardware (usually using Ollama, but I’m looking into NPU enabled alternatives).

    • criss_cross@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      3 days ago

      If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.

      I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

      • pivot_root@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        3 days ago

        It’s the same for me.

        I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

        I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

        This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.

    • RightHandOfIkaros@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      3 days ago

      Personally, I have never seen LLM generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I dont see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.

      However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. However, this programmer claims to have 30 years of experience, and if thats the case then he likely knows this and probably edits the LLM output code himself.

      As I have said before, Generative AI is a tool, like PhotoShop. I dont see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because its not being used by people that understand what the end result needs to be, and businesses will inevitably lose money.

      • P03 Locke@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 days ago

        However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure.

        That’s not completely true. Claude and some of the Chinese coding models have gotten a lot better at creating a good first pass.

        That’s also why I like tests. Just force the model to prove that it works.

        Oh, you built the thing and think it’s finished? Prove it. Go run it. Did it work? No? Then go fix the bugs. Does it compile now? Cool, run the unit test platform. Got more bugs? Fix them. Now, go write more unit tests to match the bugs you found. You keep running into the same coding issue? Go write some rules for me that tell yourself not to do that shit.

        I mean, I’ve been doing this programming shit for many decades, and even I’ve been caught by my overconfidence of trying to write some big project and thinking it’s just going to work the first time. No reason to think even a high-powered Claude thinking model is going to magically just write the whole thing bug-free.

    • deadcade@lemmy.deadca.de
      link
      fedilink
      English
      arrow-up
      87
      arrow-down
      43
      ·
      3 days ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • silver_wings_of_morning@feddit.dk
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        3 days ago

        Speaking only on the programming part of the slop machine, programmers typically copy code anyways. It’s not an ethical issue for a programmer using a tool that has been trained on other people’s “stolen” code.

      • bookmeat@fedinsfw.app
        link
        fedilink
        English
        arrow-up
        50
        arrow-down
        13
        ·
        3 days ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            5
            ·
            3 days ago

            The GPL license only exists because copyright fucked over the public contract that it promised to society: Copyrights are temporary and will be given back to public domain. Instead, shitheads like Mark Twain and Disney extended copyright to practically forever.

            • everett@lemmy.ml
              link
              fedilink
              English
              arrow-up
              1
              ·
              3 days ago

              I don’t understand your position here. If we went back to a more reasonable 7 or 14 year copyright term, how would that obviate the need for a license like the GPL, which permits instant use of code provided you share-alike? Those shorter copyright lengths would be pretty reasonable for books or movies, but would still suck for tech.

              • P03 Locke@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                2
                ·
                2 days ago

                We would be faaaaaar less hostile towards copyrights if we had a regular source of RECENT public domain coming out every year.

                I’m not saying that it would make GPL or OSS licenses useless. I’m just saying that the motivation and need for those licenses are because we don’t live in a society where freely available media and data are much more commonplace.

          • Luminous5481 "Lawless Heathen" [they/them]@anarchist.nexus
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            10
            ·
            edit-2
            3 days ago

            Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.

            • astro@leminal.space
              link
              fedilink
              English
              arrow-up
              6
              arrow-down
              1
              ·
              3 days ago

              It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.

              • obelisk_complex@piefed.ca
                link
                fedilink
                English
                arrow-up
                2
                ·
                2 days ago

                Precisely this, yes, well said. We all stand on the shoulders of those who came before us, one way or another.

        • wirelesswire@lemmy.zip
          link
          fedilink
          English
          arrow-up
          58
          arrow-down
          2
          ·
          3 days ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

          • lumpenproletariat@quokk.au
            link
            fedilink
            English
            arrow-up
            20
            arrow-down
            2
            ·
            3 days ago

            More reason to destroy copyright.

            Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.

        • Bronzebeard@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          2 days ago

          Yeah people making that argument were dumb. Copyright needs to be fixed, not abolished.

        • Beacon@fedia.io
          link
          fedilink
          arrow-up
          5
          arrow-down
          1
          ·
          3 days ago

          We weren’t all saying copyright altogether was unfair. In fact i think most of us have always said copyright law should exist, just that it shouldn’t be like ‘lifetime of the creator plus another 75 years after their death’. Copyright should be closer to how it was when the law was first started, which is something like 20 years.

          (And personally imo there should also be some nuanced exceptions too.)

      • Goretantath@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        15
        ·
        3 days ago

        Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.

      • Ganbat@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        1
        ·
        edit-2
        3 days ago

        If the developer isn’t able to keep up, they should look for (co-)maintainers.

        Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          2
          ·
          3 days ago

          I was under the impression that FOSS developers do it for the love of the game and not for monetary compensation. They’re literally putting the software out for free even though they don’t need to. They are going to be making this shit regardless.

          • tempest@lemmy.ca
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 days ago

            That is what they are technically doing but they often don’t always consider the consequences and often react poorly when they realize that an Amazon (it whatever) comes along and contributes nothing and monetizes their work while dumping the support and maintenance on them.

            That is the name of the game though if you use an MIT license.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 days ago

            At this point, teachers do it “for the love of the game”, but they still want to get paid more than minimum wage.

          • Ganbat@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            5
            ·
            3 days ago

            My point was “Help me with my passion project for nothing” is a much harder sell. “Just find some help,” is advice along the lines of “Just get in a plane and fly it.”

        • Vlyn@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Hey, if your project is important enough you might get your own Jia Tan (:

        • deadcade@lemmy.deadca.de
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 days ago

          Absolutely true, but there’s one clear and obvious way; drop support for the project yourself.

          If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

          FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

          • P03 Locke@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            3 days ago

            XKCD, of course

            If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

            No, they won’t. This line of thinking is how we got the above.

            Their line of work is thankless, and nobody wants to do a fucking thankless job, especially when the last maintainer was given a bunch of shit for it.

    • Dettweiler@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      40
      arrow-down
      1
      ·
      3 days ago

      It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate; then it’s all good.

      I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      3
      ·
      edit-2
      3 days ago

      “If” doing all the lifting here.

      If we ignore the mountain of evidence saying the opposite…

  • peacefulpixel@lemmy.world
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    14
    ·
    3 days ago

    if you’re going to stoop so low as to use fucking AI have the decency to show it so people with actual standards know to avoid it. but to be fair, a cat n mouse game of whether it was used or not would make me avoid it anyway

      • peacefulpixel@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        5
        ·
        edit-2
        3 days ago

        if you don’t want people to complain about you using AI, then don’t use AI. it’s easier than you think

        • 4am@lemmy.zip
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          3
          ·
          3 days ago

          This guy gets it.

          Be open about it. Many people will not like it. Many people will not trust your product any longer. You need to be willing to let those people go with grace, or else you’re already taking on a project you can’t handle.

  • darkangelazuarl@lemmy.world
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    5
    ·
    3 days ago

    If he’s using like an IDE and not vibe coding then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn’t even write this comment I just wrote without asking AI for assistance.

    • Auli@lemmy.ca
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      Just wait in a couple months he’ll have a teenage girl sentient AI.

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      4
      ·
      3 days ago

      Hell most people turn off their brains when the word gets mentioned at all. There’s plenty of basic shit an ai can do exactly as good as a human. But people hear AI and instantly become the equivalent of a shit eating insect.

      As long as your educated and experienced enough to know the limitations of your tools and use them accurately and correctly. Then AI is literally a non factor and about as likely to make an error as the dev themselves.

      The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it’s not fit for.

      Lutris doesn’t have that problem.

      So unless the guy suddenly goes full stupid and starts letting AI write everything the quality is not going to change. If anything it’s likely to improve as he off loads tedious small things to his more efficient tools.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        3 days ago

        The problem is I’ve seen people who supposedly have a brain start to use a high and over time they become increasingly confident in the AI’s abilities. Then they stop bothering to review the code.

        • P03 Locke@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          10
          arrow-down
          1
          ·
          2 days ago

          Then they stop bothering to review the code.

          This happens with human code reviews all the time.

          “I don’t really understand this code, but APPROVE!”

          “You need this thing merged today? APPROVE!”

          “This code is too long, and it’s almost my lunch break. APPROVE!”

          Over and over and over again. The worse thing you can insult me with is take code I spend days working on and approve it five minutes after I submitted it to you.

          • ÚwÙ-Passwort@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            2 days ago

            Do i feel bad about 10comments on my review, all with basic shit? Yes. Do i prefer that over Idiot managed to slip error surpression by review? Yes. At the end im Happy that someone looked deep enough to find my small stuff, so its not going to Master and life on for decades.

        • Auli@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 days ago

          That is the problem. They become dependent on it and it is human nature to be lazy. So eventually the “safe guards” well come off.

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      12
      ·
      3 days ago

      Yeah, that’s my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase, rather than extending the existing implementation.

      But that was pretty much always true. We still did not slap another implementation onto the side, because it’s horrible for maintenance, as you now need to always adjust two (or more) implementations when requirements change.
      And it’s horrible for debugging problems, because parts of the codebase will then behave subtly different from other parts. This also means usability is worse, as users expect consistency.

      And the worst part is that they don’t even have an answer to those concerns. They know that it’s going to bite us into the ass in the near future. They’re on a sugar high, because adding features is quick, while looking away from the codebase getting incredibly fat just as quickly.

      And when it comes to actually maintaining that generated code, they’ll be the hardest to motivate, because that isn’t as fun as just slapping a feature onto the side, nor do they feel responsible for the code, because they don’t know any better how it actually works. Nevermind that they’re also less sharp in general, because they’ve outsourced thinking.

  • Rentlar@lemmy.ca
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    8
    ·
    3 days ago

    I don’t mind if the developer adds AI-generated code, but if they mix it with their own work without appropriate attribution in a way that it could be considered all AI-generated, it may become un-copyrightable.

    • Pika@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      I don’t think in this case it will really matter since as it’s GPL anyway, so the worst case scenario is some private company takes the code and tries to use it without giving back but I can see the issue with other projects or if they wanted to use a more restrictive license

      • Rentlar@lemmy.ca
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 days ago

        It’s a mixed bag, I’m pretty neutral on it since it prevents copyleft licensing as much as copyright.

  • jumjummy@lemmy.world
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    23
    ·
    2 days ago

    The AI hate crowd on Lemmy is pretty insufferable. Same folks would be complaining about Cloud tech back in the day.

    Know the limits of AI and use it appropriately. Completely shunning AI is just silly.

  • Katana314@lemmy.world
    link
    fedilink
    English
    arrow-up
    50
    arrow-down
    4
    ·
    3 days ago

    To admit some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

    I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.

    Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

    I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

    • tb_@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      3 days ago

      It can be useful for generating switch cases and other such not-quite copy-paste work too. There are reasonable use cases… if you ignore how the training data was sourced.

      • ChocolateFrostedSugarBombs@lemmy.world
        link
        fedilink
        English
        arrow-up
        22
        arrow-down
        1
        ·
        3 days ago

        And the incredible amount of damage and destruction it’s still inflicting on the environment, society, and the economy.

        No amount of output is worth that cost, even if it was always accurate with no unethical training.

    • underisk@lemmy.ml
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 days ago

      Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives.

      I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is whatever value it may add to such a society might actually be completely negated by it’s resource costs. Is even “moderate” AI use ecologically or economically sustainable?

      • Katana314@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        For full disclosure, I remembered once someone claimed to me there are AI models that use much less power. But, to confirm that statement before replying, I looked up an investigation, and they say it’s much murkier, and that a company’s own claims are usually understating it. So, you’re on point.

      • utopiah@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        edit-2
        2 days ago

        Indeed, as they said in Italian “if my grandmother had wheels she would have been a bike” … the reasoning might be theoretically correct but in the current situation it’s just not the case.

  • aksdb@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    arrow-down
    20
    ·
    3 days ago

    Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome not the workflow. No one gave a damn how sloppy some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.

    Stop the damn polarization for completely irrelevant things; we get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.

    • TrickDacy@lemmy.world
      link
      fedilink
      English
      arrow-up
      30
      arrow-down
      2
      ·
      3 days ago

      Culture war? Lol

      Yes, the observation that software quality seems negatively impacted by ai use is not allowed to be expressed, because you don’t observe it.

      • aksdb@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        14
        ·
        3 days ago

        The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.

        • TrickDacy@lemmy.world
          link
          fedilink
          English
          arrow-up
          14
          arrow-down
          1
          ·
          3 days ago

          TIL fact-based opinions and the arguments that come from them are “culture wars”.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            5
            ·
            edit-2
            3 days ago

            I also brought facts and objective reasoning, yet I get downvoted.

            Yet anecdotal comments like “I tested it myself and it sucks” get upvoted; apparently simply because it fits the own worldview.

            That’s not polarization to you?

            • TrickDacy@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              3 days ago

              It’s for sure a polarizing topic, I just don’t see how it’s a culture war. “Sub-culture war” maybe?

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                3 days ago

                Ok maybe I mis-use the word. If that’s the case, sorry about that. But I hope my point comes across anyway: I really really dislike that the community (or multiple communities, even) get split between people who are ok with AI and who are against AI. This is, IMO, completely unnecessary. That doesn’t mean everyone should be ok with it, but we should not judge or condemn each other because of a different opinion on the matter.

                If you notice a project goes downhill, it’s fine to criticize the author (or the whole project) for the degredation in quality. If there are strong indicators that AI is involved, by all means leave a snarky remark about that while complaining. But ultimately it’s the fuckup of a human.

                • TrickDacy@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  3 days ago

                  What you’re taking issue with though is deeper than ai. It’s online discourse that is so rude and nuance-less.

                  In any case, this thread is full of people saying things like “that’s his right to do this but he communicated poorly about this” and getting piles of upvotes. So, yes ai is very polarizing in this corner of the Internet, but I think it’s much more at issue here that people don’t like his handling of it. I know that personally if it weren’t for that I probably would’ve thought “hmm sounds sketchy to use ai in a product thousands of people depend on” and kept scrolling. But no, he was a dick about it and is now hiding his use of ai moving forward. So the people who hate AI are extra pissed about it. Likely because they fear others will follow that lead and enshittify the software they currently enjoy.

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          2
          ·
          3 days ago

          As I’ve said in an earlier thread, AI over engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a very well studied phenomenon that has proven difficult to combat. People have very legit compliments about AI that you seem to be determined to dismiss as nothing more than a culture war.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            6
            ·
            3 days ago

            But those issues get determined by reviews and tests. You determined these issues and worked against them, why do you think the author of Lutris is not able to? Neither I nor the author says anyone should use AI produced results as is (i.e vibe code).

          • dev_null@lemmy.ml
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            3 days ago

            And why is that a problem?

            Google searches also usually generate mostly useless results, which is impossible to combat. Thankfully the person doing the search knows what they are looking for, can try different solutions, and learn from multiple results to get to a working solution.

            Why do you consider AI different? Nobody is expecting it always give correct solutions, just like nobody is expecting Googling something to always give the correct solution.

            I’m not saying AI is useful, but I’m saying that a tool being fallible doesn’t make it useless. So I’m wondering why do you consider AI different? If Googling is fine even though you need to checks multiple results before finding something useful, why is searching with AI held to a higher standard? Genuine question. Because I agree with your critique of AI, I just don’t agree the critique means no one should ever use it. There are much less reliable tools than AI, that are still useful at times.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            3 days ago

            The way flat earthers act? Yes. They treat it as a culture war. Just like anti-vaxers.

    • Tony Bark@pawb.socialOP
      link
      fedilink
      English
      arrow-up
      23
      arrow-down
      4
      ·
      3 days ago

      AI has caused plenty of headaches for developers. This isn’t some culture war shit.

      • Voroxpete@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 days ago

        But that kind of proves their point, right?

        Yes, a lot of projects have had issues with contributers who push unreviewed AI slop that they don’t understand, ultimately creating more work for the project. Or with avalanches of AI code review bug reports that do nothing to help. But that’s not what’s happening here.

        In this case, the main developer of the project is choosing to use AI, on their own terms, because they find it helpful, and people are giving them shit for it. It’s their project and they feel this technology is beneficial. Isn’t that their call to make? Why are people treating the former and the latter as completely interchangeable scenarios when they’re clearly not? It kind of does suggest that people are coming at this from a more ideological rather than rational perspective.

      • aksdb@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        11
        ·
        edit-2
        3 days ago

        That is for each developer to decide, if they can handle it or not.

        As I said: judge the result, not the workflow.

        • prole@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          1
          ·
          3 days ago

          judge the result, not the workflow.

          This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            3 days ago

            Depends. If you are generally careful about what products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter if they suck because the author mis-used coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?

            You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long long before we had coding agents. I’ve used software where I later saw the code and was suprised this ever worked. Hell, I’ve found old code of myself where I wondered why this ever worked and what the fuck I’ve been smoking back then.

            It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someones work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by produce I don’t mean the final binary, but their code.

          • Voroxpete@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            3 days ago

            The thing is, you’re conflating ethical and practical concerns here. The commenter you’re responding to is clearly talking about the practical aspects of using AI tools.

            If you have a fundamental moral issue with AI that is entirely independent of how efficacious it is, that’s fine. That’s a completely reasonable position to hold. But don’t fall into the trap of wanting every use of genAI to be impractical because it aligns with your morality to feel that way.

            If this is an ethical stance that you truly hold, you should be willing to believe that using these tools is bad even when they’re effective. But a lot of people instead have to insist that every use of AI is impractical, in the face of any evidence to the contrary, because they’ve talked themselves into believing that on some fundamental level. Like “If AI is ever useful, that means I’m wrong about it being immoral.”

        • Tony Bark@pawb.socialOP
          link
          fedilink
          English
          arrow-up
          16
          arrow-down
          4
          ·
          3 days ago

          As I said: judge the result, not the workflow.

          I’ve tested AI myself and seen the results. I’ll judge how I see fit.

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            arrow-down
            8
            ·
            3 days ago

            I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.

            • atrielienz@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              3 days ago

              There’s a problem with that. The vast majority of Linux users are probably more tech savvy than average but I’d wager not all of them or even the vast majority have the skills to vet the code.

              Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak on move on with their lives.

              It appears (from what I’ve read which isn’t necessarily the end all be all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.

              My understanding is that it’s harder to get AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and or do so in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general from my understanding and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

              You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.

              I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.

              I understand that they are upset about the backlash. But that was a very much foreseeable consequence of the credits they gave the AI (a choice they made), and honestly the use of AI (which might have been called out later on if they hadn’t credited it).

              They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.

              There’s also the fact that AI is something that a lot of people in the Linux community at large seem to already be boycotting and boycotting derivatives of it make sense.

              Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                ·
                3 days ago

                Thanks for that long answer. I agree completely with the second half of it. I also agree with most of the first half of it, but I have to add a remark to it:

                My understanding is that it’s harder to get AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and or do so in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general from my understanding and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

                That is mostly true, but also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue relatively narrow scoped prompts that yield small amounts of changes/code which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to entangle.

                And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from flathub without looking at the source (not even talking about chain of trust for the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They also made and make ugly mistakes that are hard to spot and due to bad code quality hard to review.

                The main risk of agents is, that they also increase the speed of these developers which means they pump out even more bad code. But the underlying issue existed before and agents don’t automatically mean something is bad. That would also be dangerous to believe that, because that might enforce even more the feeling of security when using a piece of code that was (likely) written without any AI influence. But that’s just not true; this code could be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.

                • Voroxpete@sh.itjust.works
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  3 days ago

                  Frankly, most AI generated code is often easier to review, thanks to a combination of standardized practices (LLMs regress to the mean by design) and a somewhat overly enthusiastic approach to commenting and segmented layouts.

  • Captain_Stupid@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    10
    ·
    2 days ago

    To be honest I don’t give a shit if a dev uses AI or not. As long as the code does what it is suppost to. In my personal experience AI, while still not anywhere near to capabilitys of a decent dev, can sometimes find and fix errors that I would have missed.

    • neomachino@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 days ago

      I use AI to look at my git diffs before I push them up. I use a local LLM and specifically instruct it to look for typos, left over debug prints, or stupid logic.

      It’s caught quite a few stupid things that I’m apparently blind to and my coworker appreciates it.

      That’s not to say I’d sit back and let it write whole features, pushing it right to master after a short skim… Like someone else I know has started doing. But it can absolutely have a useful purpose.

    • RagingRobot@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      14
      ·
      2 days ago

      When we write code we use a compiler to translate it into other code that the computer can understand. Now we tell AI to write code that is then compiled into other code that the computer can understand.

      It seems very similar at the end of the day. The problem is it makes the process easier. That’s what everyone is so upset about. And that’s only an issue because we don’t feel special anymore. It sucks but I’m sure it will pass. Even if it takes a generation

      • Evotech@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 day ago

        The ai to do tests and boilerplate was like AI 3 months ago. Now just genuinely oneshots complex implementations

  • Skankhunt420@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    5
    ·
    2 days ago

    Open source stuff is awesome and I really like people improving Linux in their spare time

    But, to do it this way is basically saying “fuck you” to the community which is fucked up.

    Could have talked about how AI helps him or how he uses it for templates or whatever and damn even if I didn’t agree with those points either that’s a lot better than being like “alright good luck finding it now then bitch

    I wouldn’t mess with anything this guy does anymore after this.

  • Retail4068@lemmy.world
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    8
    ·
    3 days ago

    You’re going to screech at this guy contributing his time and code, who in all likelihood will pump out more features. Absurd. Prejudice and fear has blinded a significant portion of the foss community

  • magikmw@piefed.social
    link
    fedilink
    English
    arrow-up
    65
    arrow-down
    10
    ·
    3 days ago

    Worth mentioning that the user that started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised lutris’ maintainer went off like they did, the issue is not made with good faith.

  • TheSeveralJourneysOfReemus@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    ·
    2 days ago

    the temptation of using claude code is probably higher than it looks like for a single dev, I think. Hey, in the end one can just have this in their IDE and essentially have your own unpaid intern. It’s a fairly new situation.

    • Reginald_T_Biter@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      2
      ·
      2 days ago

      Your own unpaid intern, who is paid by someone else, employed by someone else, and who has access to all your repos secrets and business logic.

      Yeah, nah. I think id rather not train my competitors.

    • pheelicks@lemmy.zip
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 days ago

      Yup, single dev here, can confirm. I’m coding for a living but am mediocre at it since I jumped from civil engineering to something I kind of enjoy. To me coding assistants are a huge help. Finding solutions, discussing ideas, writing down implementation plans, can’t do all that stuff with my colleagues since they have no clue about my work.

      • Auli@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        So your not that food and using AI to make your products better so you can sell your coding abilities. Or your better then you think you are.

        • TheSeveralJourneysOfReemus@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          2 days ago

          if you’re interested i used AI to learn a library to make my link-scraping script and return me only the open access pdf from Google scholar. yeah. it is virtually useless because i need to check all the same. But boy did it make me feel smart.

          • doing this might be slightly a lot out of the TOS tho.
  • lohky@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    2
    ·
    2 days ago

    There hasn’t been anything I haven’t been able to run between Heroic and Steam. I didn’t like using lutris anyway. ¯\_(ツ)_/¯