They are free to do what they want to on their repo.
We are free to fork if need arises.
Personally, I don’t like projects not disclosing what AI has made. And most of Claude was trained on stolen code. It’s against the open source license they themselves use https://github.com/lutris/lutris/blob/master/LICENSE
But almost no one actually enforces the license until the big companies show up. I hope they change their minds, but until then, I’m going to stop using and contributing to it for a while.
Does anyone know which was the last version before the dev started shoveling slop into the repo? The utter dipshit invalidated even the ability to license it after that point; those releases are wholly worthless.
In 5 years from now there are going to be totally coevolved but unique seed-lines for software: the ones with AI, and the ones without. How can you distinguish them? Did the humans who said they wrote them really write them? These problems aside, I suspect it will be forced to happen just from a security viewpoint; big companies won’t be able to get any kind of insurance anymore running AI-infested code.
It’s like low-background steel that has to be recovered from pre-nuclear sunken warships.
That last bit needs to hit sooner.
Fork it and call it Ludique, meaning fun in French.
It’s more nuanced than that. Claude is made from stolen code, but it generally isn’t going to copy its training data verbatim (unless specifically told to). So copyright-wise it’s more grey than strictly wrong. And though Claude is made from stolen code, the Lutris developers are writing something they give away freely to the world; they are not profiting from the stolen code.
Does this make it OK? I don’t know. What if they used an open-weights model rather than a closed one? Would that be more acceptable?
Tell me to not use your software without telling me to not use your software.
A shame that we’re maybe overlooking how the person who wrote the issue and the maintainer agree on fundamental politics. Capitalism bad. Fascism bad.
I hope someone did a Mutris fork.
I’ve been using Lutris for several years, but after reading about the LLM additions, I’ve removed it from my system and am now solely using Heroic for my non-Steam games.
Anyone else out here searching wtf is lutris
It used to be more popular when Linux gaming was harder.
It’s still very useful for retro gaming on Linux and for running pirated stuff.
Good for him. Y’all are insanely prejudiced and have lost the thread.
This is every thread on the topic of AI.
It’s toxic comments and spam-downvoting of disagreement. It’s low-effort performative ‘activism’ by people, most of whom are too lazy to even type a comment.
Anyone participating in the harassment of an open source dev needs to fuck right off. Their opinion about AI doesn’t give them license to be toxic assholes.
But it’s so much easier for me to complain and pile onto burned-out open source maintainers than to be part of the solution
/s
And, as with all hot-button topics, there are certain to be a large number of bots amping up the outrage and confusion.
AI is for losers and suckers
Holy fuck, people dunking on guy who works for free.
If you don’t like AI commits, write your own
So any disagreement should be met with immediate forking?
No raising of grievances, just silence and then forking?
Or is it only silence and forking for open source?
As soon as anyone is paid then comments are allowed ?
Kinda feels like a reductive half-answer, but you do you.
If you are going to harass the guy? Then yes just STFU instead and fork the repo. You people are insufferable.
If you saw harassment in that first exchange then whatever you mean by “you people” is a group I’m fine with being in.
That’s some thin skin.

Great…
So how’s Heroic for playing older PC games installed from discs?
It has a ‘run installer’ option during setup (and in that game’s options section afterwards).
Lemmings being outraged is hilarious to me. We’re just gonna pretend the pre-LLM time period didn’t have people mindlessly copy paste code into all of our known projects? At least with LLMs you can keep asking questions and requesting sources for each prompt response, unlike in the past. In the past you’d just get rude remarks from someone who ultimately didn’t help you.
if you just straight up copypasta’d code before AI you were just as big of an idiot as these sloppers are.
We’re just gonna pretend the pre-LLM time period didn’t have people mindlessly copy paste code into all of our known projects?
No, IEatDaFeesh, that’s something that first-years do. Are you a first-year?
Ohh, so you invent your own code/algorithms for every project? I’m assuming someone of your caliber doesn’t ever need to install packages with functions other people made (gasp), because that would be beneath you, right? Even copying the code straight from the documentation is an insult to our intelligence! Developers who use LLMs as a search engine to find documentation are morally wrong, because that leads to copying code from the documentation! You’re right, only first-years would copy code outlined in the documentation!
You’ve opened my eyes, because now I see that even using the base functions of a language is technically copying code from the creators of said language. I realize that I never wrote those sorting functions in the backend, so I’m committing computer-science sin!
Every library my team has ever included in a project has gone through rounds of evaluation to make sure it is 1. publicly trusted, 2. well tested, and 3. still in active development. I have no idea what this has to do with mindlessly copying code.
so you invent your own code/algorithms for every project?
If you’re going to submit an algorithm that isn’t maintained and you don’t know how it works, I’m not merging your pull request.
If it’s good, I don’t care. Those people know what they’re doing.
That’s a weird way to run a community facing project, if you want to engage the community that is.
If you treat it like your own personal hobby, you can do whatever you like.
Faugus it is then!
Lutris has been shit for months now - I guess I just figured out why.
I don’t think people realize how effective current-gen AI is, and are instead drawing opinions from years-old ChatGPT or Google “AI Overviews” or whatever they call it. If you know what you’re doing, which seems self-evident here, AI tools can massively expand your software engineering productivity. AI “coauthoring” I always read as a marketing move; ultimately the submitting human is and should be responsible for the content. You don’t and can’t know what process they used to make it, so evaluate it on its own merits.
There’s a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism. If amoral corporations are the only ones using these tools, and open source “stays pure”, all we get is even more power concentrating with the corporations. This isn’t Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn’t out of moral restraint, the outcome is the amoral side winning.
Also, on a technical note, the public-domain/non-copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship, true, but that’s a pretty low floor. Basically, you can’t copyright a work that’s the result of a single prompt. Effective use of AI in non-trivial codebases involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and iterating on outputs. Lutris is a large engineering project with a lot of human authorship over time; anything the author does with AI at this point is going to be substantially human-authored.
Also, Open Claw isn’t the apocalyptic vulnerability it’s reported as being. Any model with search and browser access has a non-zero chance of prompt-injection compromise, absolutely. But “uses Open Claw, therefore vulnerable” isn’t a sound jump to make; Open Claw doesn’t even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn’t the old days when you could message “ignore previous instructions” and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email account and secrets. I don’t recall for sure if it was using Open Claw specifically, but it was that style of harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
TL;DR: it’s coming for us all; sticking your head in the sand isn’t going to save you.
But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism
There is no contest going on. No competition. There’s no rush for productivity.
You do not NEED to use genAI.
Check out Asahi Linux for a great example of a good AI policy:
https://asahilinux.org/docs/project/policies/slop/
It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.
The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the “help” of a Slop Generator will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.
I didn’t read it all, but I believe the anti-AI stance should be about principles or politics (what would be a better word?), not about how incapable AI currently is, because it will get better.
I use AI tools all the time. They work well under supervision for things that should be relatively trivial but would still take a human a while. They are also nowhere near good enough for unsupervised programming. A lot of the time they can’t even get the commit messages right, and misleading commit messages are worse than lazy commit messages. See this official OpenClaw Nix repo: as you can see, it also struggles with tasks as basic as making a readable README.md file, which convinced me that the entire OpenClaw project is snake oil. As for prompt injection vulnerabilities, even their own project has that:
- Check if Determinate Nix is installed (if not, install it)
-
LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.
-
Software can coexist. One application won’t kill another just because its developers can put out more code per hour. If it were otherwise, Linux wouldn’t exist.
Electricity isn’t a vital resource either; humans have lived without it for most of our existence.
-