A user asked on the official Lutris GitHub two weeks ago “is lutris slop now”, noting an increasing number of “LLM generated commits”. The Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in catching up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. It was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.


Sure, but that’s just your view.
And also not how LLMs work.
They gobble up everything and produce unreadable code. That’s not learning.
That’s not how LLMs work either.
An LLM has no knowledge; it has the statistical probability of one token following another, and given an overall context it generates the statistically most likely text.
To calculate those probabilities as accurately as possible you need as many examples as possible, to determine how often word A follows word B. Hence the immense datasets required.
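To make the token-probability idea concrete, here is a minimal sketch of a bigram model in Python. The toy corpus and the function names are purely illustrative assumptions; real LLMs use neural networks over learned token embeddings rather than raw counts, but the “how often does B follow A” intuition is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly larger datasets,
# which is why they need so many examples of what follows what.
corpus = "the cat sat on the mat the cat ate the".split()

# Count how often word B follows word A.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_token(word):
    """Pick a continuation of `word`, weighted by observed frequency."""
    tokens, weights = zip(*follows[word].items())
    return random.choices(tokens, weights=weights)[0]

# Generate text by repeatedly sampling a statistically likely next token.
word = "the"
out = [word]
for _ in range(8):
    word = next_token(word)
    out.append(word)
print(" ".join(out))
```

With more data the counts become better estimates, which is exactly why the datasets have to be immense.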
Luckily for us programmers, computer programs are inherently statistically similar, which makes LLMs quite good at writing them.
Now, the programs it creates aren’t perfect, but it lets you write long, boring code fast, and it can even explain the code if you ask it to. This way I’ve learned a lot of new things that I wouldn’t have otherwise, unless I had the time and energy to screw around with my own programs (which I wish I had, but don’t) or to dig through open-source codebases, which would take an average human years.
Then there is the problem of the ethical use of AI, which is a whole other aspect. I only use local models, which I run on my own hardware (usually through Ollama, but I’m looking into NPU-enabled alternatives).
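As an illustration of that local setup, here is a minimal sketch that queries a locally running Ollama instance over its default REST endpoint (http://localhost:11434). The model name llama3 and the prompt are assumptions; substitute whatever model you have pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt, model="llama3"):  # model name is an assumption
    """Send a prompt to a locally running Ollama instance and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of streamed chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Explain this code: for a, b in zip(xs, xs[1:]): ..."))
```

The same endpoint works for asking a model to explain code it just wrote, which is the learning workflow described above.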