A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



so you draw the line at stealing artists work, but not programmers work?
Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value; the real value lies in how it is used.
If I find an optimal way to do something, my only wish is to make it available to as many people as possible. For those who come after.
Sure, but that’s just your view.
And also not how LLMs work.
They gobble up everything and spit out unreadable code. That’s not learning.
Tbh all programmers have been copy pasting from each other forever. The middle step of searching stack overflow or GitHub for the code you want is simply removed
That’s not what an LLM is doing is it.
Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problems are not with LLMs themselves but rather with the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally, I’d be mostly happy. Ultimately, LLMs are only good for code specifically. Architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage at when experimenting.
Lutris is GPL-licenced, so isn’t it the opposite of stealing?
No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)
LLMs have stolen works from more than just artists.
ALL public repositories, at a minimum, have been used as training data regardless of license. Including licenses that require all derivative work to be under the same license.
So there’s more than just Lutris that’s been stolen.
So he’s a badass Robin Hood pirate that steals code from corporations and gives it to the people?
The fuck you talking about.
Using a tool with billions of dollars behind it is Robin Hood?
How is stealing open source projects’ code, regardless of license, stealing from corporations?
he’s using a tool that took billions in funding.
that’s not how open source licensing works.
no, I’m saying some licenses restrict LLM usage in that derivative work must also be licensed under the same license. Using that work as a starting point requires you to also open up that portion of code.
side note:
https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/
why does AI have access to private code?