A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • ipkpjersi@lemmy.ml · 5 hours ago

    Honestly, unfortunately, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.

    AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

    • InternetCitizen2@lemmy.world · 14 minutes ago

      but that’s mostly because of how companies abuse it and less because of the technology itself.

      In any other context this is tech to help us in our post-scarcity future.

    • Auli@lemmy.ca · 1 hour ago

      If he is using it to clear a backlog because he is swamped, do you honestly think he is verifying the code?

    • Tony Bark@pawb.social (OP) · 3 hours ago

      By telling people he expected this and obfuscating the authorship afterwards, he is doing damage in the form of eroding trust for a tool that has otherwise proven reliable.

    • Voroxpete@sh.itjust.works · 5 hours ago

      As I’ve said elsewhere here, I really don’t have a problem with people holding a moral stance against the use of genAI. It’s fine to just say “However useful this might be, I don’t want to see it used because I think it has too many ethical costs/consequences.” But blanket accusing all work that involved genAI in any capacity of being “slop” isn’t holding a moral stance, it’s demanding that reality conform to your beliefs; “I hate this, therefore it must be terrible in every respect.”

      If you truly hold a well-founded ethical stance against the use of genAI, that stance shouldn’t be threatened by people doing good and effective work with genAI, because its effectiveness should have nothing to do with your objections.