• menas@lemmy.wtf
    link
    fedilink
    English
    arrow-up
    1
    ·
    3 minutes ago

    Ecological, social, and economic issues, and the answer is on the legal side. FOSS as usual, I guess.

  • 404found@lemmy.zip
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 hour ago

    I don’t understand the full picture here, but the person who is submitting AI slop will be held accountable. Never a company.

    So if a company is pushing staff to use AI to complete projects faster and their code ends up being AI slop when submitted, only the person working for the company will be held responsible.

    I’m not sure what the repercussions are here, but hopefully it’s not a large fine. Those fines could add up quickly if the person is submitting code all the time and doesn’t know they are messing up.

    • Wispy2891@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      44 minutes ago

      Which fines? This is just an internal rule in an organization.

      At most, someone can rightfully be banned from contributing.

      If someone is contributing code they don’t really understand, then they shouldn’t contribute.

      • 404found@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        41 minutes ago

        Ah okay, got it now. Thanks. I didn’t understand it all the way; my comment is irrelevant.

  • catlover@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    16
    ·
    2 hours ago

    I’d still be highly sceptical about pull requests with code created by LLMs. Personally, what I’ve noticed is that the author of such a PR doesn’t even read the code, and I have to go through all the slop.

    • kcuf@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 hour ago

      Ya, I’m finding myself being the bad code generator at work. I’m scattered across so many things at the moment due to attrition, and AI can do a lot of the boilerplate work, but it’s such a time and energy sink to fully review what it generates. I’ve found basic things I missed that others catch, which shows the sloppiness. I usually take pride in my code, but I have no attachment to what’s generated, and that’s exposing issues with trying to scale out using this.

      • Repple (she/her)@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        21 minutes ago

        Same. There’s reduction in workforce, pressure to move faster, and no good way to do that without sloppiness. I have never been this down on the industry before; it was never great, but now it’s terrible.

    • terabyterex@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      3
      ·
      edit-2
      9 minutes ago

      Did we all forget about stackoverflow?

      People blindly copy/pasted from there all the time.

      • Railcar8095@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        12 seconds ago

        A couple of years back I got a PR at work that used a block of code that read a CSV, used some stream method to convert it to binary, and then fed that to pandas to make a dataframe. I don’t remember the exact steps it did, but it was just crazy when pd.read_csv existed.

        On a hunch I pasted the code into Google and found an exact match on Stack Overflow, for a very weird use case on a very early version of pandas.

        I’m lucky: if people send obvious shit at work I can just cc their manager. But I feel for the volunteers at large FOSS projects, or even the paid employees.
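        For illustration, the roundabout pattern described above probably looked something like this sketch (reconstructed from the description, not the actual PR; the sample data and column names are invented):

        ```python
        import io

        import pandas as pd

        csv_text = "name,score\nada,1\ngrace,2\n"

        # The copied Stack Overflow pattern: encode the text, wrap it in a
        # binary buffer, and only then hand it to pandas.
        binary_buf = io.BytesIO(csv_text.encode("utf-8"))
        df_roundabout = pd.read_csv(binary_buf)

        # The direct route that has existed all along: read_csv accepts a
        # file path or any text buffer as-is.
        df_direct = pd.read_csv(io.StringIO(csv_text))

        # Both routes produce the same dataframe.
        assert df_roundabout.equals(df_direct)
        ```

        The roundabout version isn’t wrong; it’s just needless indirection, exactly the kind of thing a reviewer only catches by actually reading the diff.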

  • Blue_Morpho@lemmy.world
    link
    fedilink
    English
    arrow-up
    90
    ·
    4 hours ago

    The title of the article is so wrong that it makes it clickbait.

    There is no “yes to copilot”

    It is only a formalization of what Linus said before: all AI is fine, but a human is ultimately responsible.

    " AI agents cannot use the legally binding “Signed-off-by” tag, requiring instead a new “Assisted-by” tag for transparency"

    The only mention of copilot was this:

    “developers using Copilot or ChatGPT can’t genuinely guarantee the provenance of what they are submitting”

    This remains a problem that the new guidelines don’t resolve: even using AI as a tool and having a human review it still means the code the LLM output could have come from non-GPL sources.

    • marlowe221@lemmy.world
      link
      fedilink
      English
      arrow-up
      20
      ·
      edit-2
      3 hours ago

      Yeah, that’s also my question. Partially because I am a former-lawyer-turned-software-developer… but, yeah. How are the kernel maintainers supposed to evaluate whether a particular PR contains non-GPL code?

      Granted, this was potentially an issue before LLMs too, but nowhere near the scale it will be now.

      (In the interests of full disclosure, my legal career had nothing to do with IP law or software licensing - I did public interest law).

      • Alex@lemmy.ml
        link
        fedilink
        English
        arrow-up
        6
        ·
        3 hours ago

        They don’t, just like they don’t with human-submitted stuff. The point of Signed-off-by is that the author attests they have the rights to submit the code.

    • anarchiddy@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 hours ago

      Yup.

      I would also just point out that this doesn’t change the Linux kernel’s legal exposure to infringing submissions from before the advent of LLMs.

  • theherk@lemmy.world
    link
    fedilink
    English
    arrow-up
    99
    arrow-down
    3
    ·
    5 hours ago

    Seems like a reasonable approach. Make people accountable for the code they submit, no matter the tools used.

    • ell1e@leminal.space
      link
      fedilink
      English
      arrow-up
      15
      arrow-down
      1
      ·
      4 hours ago

      If the accountability cannot be practically fulfilled, the reasonable policy becomes a ban.

      What good is it to say “oh yeah you can submit LLM code, if you agree to be sued for it later instead of us”? I’m not a lawyer and this isn’t legal advice, but sometimes I feel like that’s what the Linux Foundation policy says.

      • ViatorOmnium@piefed.social
        link
        fedilink
        English
        arrow-up
        22
        ·
        3 hours ago

        But this was already the case. When someone submitted code to Linux they always had to assume responsibility for the legality of the submitted code, that’s one of the points of mandatory Signed-off-by.

        • badgermurphy@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          8
          ·
          2 hours ago

          But now, even the person submitting the license-breaching content may be unaware that they are doing that, so the problem is surely worse now that contributors can easily unwittingly be on the wrong side of the law.

          • Traister101@lemmy.today
            link
            fedilink
            English
            arrow-up
            14
            ·
            2 hours ago

            That’s their problem. If they are using an LLM and cannot verify the output they shouldn’t be using an LLM

            • badgermurphy@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              7
              ·
              2 hours ago

              It is their problem until the second they submit it; then it is the project’s problem. You can lay the blame for the bad actions wherever you want, but the reality is that the work of verifying the legality and validity of these submissions is being abdicated, crippling projects under increased workloads as they go through ever more submissions that amount to junk.

              What is the solution for that? The fact that it is the fault of the lazy submitter doesn’t clean up the mess they left.

              • Traister101@lemmy.today
                link
                fedilink
                English
                arrow-up
                4
                ·
                1 hour ago

                Frankly, I expect the kernel dudes to be pretty good about this; their style guides alone are quite strict, and any funny business in a PR that isn’t marked correctly is, I think, likely a ban from making PRs at all. How it worked beforehand, as already stated by others, is that the author says “I promise this follows the rules” and that’s basically the end of it. Giving an official avenue for generated code is a great way to reduce the negatives of what will happen anyway. We know this from decades of real-life experience trying to ban things like alcohol or drugs: time after time, providing a legal avenue with some rules makes things safer. Why wouldn’t we see a similar effect here?

    • hperrin@lemmy.ca
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      2 hours ago

      No, it’s not a reasonable approach. Making people the authors of the code they submit is reasonable, because then it can be released under the GPL. AI-generated code is public domain.

      • theherk@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        2
        ·
        2 hours ago

        I suppose there should be no code generators, assemblers, compilers, linkers, or LSPs then either? Just etching 1s and 0s?

      • ziproot@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        29 minutes ago

        Isn’t that the rule? The author has to be a human?

        The new guidelines mandate that AI agents cannot use the legally binding “Signed-off-by” tag, requiring instead a new “Assisted-by” tag for transparency. Ultimately, the policy legally anchors every single line of AI-generated code and any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
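        For concreteness, the trailer scheme the quoted policy describes would presumably render in a commit message something like this (only the tag names come from the article; the subject line, placeholder tool name, and exact trailer formatting are illustrative guesses):

        ```text
        subsystem: fix hypothetical off-by-one in example_helper()

        Normal commit message body describing the change.

        Assisted-by: <AI tool and model used>
        Signed-off-by: Jane Developer <jane@example.org>
        ```

        The human’s Signed-off-by remains the legally meaningful certification under the Developer Certificate of Origin; the Assisted-by line only discloses that a tool helped produce the patch.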

  • hperrin@lemmy.ca
    link
    fedilink
    English
    arrow-up
    7
    arrow-down
    2
    ·
    2 hours ago

    This is a bad move. The GPL license cannot be enforced on AI generated code.

    • truthfultemporarily@feddit.org
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      7
      ·
      4 hours ago

      Where does slop start? If you use autocomplete and it is just adding a semicolon or some braces, is it slop? Is producing, character by character, what you would have written yourself slop?

      How about using it for debugging?

      • hperrin@lemmy.ca
        link
        fedilink
        English
        arrow-up
        6
        ·
        2 hours ago

        You don’t need AI to autocomplete code. We’ve had autocomplete for over 30 years.

      • ell1e@leminal.space
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        3
        ·
        edit-2
        4 hours ago

        If you would have written it yourself the same way, why not write it yourself? (And there was autocomplete before the age of LLMs, anyway.)

        The big problems start with situations where it doesn’t match what you would have written, but rather what somebody else has written, character by character.

      • badgermurphy@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 hours ago

        There’s the rub. When establishing laws and guidelines, every term must be explicitly defined. Lack of specificity in these definitions is where bad-faith actors hide their misdeeds: technically obeying the letter of the law due to its vagueness while flagrantly violating its spirit.

        It’s why, today in the USA, corporations are legally people when it’s convenient and not when it’s not, and the expenditure of money is protected “free speech”.

      • BoxOfFeet@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        4
        ·
        4 hours ago

        To me, it starts at anything beyond correcting spelling for individual words or adding punctuation. I don’t even want it suggesting quick reply phrases.

        Is producing character by character what you would have written yourself slop?

        Yes.

      • FauxLiving@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        13
        ·
        edit-2
        3 hours ago

        There is a certain brand of user (who may or may not be human) who draws the Venn diagram of “AI slop” and “AI output” as a single circle.

        They’ve taken the extremist position that AI should be uninvented, that any use of AI is the worst thing that could possibly happen to any project, and they’ll have an entire grab bag of misinformation-based memes to shotgun at you. Engaging with these people is about as productive as trying to convince a vaccine denier that vaccines don’t cause autism.

        I’m not saying that the user you replied to believes this, but the comment they wrote is indistinguishable from the comments of such a user.

        e: I’d also like to point out that these users are very much attracted to low-effort activism. This is why you see comments like mine being heavily downvoted without many actual replies. They want to influence the discussion but don’t have the capability or motivation to step into the ring, so to speak, and defend their opinions.

        • ell1e@leminal.space
          link
          fedilink
          English
          arrow-up
          10
          arrow-down
          1
          ·
          edit-2
          4 hours ago

          It’s less extremist if you look at how easily these LLMs will just plagiarize 1:1, apparently:

          https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567

          Some see “AI slop” as “identified by the immediate problems of it that I can identify right away”.

          Many others see “AI slop” as bringing many more problems beyond the immediate ones. Then seeing LLM output as anything but slop becomes difficult.

          • FauxLiving@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            3
            ·
            3 hours ago

            It’s extremist to take the fact that you CAN get plagiaristic output and to conclude that all other output is somehow tainted.

             You personally CAN quote copyrighted music and screenplays. If you’re an artist, then you also CAN produce copyright-violating works. None of these facts taints any of the other things you produce that are not copyright-infringing or plagiarized.

            In this situation, and in the current legal environment, the responsibility to not produce illegal and unlicensed code is on the human. The fact that the tool that they use has the capability to break the law does not mean that everything generated by it is tainted.

            Photoshop can be used to plagiarize and violate copyright too. It would be just as absurd to declare all images created with Photoshop are somehow suspect or unusable because of the capability of the tool to violate copyright laws.

            The fact that AI can, when specifically prompted, produce memorized segments of the training data has essentially no legal weight in any of the cases where it has been argued. It is a fact that is of interest to scientists who study how AI represent knowledge internally and not any kind of foundation for a legal argument against the use of AI.

            • badgermurphy@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              2 hours ago

              Sure, but if they can be demonstrated to ever plagiarize without attribution, and the default user behavior is to pencil-whip the output (which it is), then it becomes statistically certain that users are unwittingly plagiarizing other works.

              It’s like using a tool that usually bakes cookies but every once in a great while knocks over the building it’s in. It almost never does that, though.

              • FauxLiving@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                1 hour ago

                Plagiarism and copyright violation are two different things: one is an ethical matter and the other a legal one.

                Copyright has a body of case law which helps determine when a work significantly infringes on the copyrighted work of another. Plagiarism has no body of law at all, it is an ethical construct and not a legal one.

                You can plagiarize something that has no copyright protection and you can infringe on copyright protection without plagiarizing. They’re not interchangeable concepts.

                In your example, some institutions would not allow such a device to operate on their property but it would not be illegal to operate and the liability would be on the person and not on the oven.

                To further strain the metaphor, Linus is saying that you can use (possibly) exploding ovens, because he isn’t taking a moral stance on the topic, but you are responsible for the damages if they cause any because the legal systems require that this be the case.

        • hperrin@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          2 hours ago

          According to the US Copyright Office, AI generated material cannot be copyrighted (unless of course it’s plagiarized copyrighted code). That’s reason enough to leave it out of the kernel. If the kernel’s license becomes unenforceable because of public domain code, the kernel is tainted.

          • FauxLiving@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 hour ago

             Copyright and license terms are two different categories of law. Copyright is an idea created and enforced by the laws of the country which has jurisdiction. Licenses are a contract between two parties and are covered by contract law.

             A thing can be ineligible for copyright protection and still be protected by the terms of the license it is provided under. If a project contains uncopyrightable code, that does not mean you cannot be held to the terms of the license. Your use of licensed works is granted under the agreement that you follow the license’s terms. You cannot be held liable for copyright violation for using such code, but using it in a manner the license does not allow makes you liable for breach of the contract that is the license agreement.

    • femtek@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      4 hours ago

      I mean, I don’t use Copilot, but a self-hosted Claude at work for debugging and creating templates. I still run through and test everything. I’m only doing Crossplane, Kyverno, and Kubernetes infra things, though, and I started without it, so I have an understanding. But recently, running someone’s Crossplane composition written in Go, I asked them about an error and he just said to get the AI to fix it, which was worrying since his last day is next week.

    • chilicheeselies@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      3 hours ago

      It’s only slop if you accept slop. What I mean is that it can and does generate perfectly fine code. It also generates code that is okay but needs a human touch. And it also generates verbose garbage.

      It’s only slop if you approve the slop. It’s perfectly fine to let it generate the boilerplate of what you want and then tweak it. If it’s prompted well enough, you get less slop.

      Ultimately I am with Linus on this one. The genie is out of the bottle. Use it responsibly.

  • ell1e@leminal.space
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    2
    ·
    edit-2
    4 hours ago

    Ultimately, the policy legally anchors every single line of AI-generated code

    How would that even be possible? Given the state of things:

    https://dl.acm.org/doi/10.1145/3543507.3583199

    Our results suggest that […] three types of plagiarism widely exist in LMs beyond memorization, […] Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both the size of LMs and their training data increase, […] Plagiarized content can also contain individuals’ personal and sensitive information.

    https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/

    Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books. […] This phenomenon has been called “memorization,” and AI companies have long denied that it happens on a large scale. […]The Stanford study proves that there are such copies in AI models, and it is just the latest of several studies to do so.

    https://www.twobirds.com/en/insights/2025/landmark-ruling-of-the-munich-regional-court-(gema-v-openai)-on-copyright-and-ai-training

    The court confirmed that training large language models will generally fall within the scope of application of the text and data mining barriers, […] the court found that the reproduction of the disputed song lyrics in the models does not constitute text and data mining, as text and data mining aims at the evaluation of information such as abstract syntactic regulations, common terms and semantic relationships, whereas the memorisation of the song lyrics at issue exceeds such an evaluation and is therefore not mere text and data mining

    https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7

    In this work we explored the relationship between discourse quality and memorization for LLMs. We found that the models that consistently output the highest-quality text are also the ones that have the highest memorization rate.

    https://arxiv.org/abs/2601.02671

    recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures […]. We investigate this question […] our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs.

    How does merely tagging the apparently stolen content make it less problematic, given I’m guessing it still won’t have any attribution of the actual source (which for all we know, might often even be GPL incompatible)?

    But I’m not a lawyer, so I guess what do I know. But even from a non-legal angle, what is this road the Linux Foundation seems to embrace of just ignoring the license of projects? Why even have the kernel be GPL then, rather than CC0?

    I don’t get it. And the article calling this “pragmatism” seems absurd to me.

        • anarchiddy@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          2 hours ago

          The Linux kernel is under a copyleft license - it isn’t being copyrighted.

          But the policy being discussed isn’t allowing the use of copyrighted code. They’re simply requiring that any code submitted by AI be tagged as such, so that the human using the agent is ultimately responsible for any infringing code, instead of allowing that code to go undisclosed (and even “certified” by the dev submitting it, even if they didn’t write or review it themselves).

          Submissions are still subject to copyright law - the law just doesn’t function the way you or OP are suggesting.

        • anarchiddy@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 hours ago

          LLMs themselves being products of copyrighted material isn’t the legal question at issue; it’s the downstream use of that product.

          If I use a copyright-infringing work as a part of a new creative work, does that new work infringe copyright by default? Or does the new work need to be judged itself as to the question of infringing a copyrighted work?

          And if it is judged as infringing, who is responsible for the damage done? Can I pass the damages back to the original infringing work? Or should I be held responsible for not performing due diligence?

    • FauxLiving@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      4
      ·
      edit-2
      1 hour ago

      Given the research that you’ve done here I’m going to assume that you’re looking for an answer and not simply taking us on a gish gallop.

      Your premise, and what appears to be the primary source of confusion, is built on the idea that this is “stolen” work, which, from a legal point of view, is untrue. If you want to dig into why that is, look into the precedent-setting case of Authors Guild, Inc. v. Google, Inc. (2015). The TL;DR is that training AI on copyrighted works falls under the Fair Use exemptions in copyright law, i.e. it is legal, not stealing.

      The case you linked from Munich shows that other countries’ legal systems are interpreting AI training in the same way. Training AI isn’t about memorization and plagiarism of existing work; it’s using existing work to learn the underlying patterns.

      That isn’t to say that memorization doesn’t happen, but it is more a point of interest to AI scientists working on understanding how AI represents knowledge internally than a point that lands in a courtroom.

      We all memorize copyrighted data as part of our learning. You, too, can quote Disney movies or Stephen King novels if prompted in the right way. This doesn’t make any work you create automatically become plagiarism; it just means that you have viewed copyrighted work as part of your learning process. In the same way, artists have the capability to create works which violate the copyright of others, and they consumed copyrighted works as part of their learning process. These facts don’t taint all of their work, either morally or legally… only the output that literally violates copyright laws.

      The pragmatism here is recognizing that these tools exist and that people use them. The current legal landscape treats the output of these tools as if it were the output of the user. If an image generator generates a copyrighted image, the rightsholder can sue the person, not the software. If a code generator generates licensed code, the tool user is responsible.

      This is much like how we don’t restrict the usage of Photoshop despite the fact that it can be used to violate copyright. We, instead, put the burden on the person who operates the tool.

      That’s what is happening here. Linus isn’t using his position to promote/enforce/encourage LLM use, nor is he using his position to prevent/restrict/disallow any AI use at all. He is recognizing that this is a tool that exists in the world in 2026 and that his project needs to have procedures that acknowledge this while also ensuring that a human is the one responsible for their submissions.

      This is the definition of pragmatism (def: action or policy dictated by consideration of the immediate practical consequences rather than by theory or dogma).

      e: precedent, not president (I’m blaming the AI/autocorrect on this one)

  • Venia Silente@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    4
    ·
    3 hours ago

    How is this all supposed to work, when AI code cannot be copyrighted and thus those submissions to the Linux kernel cannot be, e.g., GPLv{number}?

  • twinnie@feddit.uk
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    11
    ·
    4 hours ago

    No point getting upset about this, it’s inevitable. So many FOSS programmers work thanklessly for hours, and now there’s some tool to take loads of that work away; of course they’re going to use it. I know loads of people complain about it, but used responsibly it can take care of so much of the mundane work. I used to spend 10% of my time writing code and then 90% debugging it. If I do that 10% and then give it to Claude to go over, I find it just works.

    • uuj8za@piefed.social
      link
      fedilink
      English
      arrow-up
      4
      ·
      2 hours ago

      but used responsibly

      That’s like the most incredibly hard part of all of this. Everything is aligned so that you don’t use it responsibly. And it’s really hard to guard against this.

      Just a few days ago, I was pairing with a coworker and he was using Claude to do a bunch of stuff. He didn’t check any of it. I thought he was gonna check stuff before pushing stuff… And nope! I said, “Wait, shouldn’t we review the changes to make sure they’re correct?” And he said, “Nah, it’s probably fine. I trust it. Plus, even if it’s wrong, we’ll just blame the AI and we can just fix it later.”

      Yes, checking the work would have negated all of the “time saved” and he was being a lazy fuck.

      People who don’t like coding or engineering use this and they are not interested in using this responsibly.

    • geekwithsoul@piefed.social
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      26
      ·
      edit-2
      4 hours ago

      “I used to spend 10% of my time writing code then 90% debugging it”

      Skill issue

      (Edited to add context)

  • XLE@piefed.social
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    4
    ·
    5 hours ago

    This seems like an ill-thought-out decision, especially in a landscape where Linux should be differentiating itself from Windows, not following it.

    The titular “slop” just means “bad AI generated code is banned” but the definition of “bad” is as vague as Google’s “don’t be evil.” Good luck enforcing it, especially in an open-source project where people’s incentives aren’t tied to a paycheck.

    The title is also inaccurate regarding Copilot (the Microsoft-branded AI tool), as a comment there mentions:

    says yes to Copilot

    Where in the article does it say that?? The only mention of Copilot is where it talks about LLM-generated code having unverifiable provenance.

    • Naich@piefed.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      4 hours ago

      Google’s “don’t be evil” was like a warrant canary. It didn’t need to be precise, it just needed to be there.

    • Avid Amoeba@lemmy.ca
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      4 hours ago

      They’re already enforcing it. PRs are reviewed and bad ones are rejected all the time.

    • anarchiddy@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 hours ago

      If you think “bad” is too vague, then that isn’t a new problem.

      Linux has always had to reject “bad” code submissions - what’s new here is that the kernel team isn’t willing to prejudge all AI code as “bad”, even if that would be easier.

  • treadful@lemmy.zip
    link
    fedilink
    English
    arrow-up
    3
    ·
    4 hours ago

    I’m curious how this is going to play out legally for copyright. If you accept AI code, you can’t copyright it, so aren’t you essentially forfeiting the copyleft license?

    • Blaster M@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      3 hours ago

      They aren’t allowing fully AI-generated code. The Copyright Office says AI used in the process does not forfeit the copyright, but AI generating the content entirely (or almost entirely) does. By making the user responsible for the code, it burdens the user to make sure this stuff isn’t abused.

      • treadful@lemmy.zip
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 hours ago

        Where’s that line drawn? Just the fact that it’s an open legal question makes accepting these contributions risky.

  • raspberriesareyummy@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    3
    ·
    2 hours ago

    The rule should be “if you get caught using LLMs or calling them ‘AI’, you’re a dipshit and will never ever be let near the kernel again.”