• Ludicrous0251@piefed.zip · 3 days ago

    I don’t understand why this is a bad thing, if AI code can find long overlooked bugs that can be verified and repaired by humans, let them.

    This sudden spike is just a function of having a new tool, these reports and repairs will settle as long as the fixes and new features aren’t just vibe coded into place.

    • ZeDoTelhado@lemmy.world · 3 days ago

      The problem is a bit more nuanced, unfortunately. Some open source projects have decided to close bug reports because there are just so many of them, and a good portion are either duplicates or simply not relevant (meaning, in a vacuum you could say there is a bug in place X, but looking at the code more broadly it doesn’t really apply). If the bug reports coming in were mostly good quality and relevant, I would certainly be more positive about this.

      • Ludicrous0251@piefed.zip · 2 days ago

        I think AI bug reports should definitely be managed by internal staff. I agree that slop reports and PRs from (well-intentioned) third-party people with limited knowledge of the code base are more harmful than helpful, but as a tool for an internal team to highlight potential opportunities, it’s not bad.

        I see it as similar to an IDE providing syntax and formatting suggestions.

      • Victor@lemmy.world · 3 days ago

        It’s a balance, of course. If AI finds really critical bugs that need fixing, that’s hard to pass up, but it has to be weighed against the amount of noise. Let’s hope they manage to bring the noise down.

    • mrnarwall@lemmy.world · 3 days ago

      The real concern is the quality of the patches AI is producing. If models are badly trained (i.e., learning from buggy code, which over time is all of it), there is a real possibility they will introduce bugs that did not exist, or do nothing to fix the original bug while adding incoherent code to an existing codebase.

      • FauxLiving@lemmy.world · 2 days ago (edited)

        It has nothing to do with vibe coding. It’s an issue of workload.

        Finding a lot of vulnerabilities creates a lot of work.

        If a company has a dev team capable of responding to and fixing 5 vulnerabilities in a month, and suddenly there are 75, then there is less time to devote to each vulnerability, which can result in things like additional bugs or stability issues.

        Those issues can make people hesitant to apply patches, and a known vulnerability left unpatched is worse than an unknown one. The short-term effect will be secondary issues caused by the high workload, which will lead to an increase in the amount of time that known vulnerabilities exist without being patched.

        From the article:

        Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And this trickles down to two things: more patches, and more work for admins.

        Zero Day Initiative’s chief vuln finder Dustin Childs agrees with this assessment.

        “At first, yes, this means more patches and thus more work for admins,” he told The Register. “The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down.”

        What will make this whole AI bug hunting season “really painful,” he continued, is if the patches don’t work or - worse yet - break things.

        “Many customers don’t trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on,” Childs added. “This will be true even if AI only finds the bugs and doesn’t make the patches.”