• AA5B@lemmy.world · 8 hours ago

    Code reviews seem like a good opportunity for an LLM; it seems like something they’d be good at. I’ve actually spent the last half hour googling for tools.

    I’ve spent literally a month in reviews for this junior guy on one stupid feature, and so much of it has been so basic. It’s a combination of him committing AI slop without understanding or vetting it, and being too junior to consider maintainability or usability. It would have saved so much of my time if an AI could have done some of those review cycles without me.

    • homura1650@lemmy.world · 6 hours ago

      This has been solved for over a decade: include a linter and static analysis stage in the build pipeline. No code review until the check goes green (or the developer has a specific argument for why a particular finding is a false positive).
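
      Something like this as a required check, e.g. in GitHub Actions (a rough sketch; the linter and type checker here are just placeholders for whatever the project actually uses):

      ```yaml
      # .github/workflows/lint.yml -- static checks that must pass before review
      name: lint
      on: [pull_request]

      jobs:
        static-checks:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - uses: actions/setup-python@v5
              with:
                python-version: "3.12"
            - run: pip install ruff mypy
            # Linter: any finding fails the job, so the PR check stays red
            - run: ruff check .
            # Static analysis / type checking
            - run: mypy src/
      ```

      Mark the job as a required status check in branch protection and nobody has to look at the PR until it goes green.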

      • AA5B@lemmy.world · 3 hours ago

        Not really.

        A linter in the build pipeline is generally not useful, because most people won’t give its results time or priority. You usually can’t fail the build for lint issues, so all it does is fill logs. I usually configure a linter and prettifier in a pre-commit hook instead, to shift that left: people are more willing to fix their code in small pieces as they try to commit.
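
        For example, with the pre-commit framework (a rough sketch assuming a Python codebase; swap the hooks for whatever linter and formatter the project actually uses):

        ```yaml
        # .pre-commit-config.yaml -- runs automatically on every `git commit`
        repos:
          - repo: https://github.com/psf/black                    # formatter / prettifier
            rev: 24.3.0
            hooks:
              - id: black
          - repo: https://github.com/astral-sh/ruff-pre-commit    # linter
            rev: v0.4.4
            hooks:
              - id: ruff
        ```

        Run `pre-commit install` once and the checks fire at commit time, so the feedback arrives in small pieces instead of in a build log nobody reads.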

        But this is also why SonarQube is a key tool. Its scanners are lint-like, and you can even import output from other linters, but the important part is that it prioritizes and scores the findings and enforces a quality gate on them. I usually can’t fail a build for lint errors, but SonarQube can when there are too many, when they’re high priority, or when they’re security related.
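
        Wiring that gate into the pipeline can look roughly like this (a sketch; the project key, host URL, and token are placeholders, and exact parameter names vary a bit between scanner versions):

        ```yaml
        # extra step after the normal build/test stages
        - name: SonarQube scan
          run: >
            sonar-scanner
            -Dsonar.projectKey=my-project
            -Dsonar.host.url=$SONAR_HOST_URL
            -Dsonar.token=$SONAR_TOKEN
            -Dsonar.qualitygate.wait=true
        ```

        With the wait flag set, the scanner step itself fails when the quality gate fails, so the build goes red instead of just logging findings.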

        But this is not the same as a code review. If an AI can use the code base as context, it should be able to check new code for consistency and maintainability against the rest of the code. For example, I had a junior developer blindly follow the AI into using a different mocking framework than the rest of the code, for no reason other than it was probably more common in the training data. A code review AI should be able to notice that.

        Maybe this is too advanced for current AI, but the same guy blindly followed the AI into adding classes that already existed. They were just different enough that SonarQube didn’t flag them as duplicate code, but an AI ought to be able to summarize their functionality and realize they were the same.

        Or I wonder if AI could help with code organization? Junior guys spew classes and methods everywhere without any effort at organizing like with like so that someone can maintain it all. Or how about style? I hope to never revisit the style wars, but when you’re modifying code you really need to follow the style and naming of what’s already there. Maybe an AI code review can pick up on that.

        • Em Adespoton@lemmy.ca · 21 minutes ago

          Yeah, I’ve added AI to my review process. Sure, things take a bit longer, but the end result has been reviewed by me AND compared against a large body of code in the training data.

          It regularly catches stuff I miss on a first review, or ignore because of context that shouldn’t matter (e.g., how reliable the person who wrote the code is).