• ell1e@leminal.space · 20 hours ago

    Some of us respectfully disagree that LLMs for programming are “appropriate and legitimate”, at least when that involves generating code and not just locating bugs.

    Local LLMs retain significant issues like the one shown in this clip: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567, unless your model uses 100% properly licensed training data, which no code LLM I have found appears to do.

    • msage@programming.dev · 12 hours ago

      Locating bugs is one of the most important tasks in programming, and if devs can’t do that, nor are willing to learn how, they are fucked.

      There’s no other way of saying it. Can’t wait for the AI bubble to pop.

      • ell1e@leminal.space · 3 hours ago

        LLMs can sometimes point out potential trouble spots, which is also one of the uses that may avoid injecting problematic code (if the LLM is prevented from suggesting a fix). But sadly, that doesn’t seem to be the type of use KDE is currently limiting itself to.