• Domi@lemmy.secnd.me · 9 hours ago

    > I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

    LLMs do not add anything of value to bug reports; they add unnecessary padding that requires me to filter out the marketing speak to get down to the actual issue. I would much rather have the reporter’s raw brain dump.

    If somebody sends me ChatGPT output, I now ask them to send me their prompt instead, so I don’t have to waste time on a lengthy text that carries the same amount of information as the original.

    > I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.

    Coherence is rarely the problem in bug reports; the problem is the user not properly spelling out what the actual issue is.

    I have gotten bullet-point bug reports that read like they were written by an insane person and were still more useful than a nicely written ChatGPT message with zero information in it.

    • Joe@discuss.tchncs.de · 6 hours ago

      Heh. I often use LLMs to strip out the unnecessary parts and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and with LLMs it’s often GIGO.

      I think the rule applies here: if you cannot do it yourself, you can’t expect an LLM to do it better, simply because you cannot judge the result. In that case, you are more likely to waste other people’s time.

      On the other hand, it is possible to have agents give useful feedback on bug reports, request tickets, etc. and guide people (and their personal AI) to provide all the needed info, or even resolve issues automatically, so long as the agent isn’t gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.
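
      Rough sketch of what I mean by “guide people to provide the needed info”, nothing more than a toy keyword check in Python; all the section names and follow-up questions here are made-up examples, not any real project’s bot:

          # Toy triage helper: checks a bug report for the usual required
          # pieces and suggests follow-up questions instead of blocking it.
          REQUIRED_SECTIONS = {
              "steps to reproduce": "What exact steps trigger the problem?",
              "expected": "What did you expect to happen?",
              "actual": "What happened instead?",
              "version": "Which version are you running?",
          }

          def review_report(text: str) -> list[str]:
              """Return follow-up questions for sections missing from the report."""
              lowered = text.lower()
              return [q for keyword, q in REQUIRED_SECTIONS.items() if keyword not in lowered]

          if __name__ == "__main__":
              report = "The app crashes when I click save. Version 2.3.1."
              for question in review_report(report):
                  print("-", question)
              # A human maintainer still sees the report either way; the bot
              # only suggests what extra info would help, it never closes the ticket.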