Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.
I think the rule here is: if you cannot do it yourself, you can't expect an LLM to do it better, simply because you cannot judge the result. In that case, you're more likely to waste other people's time.
On the other side, agents can give useful feedback on bug reports, request tickets, etc., and guide people (and their personal AI) to provide all the needed info, even resolving issues automatically. So long as the agent isn't gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that's OK and shouldn't require jumping through hoops.