AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well but you got to check it’s not chatting shit
The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and instantly proves that you are incompetent.
I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves can certainly verify it, but verification extends more broadly: it’s an easier skill to verify a result than it is to obtain that result. Think of how film critics don’t necessarily need to be filmmakers, or the P vs NP question in computer science.
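The verify-vs-obtain gap can be made concrete with a toy sketch (not from the thread; the function names and numbers are illustrative). Subset-sum is hard to solve, but checking a claimed answer is a trivial linear scan:

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Find a subset summing to target by brute force: O(2^n) work."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_subset_sum(nums, target, candidate):
    """Check a claimed answer: a membership test and a sum, O(n) work."""
    pool = list(nums)
    for x in candidate:
        if x in pool:
            pool.remove(x)  # each claimed element must actually be available
        else:
            return False
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(nums, 9)       # expensive search
print(verify_subset_sum(nums, 9, answer))  # cheap check
```

Same asymmetry as the film-critic point: the checker never has to search the space the solver does.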
But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.
I can’t draw, but I could probably photoshop out some minor issues in an AI-generated image.
At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.
Making text flow naturally, grouping and ordering information, good writing.
You can verify that two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.
If you don’t have the ability, then you would do what you would have 5 years ago: not do it.
Either submit without it, or not submit at all.
Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.
Damn that movie was funny. I need to rewatch it.
It holds up better than any movie from the late 90s that I can think of.
I don’t think AI users would say it does reformatting either (if they’re honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. Getting burned by that once should be enough to teach anyone that lesson.
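One mechanical way to catch that kind of silent edit (a hedged sketch, not anything the commenters describe): a pure reformat should only touch whitespace and line breaks, so comparing the word sequences of the two versions flags any wording change.

```python
def same_words(original, reformatted):
    """True iff the two texts contain the identical sequence of words,
    ignoring whitespace and line breaks (the only things a pure
    reformat should change)."""
    return original.split() == reformatted.split()

original  = "The quick brown fox\njumps over the lazy dog."
rewrapped = "The quick brown fox jumps\nover the lazy dog."
reworded  = "The swift brown fox jumps over the lazy dog."

print(same_words(original, rewrapped))  # only line breaks moved
print(same_words(original, reworded))   # a word was changed
```

It won’t catch punctuation-only edits (you’d compare characters minus whitespace for that), but it’s enough to prove the chatbot rewrote something it was told to leave alone.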