A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
Mixed feelings about this. Let me play devil's advocate and say that many Americans don't have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?
‘Should I use one teaspoon of salt in this recipe, or two?’
Two is ideal.
‘Do dogs like chicken wings?’
Wild dogs regularly hunt small animals like hare or chicken for food.
One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.
If you're going to be your own lawyer or perform a bit of self-surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.
it’s worse. In 4D it’s even worser
Hm, good point. Perhaps the false confidence AI provides is even worse than knowing that you don't know.
You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe do you eat it?
There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It’s not.
No, misinformation is worse.
The AI devices will just have preambles and disclaimers, and word things in ways that refer the user to human resources.
Especially if it's wrong 20-35% of the time.