A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
In your example, say you go to a lawyer and ask legal questions. If the lawyer is not providing legal advice (i.e., not taking on the role of being your lawyer and representing you in that matter), they are required by law to say so at the beginning so that they will not be held liable, because they are a legal professional.
Wikipedia, Google, ChatGPT, etc. are not legal authorities or legal professionals.
There is also no human entity to hold legally responsible if the LLM hallucinates or cites a source that is not factual (satire, for instance).
We also know that the vast majority of people who use chatbots never check the sources the answers come from.
So: when Wikipedia presents information, it is not giving legal advice. That is borne out in case law.
The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t blindly trust Reddit.
No lawyers are going to Reddit for help writing legal briefs. We have seen lawyers using LLMs for that, though.
Wikipedia, Google, ChatGPT, etc. are not legal authorities or legal professionals.
Yes. And neither are LLMs or their derivatives.
The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t blindly trust Reddit.
And yet people do, and we accept that as a necessary consequence of maintaining free speech as a principle.
The exact arguments being accepted in this thread are the same ones that led directly to crackdowns in Hungary, China, and Russia.
If you are okay with limiting and regulating LLMs as a form of speech, I promise it’s your speech which will end up limited, and a very small number of companies will control all speech on the internet. You should stop.