I’ll have to take your word for it! “figuring out” sounds like a higher-order process than a large language model is capable of to me, but if what they do is as good, then great.
I think I’m just skeptical because of how horrendously bad LLM output is in my field of expertise (despite looking fine to a lay person), so I immediately analogize that to other areas. The output of law and coding is really about language, and the process of creating that output on the part of a lawyer or coder is really about language, so I can see how one might think LLMs would be able to recreate what lawyers and coders do. But boy, it doesn’t strike me as remotely plausible that LLMs will ever get there, at least for law. I have no doubt some yet-unimagined technology could get us there, but “next word prediction” just isn’t gonna be it.
The more specialized and less public the knowledge is that’s needed to train an LLM, the worse its output will be. In addition, explainability is an absolute necessity where safety is a concern, but LLMs are not good at explaining how they got their results (because the results are derived from a statistical process, not logical steps originating from first principles). I suspect that explainability and verifiability are also essential in law.
They are, for sure. I mean, in some sense, the explainability is why it’s correct…you might need to explain why it’s correct to a judge one day!
You don’t have to take my word for it. You can get a subscription to Claude for $20 and install the CLI tool. Ask it to start building something basic. Give it something small first and then expand what you’re asking for in the next request.
https://code.claude.com/docs/en/setup
Claude can also help explain how to set it up if you’re unfamiliar with things like the terminal or git.
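For what it’s worth, the setup in those docs boils down to a couple of commands (a minimal sketch assuming the npm install route; you’ll need Node.js installed, and the first run will prompt you to log in with your Anthropic account):

```shell
# Install the Claude Code CLI globally via npm
npm install -g @anthropic-ai/claude-code

# Then run it from inside the project folder you want it to work on
cd my-project
claude
```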
I have to take your word for it because I don’t know what good code looks like lol. Again, to compare to what I’m familiar with, you can also ask an LLM to draft you a purchase agreement for shares of a private company, and if you’re not a lawyer it’ll look good…and it’ll be able to sound like it’s explaining to you why it’s good…but it will not be good haha
You use software though? You don’t even need to look at the code, lol. I’m downloading open source projects and modifying their functionality for my personal use with Claude and I don’t even know how they work. I don’t even open the code in an editor; I don’t need to know what it looks like.
I was suspicious the whole time, reading your replies. This finally seals it. Troll confirmed. Well played.
Ah, well in that case I won’t take your word for it that it’s good. I’ll take your word for it that it’s working for you for now… Again, in a legal context, that’s like “I got ChatGPT to write this contract, and it’s working great,” but of course…it won’t be when things go wrong haha!