Like…it’ll obviously be somewhat better, because there’s a lot of testing you can actually do.
If your code can cost someone their life savings or get them maimed or killed, there’s even more testing to do when using an LLM, since there’s no demonstrable basis for why the code it recommends is the way it is.
I’ve been coding for a very long time. Now I’m mainly in software tech management, but I still code (proofs of concept, new visualizations, that sort of thing). In the field I’m in, we’ve put in a lot of effort to assess the value of large language models (LLMs) to assist in our coding. We’re in a highly technical field. Because our use cases are not common, and some of our requirements are extreme, there are no good code examples to train an LLM on. Consequently, we have found that the LLM’s recommendations in those cases are worthless time-wasting crap.
If you’re doing something in a well-known language, in a well-known framework, with non-safety-critical requirements and with volumes, response times and reliability within moderate bounds, the training set will be much bigger and you’ll probably have better luck with LLMs. But that means you could also just do a web search or look on something like StackOverflow.
We do have active machine learning (ML) efforts underway, and some of those look very promising for certain tricky problems within our domain. But ML is a whole different kettle of fish than LLMs.
Your observations on Go concern the size of the game’s state space: 19 times 19 board points, times a few more dimensions and constraints that define the allowable state combinations and the transition rules from one position to the next. The 4th or 5th power of something (to be conservative) gets big really damn fast. Some problems are intrinsically intractable, and AI won’t help with those, though quantum computing might in at least some cases.
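To give a feel for how fast that blows up, here’s a quick back-of-the-envelope sketch (my own illustration, not a rigorous count): each of the 361 points on a 19×19 board can be empty, black, or white, so 3^361 is a crude upper bound on raw board configurations (the count of legal positions is a fraction of that, but of a comparable order of magnitude).

```python
# Crude upper bound on raw Go board configurations:
# each of the 19x19 = 361 points is empty, black, or white.
raw_states = 3 ** (19 * 19)

# How many decimal digits is that? (Python handles big ints natively.)
print(len(str(raw_states)))  # 173 digits, i.e. roughly 10**172
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, so exhaustive search is hopeless; that’s why Go programs had to learn evaluation functions rather than enumerate positions.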