The difficulty with general models designed to sound human and trained on a wide range of human input is that they'll also imitate human error. In non-critical contexts, that's fine, perhaps even desirable. But when it comes to finding facts, you'd be better served by expert models trained on curated, subject-specific input.
I can see an argument for a general model acting as a "first level" classifier that identifies the topic before passing the request on to a specialised model, but that is more expensive and more energy-consuming: you have to spend the time and effort to prepare and train multiple expert models, on top of training the initial classifier.
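For what it's worth, the routing idea above can be sketched in a few lines. Everything here is a hypothetical stand-in: the "classifier" is crude keyword matching and the "experts" are canned handlers, just to show the two-stage shape, not any real model pipeline.

```python
# Sketch of two-stage routing: a cheap first-level classifier picks a topic,
# then the request is forwarded to a topic-specific "expert".
from typing import Callable, Dict

def classify_topic(query: str) -> str:
    """First stage: a crude keyword classifier standing in for a general model."""
    keywords = {
        "medicine": ["symptom", "dose", "diagnosis"],
        "law": ["contract", "liability", "statute"],
    }
    q = query.lower()
    for topic, words in keywords.items():
        if any(w in q for w in words):
            return topic
    return "general"

# Second stage: one handler per topic, standing in for the expert models.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "medicine": lambda q: f"[medical expert] answering: {q}",
    "law": lambda q: f"[legal expert] answering: {q}",
    "general": lambda q: f"[general model] answering: {q}",
}

def route(query: str) -> str:
    """Classify, then dispatch to the matching expert."""
    return EXPERTS[classify_topic(query)](query)
```

The cost argument is visible even in this toy: each entry in `EXPERTS` represents a separately curated and trained model, and the classifier is yet another component to build and maintain.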
Even then, there's no guarantee that the expert models will infer context and nuance the way an actual human expert would. They might be more useful than a set of static articles for tailoring responses to specific questions, but the layperson has no way of telling whether their output is accurate.
All in all, I think we’d be better served investing in teaching critical reading than spending piles of money on planet-boilers with hard-to-measure utility.
(A shame, really, since I like the concept. Alas, reality gets in the way of a good time.)