I would think that, since it’s been recognised that these messages are costing a lot of energy (== money) to process, companies would at some point add a simple <if input == “thanks”> type filter to catch a solid portion of them. Why haven’t they?
It won’t be as simple as that, and the engineers who work on these systems can only think in terms of LLMs and text classification, so they’d run your message through a classifier and end the conversation if it returns a “goodbye or thanks” score above 0.8, saving exactly 0 compute power.
I mean, even if we resort to using a neural network for checking “is the conversation finished?”, that hyper-specialised NN would likely be orders of magnitude cheaper to run than your standard LLM, so you could likely save quite a bit of power/money by using it as a filter in front of the actual LLM, no?
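For what it’s worth, the cheap pre-filter the first comment describes could be sketched in a few lines. This is just an illustration, not how any actual provider handles it; the phrase list, function names, and the `call_llm` placeholder are all made up:

```python
# Hypothetical sketch of a string-match pre-filter: catch trivial sign-off
# messages before any model runs. Phrase list is illustrative, not exhaustive.

CLOSERS = {"thanks", "thank you", "ty", "thx", "ok", "bye", "goodbye", "cheers"}

def is_trivial_closer(message: str) -> bool:
    """True if the message is just a sign-off, so the LLM call can be skipped."""
    normalized = message.strip().lower().rstrip("!. ")
    return normalized in CLOSERS

def call_llm(message: str) -> str:
    # Placeholder for the expensive model call.
    return f"<llm response to {message!r}>"

def handle(message: str) -> str:
    if is_trivial_closer(message):
        return "You're welcome!"   # canned reply, zero model compute
    return call_llm(message)       # fall through to the expensive path
```

Of course the hard part, as the replies point out, is that real sign-offs rarely match a fixed list exactly, which is why the discussion drifts toward a small classifier instead.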
Because the only progress we seem to know how to make on computers these days is backwards.
Generative AI is supposed to be destructive. It’s a feature.