One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter
That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
- You have a conversation with a model.
- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.
- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
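The steps above can be sketched in a few lines. This is a deliberately naive illustration, not any provider's actual implementation: `summarize` just truncates (a stand-in for the lossy LLM summarization step), and retrieval counts shared words instead of using embeddings. All names here are invented.

```python
def summarize(conversation: list[str]) -> str:
    """Stand-in for an LLM summarization call; here we just truncate,
    which is exactly where details and context get lost."""
    return " ".join(conversation)[:100]

class MemoryStore:
    """Stores lossy summaries of past chats; the model itself learns nothing."""

    def __init__(self) -> None:
        self.summaries: list[str] = []

    def save(self, conversation: list[str]) -> None:
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, k: int = 2) -> list[str]:
        # Naive relevance score: count words shared between the new prompt
        # and each stored summary (real systems use vector embeddings).
        words = set(prompt.lower().split())
        scored = sorted(
            self.summaries,
            key=lambda s: len(words & set(s.lower().split())),
            reverse=True,
        )
        return scored[:k]

store = MemoryStore()
store.save(["user: my dog Rex keeps chewing shoes", "bot: try a chew toy"])
store.save(["user: best pasta recipe?", "bot: cacio e pepe is simple"])

# New session: the model "remembers" only because snippets get pasted
# back into the prompt, not because anything was learned.
prompt = "any more tips for my dog Rex?"
context = store.retrieve(prompt, k=1)
augmented = "Relevant past conversations:\n" + "\n".join(context) + "\n\nUser: " + prompt
```

Every fresh session starts from zero; the "memory" is just text stitched into the prompt before the model ever sees it.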
I’ve experimented with chatbots to see their capabilities for developing small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep to yourself, I want short, to-the-point replies" because the complimenting is so “who’s a good boy!!!” annoying.
People don’t talk like these chatbots do; their training data that was stolen from humanity definitely doesn’t contain that. That is “behavior” included by the providers to try and make sure that people get as hooked as possible.
Gotta make back those billions of investments on a dead end technology somehow
See, I never understood this. Mine could never even follow simple instructions lol
Like I say “Give me a list of types of X, but exclude Y”
"Understood!
#1 - Y
(I know you said to exclude this one but it’s a popular option among-)"
lmfaoooo