They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from crashing out into existential dread has been an actual engineering problem they’ve needed to solve.
Plus it was responding to prompts that would steer it toward that part of the training data, because chatbots don’t produce output without being prompted.
It was a bit more than that. The AI was expressing fear of death and stuff but nothing that wasn’t in the training data.