Wasn’t there a guy at Google who claimed they had a conscious AGI, and his proof was that he asked the chatbot if it was conscious and it answered “yes”?
i mean, consciousness is hard to prove. how do we test for awareness? a being can be a complete idiot and still aware, conscious, sentient, all that bullshit.
my standard for LLMs is probably too high because they give me erroneous data a lot, but the shit i ask the search engine comes back wrong in the LLM summary bullshit almost every time (GIGO tho). it takes me back to some of my favorite fiction on the subject. where do we draw the line? i’m just glad i’m not a computer ethicist.
I don’t think we are anywhere near it being “true consciousness,” but I think we are dangerously close to the average person not really caring that it isn’t.
It was a bit more than that. The AI was expressing fear of death and stuff but nothing that wasn’t in the training data.
They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from crashing out into existential dread has been an actual engineering problem they’ve needed to solve.
Plus it was responding to prompts that would steer it toward that part of the training data, because chatbots don’t produce output without being prompted.
Pretty much. It was sad.