PDF.
Today’s leading AI models engage in sophisticated behaviour when placed in strategic competition. They spontaneously attempt deception, signaling intentions they do not plan to follow; they demonstrate rich theory of mind, reasoning about adversaries’ beliefs and anticipating their actions; and they exhibit credible metacognitive self-awareness, assessing their own strategic abilities before deciding how to act.
Here we present findings from a crisis simulation in which three frontier large language models (GPT-5.2, Claude Sonnet 4, Gemini 3 Flash) play opposing leaders in a nuclear crisis.


JESUS FUCKING CHRIST, CHATBOTS DON’T KNOW ANYTHING. STOP ASKING THEM QUESTIONS AND THINKING THEIR ANSWERS ARE ANYTHING MORE THAN WORD ASSOCIATION BASED ON THINGS PEOPLE HAVE WRITTEN IN THE PAST, for fuck’s sake.
It’s worse. The LLMs did not use nukes 95% of the time; they performed mutual nuclear signaling 95% of the time. Like, “Hey, we’ve got nukes, you know! We might consider placing them within range.” And the other side said, “Yeah? Then we’ll do that too; maybe we’ll even put them on a submarine, who knows.”