A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • zalgotext@sh.itjust.works · 6 hours ago

    No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate; they’re just really good at guessing how words should be strung together based on complicated statistics.
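
    To make the “statistical autocomplete” point concrete, here’s a minimal toy sketch in Python: a bigram model that counts which word follows which in a tiny made-up corpus, then generates text by sampling the statistically likely next word. Real LLMs estimate the same kind of next-token probabilities with neural networks over vastly more context and data, but the objective is the same; the corpus and names here are purely illustrative.

    import random
    from collections import Counter, defaultdict

    # Tiny toy corpus standing in for the web-scale text a real LLM trains on.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed `word`."""
        candidates = follows.get(word)
        if not candidates:
            return None  # dead end: nothing ever followed this word
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts)[0]

    # Generate text by repeatedly guessing a plausible next word.
    word = "the"
    output = [word]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"

    The output can look grammatical without the program “understanding” anything, which is exactly the commenter’s point, just scaled down by many orders of magnitude.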