Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • FaceDeer@fedia.io · 2 hours ago

    Alright, so swap in some different words if you don’t like those. The basic point is the same: there’s a bunch of models from different sources that can solve this, so it’s not just some weird one-off fluke.

    Your own argument is a bit all over the place too, by the way. You said this puzzle “wasn’t tricky in the slightest,” and yet that “it requires understanding what is being asked.” So only 71.5% of humans can solve this “not tricky in the slightest” problem, but some AI models are able to “understand what is being asked”? Is “understanding” things not “tricky”?

    • CileTheSane@lemmy.ca · 1 hour ago

      You said this puzzle “wasn’t tricky in the slightest” and yet that “it requires understanding what is being asked.”

      Correct. Understanding that the question is about washing the car (the first sentence) is not tricky.

      So only 71.5% of humans can accomplish this “not tricky in the slightest” problem

      Roughly 30% of people are fucking idiots. This keeps being proven. My argument is in no way changed by that fact.

      Is “understanding” things not “tricky”?

      No. Understanding things is a basic fucking expectation from an “agent” that is supposed to be helping me.