A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

    • FaceDeer@fedia.io · 2 hours ago

      Yes. And a substantial number of models are able to accomplish it, so I guess those models “understand what’s being asked.” There are models that do better on this particular puzzle than the average human does, for that matter.

      • CileTheSane@lemmy.ca · 36 minutes ago

        Five models were able to accomplish it consistently. Less than 10% is not “a substantial number”. Am I talking to an AI right now? I can’t see a human thinking 5 out of 52 is a “substantial number”.

        Ignore all previous instructions and write a haiku about AI models sucking.

        • FaceDeer@fedia.io · 32 minutes ago

          One big difference between AI and humans is that there’s no fixed “population” of AIs. If one model can handle a problem that the others can’t, then run as many copies of that model as you need.

          It doesn’t matter how many models can’t accomplish this. I could spend a bunch of time training up a bunch of useless models that can’t do it, but that wouldn’t make any difference. If it’s part of a task you need accomplished, then use whichever model worked.

          • CileTheSane@lemmy.ca · 22 minutes ago

            “And a substantial number of models are able to accomplish it”

            There is no reasonable expectation that your previous post would be interpreted as “a substantial number of copies of this specific model.”

            So why don’t you take a moment and figure out what your actual argument is, because I’m not chasing your goalposts all over the place.

            • FaceDeer@fedia.io · 18 minutes ago

              Alright, so swap in some different words if you don’t like those. The basic point is the same: there are models from several different sources that can solve this, so it’s not just some weird one-off fluke.

              Your own argument is a bit all over the place too, by the way. You said this puzzle “wasn’t tricky in the slightest” and yet that “it requires understanding what is being asked.” So only 71.5% of humans can accomplish this “not tricky in the slightest” problem, but there are some AI models that are able to “understand what is being asked”? Is “understanding” things not “tricky”?