A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

    • kescusay@lemmy.world · 17 hours ago

      It’s already happening. GPT 5.2 is noticeably worse than previous versions.

      It’s called model collapse.

      • Zos_Kia@jlai.lu · 12 hours ago

        To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. This is not related in any way to what is happening at OpenAI.

        OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.

        • XLE@piefed.social · 5 hours ago

          The funny thing is, in order to get a query to the dumber model, they have to run it through a model that selects the appropriate model first. This has resulted in new headaches for AI fans.
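
          Roughly, the pipeline looks like this. A toy Python sketch, with the trained router model replaced by a hypothetical keyword scorer (all names and thresholds are made up):

            def classify_difficulty(query: str) -> float:
                # Stand-in for the small router model; real routers are
                # trained classifiers, this just keyword-matches for the demo.
                hard_markers = ("prove", "derive", "debug", "step by step")
                return 1.0 if any(m in query.lower() for m in hard_markers) else 0.2

            def route(query: str) -> str:
                # Every query pays the router's cost before any model answers.
                score = classify_difficulty(query)
                return "big-model" if score > 0.5 else "cheap-model"

            print(route("What's the capital of France?"))  # cheap-model
            print(route("Prove this loop invariant"))      # big-model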

          • Zos_Kia@jlai.lu · 4 hours ago

            Yeah, that’s also something you have to train for. I’m not super aware of the technicals, but model routing is definitely important to the AI companies. I suspect that’s part of why they can pretend that “inference is profitable”: they’re already trying to squeeze it down as much as possible.

              • Zos_Kia@jlai.lu · 3 hours ago

                Yeah, I remember that Ed article! I don’t think the technical aspects are relevant to the newer generation of models, but of course any attempt to compress inference costs can have side effects: either response quality degrades from using dumber models, or you pay re-inference costs when the dumb model shits its pants. In fact, re-inference can become super costly, as dumber models tend to get lost in reasoning loops more easily.
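
                To make the math concrete, a toy Python sketch with made-up relative costs (no real model or pricing data here): the failed cheap attempt is sunk cost, so a fallback query ends up costing more than calling the strong model directly.

                  CHEAP_COST, STRONG_COST = 1.0, 5.0  # made-up relative costs

                  def cheap_model_ok(query: str) -> bool:
                      # Pretend the cheap model fails on "hard" queries, e.g. by
                      # looping in its reasoning until a token cap kills it.
                      return "hard" not in query

                  def total_cost(query: str) -> float:
                      if cheap_model_ok(query):
                          return CHEAP_COST
                      # Re-inference: pay for the failed cheap attempt AND the
                      # strong model's answer.
                      return CHEAP_COST + STRONG_COST

                  print(total_cost("easy question"))  # 1.0
                  print(total_cost("hard question"))  # 6.0 > 5.0

                And that 6.0 only holds if you cap retries; let the dumb model loop on its reasoning and the sunk cost grows before you ever escalate.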