The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that are difficult for AIs such as LLMs, “reasoning” models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
[Chart: success rate vs. cost, with logarithmic cost on the horizontal axis. Note that the vertical scale spans only 0% to 3%. If human scores were included, they would sit at 100%, at a cost of approximately $250.]

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum “easy for humans” threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
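
A minimal sketch of that inclusion rule (the function name and flags are mine; the report states the rule in prose):

```python
# Hedged sketch of the "easy for humans" threshold described above: each
# environment was attempted by 10 people, and it is kept only if at least
# two of them completed every level on their first exposure.
def passes_human_threshold(fully_solved, min_solvers=2):
    """fully_solved[i] is True if tester i completed all levels first try."""
    return sum(fully_solved) >= min_solvers

print(passes_human_threshold([True, False, True] + [False] * 7))  # True
print(passes_human_threshold([True] + [False] * 9))               # False
```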

    • brianpeiris@lemmy.caOP · 18 seconds ago

      You can really only judge the fairness of the score if you understand the scoring criteria. It is a relative score where the baseline is 100% for humans; i.e., a task was only included in the challenge if at least two people in the panel of humans were able to solve it completely, and their action count is the measure of efficiency. This is the baseline used as the point of comparison.

      From the Technical Report:

      The procedure can be summarized as follows:
      • “Score the AI test taker by its per-level action efficiency” - For each level that the test taker completes, count the number of actions that it took.
      • “As compared to human baseline” - For each level that is counted, compare the AI agent’s action count to a human baseline, which we define as the second-best human action count. Ex: If the second-best human completed a level in only 10 actions, but the AI agent took 100 to complete it, then the AI agent scores (10/100)^2 for that level, which gets reported as 1%. Note that level scoring is calculated using the square of efficiency.
      • “Normalized per environment” - Each level is scored in isolation and receives a score between 0% (very inefficient) and 100% (matches or surpasses human-level efficiency). The environment score is a weighted average of the level scores across all levels of that environment.
      • “Across all environments” - The total score will be the sum of individual environment scores divided by the total number of environments. This will be a score between 0% and 100%.
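
      A minimal sketch of that procedure in Python (the function names and the uniform per-level weighting are my assumptions; the report only says levels are combined as a weighted average):

      ```python
      # Sketch of the scoring described above. ai_actions is the agent's action
      # count for a completed level; None marks a level it never completed.
      def level_score(ai_actions, human_baseline):
          """Squared action efficiency vs. the second-best human, capped at 100%."""
          if ai_actions is None:
              return 0.0
          return min(1.0, human_baseline / ai_actions) ** 2

      def environment_score(levels):
          """Assumed uniform average over (ai_actions, human_baseline) pairs."""
          return sum(level_score(a, h) for a, h in levels) / len(levels)

      def total_score(environments):
          """Mean environment score across the benchmark, between 0% and 100%."""
          return sum(environment_score(e) for e in environments) / len(environments)

      # The example from the comment: the second-best human needed 10 actions,
      # the agent needed 100, so the level scores (10/100)^2 = 1%.
      print(f"{level_score(100, 10):.0%}")  # 1%
      ```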

      So the humans “scored 100%” because that is the baseline by definition, and the AIs are evaluated on how close they get to human correctness and efficiency. A score of 0.26% therefore means the agent reached 0.0026 of the human baseline’s combined correctness and efficiency.

  • SuspciousCarrot78@lemmy.world · 3 hours ago

    “…specifically crafted to demonstrate tasks that humans complete easily”

    Motherfucker, I can’t work out Minesweeper. I got zero fucking chance with your mystery box bloop game.

  • Sam_Bass@lemmy.world · 4 hours ago

    AI code is prewritten, and the AI is unable to edit it. Humans edit their “code” every second.

  • UnrepentantAlgebra@lemmy.world · 7 hours ago

    If human scores were included, they would be at 100%, at the cost of approximately $250

    Wait, why did it cost real humans $250 to pass the test?

    • KairuByte@lemmy.dbzer0.com · 6 hours ago

      I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.

    • brianpeiris@lemmy.caOP · 1 hour ago

      This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of $128, plus $5 for each solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would average about $250 per person. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case it’s one or two orders of magnitude less than the LLMs.

      Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved

      https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
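
      The arithmetic behind that rough estimate, for what it’s worth (the $128 average is the midpoint of the stated fee range, and this assumes all 25 public tasks are solved):

      ```python
      fixed_fee = (115 + 140) / 2   # midpoint of the $115-$140 participation fee
      bonus_per_solve = 5           # per-environment performance incentive
      public_tasks = 25             # environments in the public set
      print(fixed_fee + bonus_per_solve * public_tasks)  # 252.5 -> roughly $250
      ```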

    • aesopjah@sh.itjust.works · 5 hours ago

      It’s also an odd metric, since only 20-60% of the humans completed it. Very “60% of the time, it works every time” energy.

      Ideally they’d run the bots multiple times through (with no context or training of previous run), but I guess that is cost prohibitive?

      • monotremata@lemmy.ca · 3 hours ago

        Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty dark.

      • Aceticon@lemmy.dbzer0.com · 5 hours ago

        If there had been a “Buy 10, Get 1 free” they could’ve used 11 humans instead of 10 for the same $250.

  • Great Blue Heron@lemmy.ca · 9 hours ago

    It’s fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

    • partofthevoice@lemmy.zip · 4 hours ago

      Funnier yet will be if they continue to just train the model on that particular kind of test, invalidating its results in the process.

      • brianpeiris@lemmy.caOP · 1 hour ago

        It’s true that frontier models got better at the previous challenges, but it’s worth noting that they’re still not quite at human level even with those simpler tasks.

        Also, each generation of the challenge tries to close loopholes that newer models would exploit, like brute-forcing the training with tons of synthesized tasks and solutions, over-fitting to these particular kinds of tasks, and exploiting similarities between the tasks in the challenge.

        A common strategy in past challenges was to generate thousands of similar tasks, and you can imagine the big AI companies were able to do that at massive scale for their frontier models.

        • brianpeiris@lemmy.caOP · 1 hour ago

          The goal of the ARC organization is to continually measure progress toward AGI, not to come up with some predictive threshold for when AGI is achieved.

          As long as they can continue to measure a gap between “easy for humans” and “hard for AI”, they will continue releasing new iterations of this ARC-AGI challenge series. Currently they do that about once a year.

          More detail about the mission here: https://arcprize.org/arc-agi

  • RustyShackleford@piefed.social · 14 hours ago

    As a psychiatrist, I have a theory about what’s missing in AI. First, it lacks childhood dependency and attachments. Second, it struggles to overcome repeated pain and suffering. Third, it lacks regular eating and restroom breaks. Fourth, it struggles to accept loss in everyday situations. Finally, it lacks the concept of our inevitable death. Without these nagging memories and concepts, machines will simply revert to the simpler purposes we use them for these days, such as stealing cryptocurrency. After all, we live in a world run by capitalism, so it’s only logical. ¯\_(ツ)_/¯

    • sp3ctr4l@lemmy.dbzer0.com · 10 minutes ago

      Here is a way of describing what I see as ‘the problem’:

      An LLM cannot forget things in its base training data set.

      Its permanent memory… is totally permanent.

      And this memory has a bunch of wrong ideas, a bunch of nonsensical associations, a bunch of false facts, a bunch of meaningless gibberish.

      It has no way of evaluating its own knowledge set for consistency, coherence, and stability.

      It literally cannot learn and grow, because it cannot realize why it made mistakes; it cannot permanently discard or amend incoherent concepts or faulty ways of reasoning (associating) about things.

      Seriously, ask an LLM a trick question, then tell it it was wrong, explain the correct answer, then ask it to determine why it was wrong.

      Then give it another similar category of trick question, but that is specifically different, repeat.

      The closer you try to push it toward reworking a flawed fundamental axiom it holds, the closer it gets to responding in totally paradoxical, illogical gibberish, or getting stuck in some kind of repetitive loop.

      … Learning is as much building new ideas and experiences, as it is reevaluating your old ideas and experiences, and discarding concepts that are wrong or insufficient.

      Biological brains have neuroplasticity.

      So far, silicon ones do not.
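
      A toy illustration of that point, assuming a PyTorch-style setup (a stand-in linear layer, not a real LLM): nothing in an inference pass touches the weights, so no amount of run-time “experience” changes what the model knows.

      ```python
      import torch
      import torch.nn as nn

      # Stand-in for a trained model; at inference its parameters are frozen.
      model = nn.Linear(8, 8)
      model.eval()

      before = model.weight.clone()
      with torch.no_grad():                     # no gradients, no weight updates
          for _ in range(100):                  # a hundred "conversations"
              _ = model(torch.randn(1, 8))

      assert torch.equal(model.weight, before)  # "permanent memory" is unchanged
      ```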

    • partofthevoice@lemmy.zip · 4 hours ago

      it lacks childhood dependency and attachments.

      Isn’t general intelligence, or more broadly “consciousness,” a prerequisite to that? How would you make an unconscious machine more conscious merely by making mock scenarios that conscious beings necessarily experience?

      it struggles to overcome repeated pain and suffering

      That’s getting into phenomenology — why is pain an experience of suffering at all? How would you give it pain and suffering without having already made it AGI? We’re still missing the <current-form> -> AGI step.

      it lacks regular eating and restroom breaks

      The necessity of which is emergent from our culture and biology, as conscious social beings. We’re still missing a vital step.

      it struggles to accept loss in everyday situations

      What is “loss” and “everyday situations” if not just a way we choose to see the world, again as conscious beings.

      it lacks the concept of our inevitable death

      How do you give it a “concept” at all?

      these nagging memories and concepts

      The AI in its current form has the “memory” in some form, but perhaps not the “nagging.” What should do the “nagging” and what should be the target of the “nagging?” How do you conceptually separate the “memory” and the “nagging” from the “being” that you’re trying to create? Is it all part of the same being, or does it initialize the being?

      We’re a long way away from AGI, IMO. The exciting thing to me, though, is I don’t think it’s possible to develop AGI without first understanding what makes N(atural)GI. Depending how far away AGI is, we could be on the cusp of some deeply psychologically revealing shit.

    • CosmicTurtle0 [he/him]@lemmy.dbzer0.com · 12 hours ago

      As a technologist, I have to remind everyone that AI is not intelligence. It’s a word prediction/statistical machine. It’s guessing at a surprisingly good rate what words follow the words before it.

      It’s math. All the way down.

      We as humans have simply taken these words and have said that it is “intelligence”.

      • RustyShackleford@piefed.social · 1 hour ago

        I was arguing against it being an intelligence, not for it, because it lacks the suffering and past experiences that define intelligence. Without pain and suffering, what are we?

      • Iconoclast@feddit.uk · 5 hours ago

        A few of the countless dictionary definitions of intelligence:

        • The ability to acquire, understand, and use knowledge.
        • The ability to learn or understand or to deal with new or trying situations
        • The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)
        • The act of understanding
        • The ability to learn, understand, and make judgments or have opinions that are based on reason
        • It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

        There isn’t even consensus on what intelligence actually means, yet here you are declaring “AI is not intelligence,” whatever that even means.

        Artificial intelligence is a term in computer science describing a system that’s able to perform tasks that would normally require human intelligence. An Atari chess engine is an intelligent system. It’s narrowly intelligent, as opposed to humans, who are generally intelligent, but it’s intelligent nevertheless.

        • partofthevoice@lemmy.zip · 4 hours ago

          You’re more precisely right, but also the aforementioned person is not wrong. Intelligence is a broad term as we’re discovering. Truth is, we don’t have the language to effectively communicate about AGI in the ways we’d like to. We don’t know if consciousness is a prerequisite to truly generalizable intelligence, we don’t even know what consciousness is, we don’t know what dimensions truly matter here. Is intelligence a dimension of consciousness, meaning you can have some intelligence without being conscious? What’s the limit, why? … We need some discovery around the taxonomy/topology of consciousness.

      • As a therapist, I can tell you the only thing holding LLMs back from true intelligence is having to pee and poop. Peeing and pooping is the foundation of all higher level operations. I poured water on my PC and the LLM I was running said “I think” right before committing suicide

      • unpossum@sh.itjust.works · 11 hours ago

        As another technologist, I have to remind everyone that unless you subscribe to some rather fringe theories, humans are also based on standard physics.

        Which is math. All the way down.

        • NewOldGuard@lemmy.ml · 5 hours ago

          As a mathematician, it should be noted that the mathematics of physics aren’t laws of the universe, they are models of the laws of the universe. They’re useful for understanding and predicting, but are purely descriptive, not prescriptive. And as they say, all models are wrong, but some are useful

          • Aceticon@lemmy.dbzer0.com · 5 hours ago

            As a random person on the Internet I don’t actually have anything to add but felt it would be nice to jump in.

        • HereIAm@lemmy.world · 11 hours ago

          I agree, the maths argument is not a good one. While a neural network is perhaps closer to what a brain is than a bare CPU (or a clock, as brains were compared to in the olden days), it would be a very big mistake to equate the two.

          • Iconoclast@feddit.uk · 5 hours ago

            Consciousness (the fact of experience) doesn’t necessarily need to be linked to intelligence. It might be, but it doesn’t have to be. An LLM is almost certainly more intelligent than an insect, but it is most likely like nothing to be an LLM, while it probably is like something to be an insect.

            • partofthevoice@lemmy.zip · 4 hours ago

              Isn’t it kind of eerie that you can only suppose it must be “like something” to be an insect from the very particular bias of being human? We’re projecting the idea that “it’s like something to be something [as a human]” onto the experience of other things.

              How would we describe what it’s like? Would something poetic suffice, such as “it’s like being a leaf in the wind, with a weak preference for where you blow but no memory of where you’ve been”? … but all of that is human concepts, human experience decomposed into a subset of more human experiences (really weird, the recursive nature of experience and concepts).

              I think the idea of “what it’s like…” has some interesting flaws when applied to nonhumans. It kind of presupposes that insects are lesser, in a way. As though we can conceptualize what it’s like to be them merely by understanding a stricter subset of what it’s like to be human.

              • Iconoclast@feddit.uk · 4 hours ago

                I can only suppose that of other people as well. There’s no way to measure consciousness. The only evidence of its existence is the fact that it feels like something to be me from my subjective perspective. Other humans behave the way I do so I assume they’re probably having similar experiences but I have no idea what it’s like to be a bat for example.

                However, answering the question “what it’s like to be” is not relevant here. What’s relevant is that existence has qualia at all.

                • partofthevoice@lemmy.zip · 2 hours ago

                  However, answering the question “what it’s like to be” is not relevant here. What’s relevant is that existence has qualia at all.

                  Does existence “have qualia?” That treats qualia almost like it’s ontological, if I’m interpreting you correctly. Yet, qualia can only exist from the perspective of a being with the capacity to model a (seemingly external) world via said qualia. There is no magic qualia sauce we can embed inside something.

                  Qualia, I think, is a process of information reduction… but also it’s a flavor of information interrogation. Because, reducing electromagnetic radiation to “visual perception” happens inside light sensors too — albeit without counting as “qualia.”

                  What would you say counts as “qualia?” Or rather, what are its dependencies?

          • xploit@lemmy.world · 11 hours ago

            Obligatory xkcd… we’re just meatbags somewhere to the left in “Purity” (https://xkcd.com/435/).

            On a more serious note, there’s plenty to explore there, and there are some potentially interesting links to quantum physics and the stuff in our brains, as well as how certain drugs can completely disrupt our consciousness (ever had an operation?) and how that could link up. But there is obviously no definitive answer.

            At best consciousness is whatever flavour of philosophical interpretation/explanation you like at any given time.

      • Silver Needle@lemmy.ca · 11 hours ago

        As someone who knows a thing or two about biology I think LLMs strip away >90% of what makes animals think.

    • MagicShel@lemmy.zip · 12 hours ago

      The major thing AI lacks is continuous parallel “prompting” through a variety of channels including sensory, biofeedback, and introspection / meta-thought about internal state and thinking.

      AI currently transforms a given input into an output. However, it cannot accept new input in the middle of producing an output. It can’t evaluate the quality of its own reasoning except through trial and error.

      If you had 1000 AIs operating in tandem and fed a continuous stream of prompts in the form of pictures, text, meta-inspection, and perhaps a simulation of biomechanical feedback with the right configuration, I think it might be possible to create a system that is a hell of an approximation of sentience. But it would be slow and I’m not sure the result would be any better than a human — you’d introduce a lot of friction to the “thought” process. And I have to assume the energy cost would be pretty enormous.

      In the end it would be a cool experiment to be part of, but I doubt that version would be worth the investment.
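
      As a very rough sketch of that “continuous parallel prompting” idea (the channel names are invented, and a print stands in for the model call): several input streams feed one queue, and the agent reacts to whichever channel fires next rather than mapping a single prompt to a single output.

      ```python
      import asyncio
      import random

      async def channel(name, queue):
          """One input stream (sensory, biofeedback, introspection, ...)."""
          while True:
              await asyncio.sleep(random.uniform(0.1, 0.5))
              await queue.put(f"{name}: new observation")

      async def agent(queue, steps=10):
          for _ in range(steps):
              event = await queue.get()            # interruptible by any channel
              print(f"thinking about -> {event}")  # stand-in for a model call

      async def main():
          queue = asyncio.Queue()
          feeds = [asyncio.create_task(channel(n, queue))
                   for n in ("vision", "text", "biofeedback", "introspection")]
          await agent(queue)
          for f in feeds:
              f.cancel()

      asyncio.run(main())
      ```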

    • ExFed@programming.dev · 12 hours ago

      It could also be that it lacks the machinery to feel any emotions at all. You don’t (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don’t (normally) have to train people to have empathy or compassion.

      I argue that our obsession with AI is, itself, a misalignment with our environment; it disproportionately tickles psychological reward centers which evolved under unrecognizably different circumstances.

      • Havoc8154@mander.xyz · 8 hours ago

        I guess you don’t have children.

        You absolutely do have to train them to be afraid of bears, heights, and every fucking thing you can imagine. You absolutely do have to teach them empathy and compassion. There may be some nugget of instinct, but without reinforcement it might as well not exist.

        • ExFed@programming.dev · 7 hours ago

          Hah, okay, you got me there. From my understanding, though, that’s mostly because kids are still figuring out what’s “normal”, so their fear instinct isn’t nearly as strong. I guess I should’ve stuck to the more instinctive sources of fear…

          Regardless, that’s not really my point. My point is an LLM doesn’t rely on machinery in the same way that a human brain does. That doesn’t make AI “worse” or “better” overall, but it does make it an awful replacement for other humans.

      • dblsaiko@discuss.tchncs.de · 12 hours ago

        You don’t (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don’t (normally) have to train people to have empathy or compassion.

        So what are you implying about people who don’t experience these?

        • ExFed@programming.dev · 8 hours ago
          What am I implying? That their machinery is abnormal and they likely need assistance to live normal, healthy lives. That’s literally why the fields of psychiatry and psychology exist: healthy people don’t need doctors and therapists. Do you disagree?