" …
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.

Large language models learn by predicting which word comes next in a sequence. They have been trained on vast quantities of human-generated text, and the results have been remarkable: ChatGPT, Claude, and Gemini have demonstrated an ability to generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.

His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail (the approach that makes generative AI both powerful and prone to hallucination), JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience."
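
To make the contrast concrete, here is a minimal toy sketch (hypothetical PyTorch code, not LeCun's or AMI's implementation): the generative objective is graded on reconstructing the target in full detail, while the JEPA-style objective is graded only on matching the target's learned embedding, so surface detail the encoder discards never enters the loss.

```python
import torch
import torch.nn as nn

D_IN, D_LATENT = 784, 64  # arbitrary toy sizes

encoder = nn.Sequential(nn.Linear(D_IN, D_LATENT), nn.ReLU(),
                        nn.Linear(D_LATENT, D_LATENT))
predictor = nn.Linear(D_LATENT, D_LATENT)  # predicts the target's *representation*
decoder = nn.Linear(D_LATENT, D_IN)        # generative baseline: predicts the target itself

x = torch.randn(32, D_IN)  # context (e.g. the visible part of an image)
y = torch.randn(32, D_IN)  # target (the part to be predicted)

# Generative objective: penalized for every mismatched detail of y.
gen_loss = nn.functional.mse_loss(decoder(encoder(x)), y)

# JEPA-style objective: match only y's embedding (target encoder frozen here,
# standing in for the usual EMA/stop-gradient machinery).
with torch.no_grad():
    target_repr = encoder(y)
jepa_loss = nn.functional.mse_loss(predictor(encoder(x)), target_repr)
```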

  • minorkeys@lemmy.world · 18 hours ago

    Still gonna use it to enslave and possibly enable a culling of the world population.

  • melfie@lemy.lol · 18 hours ago

    I think the “AI industry” is already doing a fantastic job proving they got it wrong.

  • merc@sh.itjust.works · 1 day ago

    LLMs are an obvious dead end when it comes to actual “intelligence” or understanding how the world works.

    But, this sounds like a “draw the rest of the owl” situation.

    “JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail.”

    Oh, it’s that simple is it? Just have it “learn abstract representations of how the world works”. Amazing how nobody thought to do that before!

    I think I understand the distinction they’re trying to draw. Current models are trained on billions of pictures of cats and billions of pictures of dogs. You feed one an image of Fido and it finds a point in 2500-dimensional space and knows whether that point is in the “cat space” or “dog space”. It can be very good, but it doesn’t have any “understanding” of what makes something a cat vs. a dog. Humans, OTOH, aren’t trained on billions of images. But, they learn about things like “teeth” and “whiskers” and “snouts” and “eyes”. Within their knowledge of eyes, they spot that vertical slit pupils are unusual and different, and part of what makes something “catlike”. AFAIK, nobody has ever managed to create a system that learns abstract features without intensive human training.
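
    A toy sketch of that “point in cat space vs. dog space” decision (hypothetical numbers; a real model’s embeddings would come from a trained network). Note that no named concept like “whiskers” or “pupils” appears anywhere in the computation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cat_centroid = rng.normal(size=2500)  # centre of the learned "cat space"
    dog_centroid = rng.normal(size=2500)  # centre of the learned "dog space"

    def classify(embedding: np.ndarray) -> str:
        # the whole "decision": which centroid is closer in 2500-dim space
        d_cat = np.linalg.norm(embedding - cat_centroid)
        d_dog = np.linalg.norm(embedding - dog_centroid)
        return "cat" if d_cat < d_dog else "dog"

    fido = dog_centroid + 0.1 * rng.normal(size=2500)  # an image embedded near dog space
    print(classify(fido))  # -> "dog", with no notion of teeth, whiskers, or snouts
    ```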

    I like that they’re trying something new. But, are they counting on a massive breakthrough on a problem that has existed since people first started theorizing about AI? Or, is it just a matter of refining a known process?

  • davidgro@lemmy.world · 2 days ago

    I’m overall still skeptical, but this does sound a lot more like how I imagine a true AI would work. I’ve also thought LLMs were a dead end for a while now.

  • Pennomi@lemmy.world · 2 days ago

    Good luck getting your model to learn how to code through physical experience instead of through text.

    • Cethin@lemmy.zip · 1 day ago

      I’m skeptical, but it makes a lot more sense. You don’t just “learn to code.” Writing the text is the easy part. It’s about solving problems. This is next to impossible to do reasonably without actually understanding what the solution needs to do and what capabilities you have to do it. That’s why the LLM method has produced such shit code. It’s just reproducing text. It doesn’t actually understand the problem or what it can use to get it done.

    • Tim@lemmy.snowgoons.ro · 17 hours ago

      Coding is a solved problem; people with zero understanding can do it by copypasta from stack overflow, and similarly skilled LLMs can do it right now, cheaper. If you’re a “coder”, you have a lovely hobby but no career. Sorry.

      If you’re a software engineer though, you have nothing to fear from current LLMs. But there is much more chance of LeCun’s models learning engineering - i.e. problem solving, in which writing code is just one of the tools, and not even the most important one - through physical experience and not just text. It is, after all, how all the software engineers today did the vast majority of their learning.

    • SuspciousCarrot78@lemmy.world (OP) · 2 days ago

      Tell it to LeCun. He won the Turing Award. I figure he knows what he’s doing. Let him cook I sez.

      PS: I didn’t downvote you. It’s good to be skeptical.

      • Pennomi@lemmy.world · 2 days ago

        I dunno, the I-JEPA paper only dealt with image classification, and it looks like it isn’t scaling with larger model sizes like the other techniques.

        Besides, Meta was one of the biggest failures in AI model building while he was there. Not exactly a confidence booster.

        I’m extremely skeptical if he’s truly raising money off of name recognition alone instead of a real demo frontier model that just needs scaling.

        • SuspciousCarrot78@lemmy.world (OP) · 2 days ago

          Yep. And per the article’s conclusion -

          “…The question is whether being right about the problem is the same as being right about the solution.”

  • arcine@jlai.lu · 1 day ago

    If he thinks there is any promise in any sort of AI at all, he is as idiotic as the lot of them.

    Switching the sauce doesn’t make a shit sandwich any more edible than it was before…

  • xerxes@piefed.social · 2 days ago

    It’s pretty crazy to me that zuck let an actual academic like Yann LeCun go for a kid like Alex Wang. Seems like some very short-term thinking.

    • Pokexpert30 🌓@jlai.lu · 2 days ago

      Yann is the annoying nerd that tells you the truth. Alex is the cool kid that tells you what you want to hear.

      • jj4211@lemmy.world · 1 day ago

        To be fair, the financial market is deeply rewarding the “tell us what we want to hear” approach.

        Even if the time should come where the chickens come home to roost, the key players will have gotten billions out of the mania in the meantime.

        So on one hand you have someone making a fair pessimistic assessment of current approaches that isn’t attractive to investors and his suggestion is very unproven. On the other hand you have someone that agrees with whatever the investors want to believe. The latter is, in this situation, an easy payday.

  • trolololol@lemmy.world · 22 hours ago

    This page is broken. I accepted the cookies and instead of letting me read the article it shows me a full page about cookies that I can’t close.

  • Bazell@lemmy.zip · 2 days ago

    I believe only in the success of AI systems based on real neurons (living tissue), not just “the models”. The problem with all current AI systems is that they are just modelling how real AI would look and behave. I appreciate his attempts to turn AI slop into something more meaningful, but I do not comprehend how he is going to achieve this without creating some completely new and revolutionary approach to resembling neurons in computers.

    We are not even modelling real neurons. What we have are just big functions with lots of parameters that calculate an output number from an input. That’s all.
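
    That point, taken literally (a toy sketch of standard feed-forward arithmetic; the sizes are arbitrary): a “neuron” here is just a weighted sum and a max(0, ·), nothing like living tissue.

    ```python
    import numpy as np

    def layer(x, W, b):
        return np.maximum(0.0, W @ x + b)  # ReLU "neurons": pure arithmetic

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)  # 8 inputs -> 16 "neurons"
    W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)   # 16 -> 1 output number

    x = rng.normal(size=8)          # input vector
    y = W2 @ layer(x, W1, b1) + b2  # output number from input; that's all
    print(y.item())
    ```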

    • SuspciousCarrot78@lemmy.world (OP) · 2 days ago

      Different approach, yeah. JEPA learns world models instead of predicting text. Whether that closes the gap with how biology actually works…that’s what he’s spending the billion to find out.

    • eleitl@lemmy.zip · 1 day ago

      They don’t even have state in the weights blob. It’s all tokens in an input vector.
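
      A sketch of that statelessness (the `model` function below is a hypothetical stand-in for any frozen LLM): the weights never change between calls, so the only “memory” is the token list you re-feed each turn.

      ```python
      def model(tokens: list[str]) -> str:
          # stand-in for a frozen network: output depends only on the input tokens
          return f"reply after {len(tokens)} tokens"

      history: list[str] = []
      for user_turn in ["hello", "what did I just say?"]:
          history.append(user_turn)
          reply = model(history)  # all "state" is re-fed as input on every call
          history.append(reply)
      print(history)
      ```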

  • Not_mikey@lemmy.dbzer0.com · 2 days ago

    What exactly is this for? I understand LLMs have their limits with understanding physical reality, but at least they have a use case of theoretically automating the “symbolic work”, i.e. moving symbols around on a screen or piece of paper, that white collar workers do.

    Yes, it’ll never be able to cook a meal or change a lightbulb, but neither will this without a significant advance in robotics to embody this AI. What’s the use case? Being able to better tell you how to throw a ball than a person?

    • SuspciousCarrot78@lemmy.world (OP) · 2 days ago

      World models aren’t just for robotics (though they definitely WILL be used for that). They’re for reasoning under uncertainty in domains where you can’t see the outcome in advance. E.g.:

      Medical diagnosis: you can’t physically “embody” whether a treatment will work. But a system that understands disease progression, drug interactions, and physiological constraints (not by pattern-matching text, but by learning causal structure) - well, that’s fundamentally different from an LLM hallucinating plausible-sounding symptoms.

      Financial modeling, engineering simulations, climate prediction…all domains where the “embodied experience” is simulation, not physical interaction. You learn how the world actually works by understanding constraints and causality, not by predicting the next token in a Bloomberg article.

      The point isn’t “robots will finally work.” The point is: understanding causality is cheaper in the long run and more reliable than memorizing correlations. Embodiment is just the training signal that forces you to learn causality instead of surface patterns.

      My read is that LeCun’s betting that a system trained to predict abstract state transitions in any domain (be that medical, financial, physical) will generalize better / hallucinate less than one trained to predict text.
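
      Something like this, maybe (a hedged toy sketch in PyTorch, definitely not AMI’s actual architecture): learn a transition function over latent states, trained to match the encoding of the next observation rather than the raw observation itself.

      ```python
      import torch
      import torch.nn as nn

      D_OBS, D_LATENT, D_ACTION = 128, 32, 4  # arbitrary toy sizes

      encode = nn.Sequential(nn.Linear(D_OBS, D_LATENT), nn.Tanh())
      transition = nn.Linear(D_LATENT + D_ACTION, D_LATENT)  # learned dynamics

      obs_t, obs_next = torch.randn(16, D_OBS), torch.randn(16, D_OBS)
      action = torch.randn(16, D_ACTION)

      z_t = encode(obs_t)
      z_pred = transition(torch.cat([z_t, action], dim=-1))

      # match the *encoding* of the next observation, not its raw detail
      loss = nn.functional.mse_loss(z_pred, encode(obs_next).detach())
      loss.backward()
      ```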

      Whether that’s true? Fucked if I know - that’s why it’s (literally) the billion-dollar question. If he cracks it…it’s big.

      But “it won’t cook dinner” misses the point (and besides which, it might actually cook dinner and change lightbulbs, so…)