TL;DR: Tesla self-driving tech is becoming less safe per mile, according to Tesla’s own data.

Q1 2025 was 2.5% worse than Q1 2024.

Q2 2025 was 2.8% worse than Q2 2024.

Not a great look.
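Those percentages are presumably derived from Tesla’s quarterly miles-per-accident figures. A rough sketch of the year-over-year arithmetic, with placeholder mileage numbers rather than Tesla’s actual data:

```python
# Rough sketch of the year-over-year comparison. The mileage figures below
# are placeholders for illustration, NOT Tesla's published numbers.
miles_per_accident_q1_2024 = 7_630_000  # hypothetical
miles_per_accident_q1_2025 = 7_440_000  # hypothetical

# Fewer miles between accidents means worse safety per mile.
change = (miles_per_accident_q1_2025 - miles_per_accident_q1_2024) / miles_per_accident_q1_2024
print(f"{change:+.1%}")  # -2.5% -> "2.5% worse than Q1 2024"
```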

    • KayLeadfoot@fedia.io (OP) · 1 day ago

      If you ask Tesla drivers, they often purchased the car because someone told them Teslas are safe, despite Teslas being, statistically, the most lethal car you could drive in America.

      We live in a dystopian information environment.

      • slaneesh_is_right@lemmy.org · 9 hours ago

        I remember back when Elon bragged about how safe they are and how they smashed the safety tests (American ones, not real ones), and I thought, wow, interesting, and looked at the tests. Literally every test they smashed was because the car is super bottom-heavy, since that’s where the battery is. You could weld some railroad rails under every car and they would do just as well. I even remember thinking: oh, he’s gonna get roasted once people realise that. Still waiting.
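        (For context on the bottom-heavy point: the rollover part of those ratings largely comes down to NHTSA’s Static Stability Factor, which is just track width over twice the centre-of-gravity height. A minimal sketch with made-up dimensions, not any real car’s specs:)

        ```python
        # Static Stability Factor: SSF = track width / (2 * centre-of-gravity height).
        # A floor-mounted battery pack lowers the CoG, which raises the SSF and the
        # rollover score. The dimensions below are invented for illustration.
        def static_stability_factor(track_width_m: float, cog_height_m: float) -> float:
            return track_width_m / (2 * cog_height_m)

        print(static_stability_factor(1.66, 0.46))  # low, heavy floor pack -> ~1.80
        print(static_stability_factor(1.66, 0.66))  # same track, higher CoG -> ~1.26
        ```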

        • KayLeadfoot@fedia.io (OP) · 2 hours ago

          “It broke the test equipment” is not NEARLY the clapback Elon thinks it is. That’s actually really bad, and probably demonstrates the lethality of the vehicle (it’s just too heavy).

          Also, I love your username. Show me the lie.

  • AusatKeyboardPremi@lemmy.world · 2 days ago

    This is taking “testing in production” to a whole new level. How did this get past the regulators?

    On second thought, does any country have concrete regulations for self-driving vehicles? I am curious what they would be, and how they would quantify the thresholds, since no self-driving solution will be 100% accident-free.

  • FishFace@lemmy.world · 1 day ago

    End-to-end ML can be much better than hybrid (or fully rules-based) systems. But there’s no guarantee, and you have to actually measure the difference to be sure.

    For safety-critical systems, I would also not want to commit fully to an e2e system, because the worse explainability makes it much harder to be confident there is no strange failure mode that you haven’t spotted but that may be, or may become, unacceptably common. In that case, you would want to be able to revert to a rules-based fallback that may once have looked worse-performing but has turned out to be better. That means you can’t just delete and stop maintaining that rules-based code if you have any kind of long-term thinking. Hmm.
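    (To make that concrete, a minimal sketch of the fallback wiring I mean; the function names and the health threshold are invented, this isn’t anyone’s actual stack:)

    ```python
    # Hedged sketch: the e2e planner drives by default, but the old rules-based
    # planner stays maintained and can be swapped back in if live monitoring
    # shows the e2e path degrading. All names and thresholds are illustrative.
    def plan(obs, e2e_planner, rules_planner, e2e_health_score, min_health=0.99):
        # e2e_health_score: whatever safety metric you monitor in the field
        if e2e_health_score >= min_health:
            return e2e_planner(obs)   # learned policy (normally a NN forward pass)
        return rules_planner(obs)     # auditable, deterministic fallback logic
    ```

    And of course the fallback only helps if the rules-based path keeps being tested against the same scenarios as the e2e one.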

    • xthexder@l.sw0.com · 23 hours ago

      I’ve thought about it in the past… what if there’s a bug in an update and, under some specific conditions, the car just veers to the side and crashes? There’s a possibility that every self-driving Tesla travelling west into a sunset suddenly slams on the brakes, causing a pile-up. Who knows what kind of edge cases could exist?
      Even worse, what if someone hacks the wireless update and does something like this intentionally?

    • Match!!@pawb.social · 1 day ago

      yeah, I wanna see what the fuck metrics made them think this was a good idea. What’s their mean average precision? Did they even check recall@1 for humans on the road?
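      (For anyone not steeped in detection metrics, a toy sketch of what recall for pedestrians would mean here; the detections below are made up, not anything from Tesla’s eval suite:)

      ```python
      # Toy sketch of detection recall: of the pedestrians actually present,
      # what fraction did the perception stack find? Data is invented.
      ground_truth = {"ped_1", "ped_2", "ped_3", "ped_4"}
      detections   = {"ped_1", "ped_3"}

      recall = len(ground_truth & detections) / len(ground_truth)
      print(f"pedestrian recall: {recall:.2f}")  # 0.50 -- you'd want this very close to 1.0
      ```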

  • yes_this_time@lemmy.world · 1 day ago

    Could this be attributed to the driver mix changing?

    It’s quite possible Tesla drivers are worse in 2025 than they were in 2024.
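    (To illustrate what a mix shift could do to a per-mile stat, a made-up example; the numbers are invented, not Tesla’s:)

    ```python
    # Made-up illustration of a mix-shift effect: per-group crash rates stay
    # flat, but the overall per-mile rate worsens because riskier drivers
    # contribute a larger share of the miles.
    def overall_rate(groups):  # groups: [(miles, crashes per million miles), ...]
        miles = sum(m for m, _ in groups)
        return sum(m * r for m, r in groups) / miles

    y2024 = [(800, 0.10), (200, 0.40)]  # hypothetical: mostly cautious early adopters
    y2025 = [(600, 0.10), (400, 0.40)]  # hypothetical: more miles from riskier drivers

    print(overall_rate(y2024))  # 0.16
    print(overall_rate(y2025))  # 0.22 -- looks "worse" with no change in the tech
    ```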

  • Buffalox@lemmy.world · 2 days ago (edited)

    Ha, funny. I strongly suspected this could happen when I heard they have quotas for how many changes the people training the AI make to its behavior. That’s a recipe for flooding the system with bad data.
    No AI can be better than the data it’s given, and if X is any indicator of that, it’s just about a certainty that Tesla’s AI will rot in misinformation.

  • madcaesar@lemmy.world · 2 days ago

    This is actually in line with the AI I’ve used… for some reason it just turns to shit after a while. I’m not sure why.

    • KayLeadfoot@fedia.io (OP) · 1 day ago

      I’ve also noticed that.

      Intrinsic to the tech, I think. It’s not that it gets worse, it just gets different (intentionally, as a feature).

      The teams responsible for making sure “different” trends in the direction of “better” are universally very new at their jobs and at the technology.

      So, serious organizations are figuring out how to test and deploy consistently better AIs. I don’t think Elon Musk runs a single serious organization, other than arguably SpaceX.
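      (The unglamorous version of that is regression gating: a new model version only ships if it doesn’t score worse than the current one on a fixed eval suite. A made-up sketch, not anyone’s real pipeline:)

      ```python
      # Made-up sketch of a release gate for model updates. "score" is whatever
      # scenario-level safety metric the eval suite produces; names are invented.
      def mean_score(model, eval_cases):
          return sum(model(case) for case in eval_cases) / len(eval_cases)

      def should_deploy(candidate, current, eval_cases, margin=0.0):
          # Ship only if the candidate is at least as good as what is already deployed.
          return mean_score(candidate, eval_cases) >= mean_score(current, eval_cases) + margin
      ```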

  • Visstix@lemmy.world · 2 days ago

    It will be fine, they said. It will get better, they said.

    Somehow it gets even worse.

  • Gladaed@feddit.org · 1 day ago

    Replacing code with networks has the potential to be much faster with similar quality. The idea is good.

      • Gladaed@feddit.org · 1 day ago

        No, in general. You want to do as little pre- and post-processing as possible for neural networks.

        • 3abas@lemmy.world · 1 day ago

          People in this thread don’t understand what machine learning is, and they think Tesla’s FSD is ChatGPT.

          I’m an early Tesla enthusiast and I purchased FSD when it was cheap. I still don’t have what I purchased, and they no longer claim on the website the things they used to claim it would do. Elon is a Nazi con man who took a great product and hired brilliant engineers to build amazing tech, only to taint it by manipulating an election to install a dictator and Sieg Heiling in celebration.

          But Elon is just the rich asshole that runs the company, the brilliant engineers made amazing software that is still amazing despite not fulfilling Elon’s fraudulent sales pitches.

          This is not an endorsement of Tesla; I hope it crashes and burns as long as he benefits from it, and I wish we could nationalize it and all its very valuable assets. But my “supervised” FSD handles all my driving, and I haven’t had a single disengagement in many months. It takes me from my driveway to any address with ease, and my passengers don’t even realize I’m not driving.

          Sticking your head in the sand and pretending it’s not a real product because the CEO turned out to be a Nazi isn’t intellectually honest or useful. Normalize the idea of nationalizing Tesla; it was heavily funded by our tax dollars, after all.

          FSD is improving at an incredible pace, and it would be very beneficial to society to nationalize and open-source it; otherwise Elon the Nazi capitalist gets to benefit from it alone. China has incredible capacity to collect in a short time the training data Tesla spent a decade collecting, and I have no doubt they’ll have an FSD-comparable product soon. I do doubt they’ll open-source it.

          inb4 someone calls me a Republican or a Russian because I said FSD is real. Find someone who has it and ask them for a demo, and judge for yourself.