Elon Musk Livestream of Version 12 FSD

Elon Musk livestreamed his drive with version 12 of FSD.

It pulled over to the side and parked.

It handled a roundabout and new road work.

There was one intervention in a 50-minute drive: the car was stopped at an intersection and then started to move when the left-turn lane's light turned green.

Tesla will release the FSD v12 alpha in shadow mode. It will then move to a beta release to the public.

14 thoughts on “Elon Musk Livestream of Version 12 FSD”

  1. I’ve never tried a self-driving car, but I can’t imagine being able to relax when I have to constantly be poised to step in when it screws up. Seems more stressful than driving the car myself.

    • I would not mind if it handles stop and go traffic on the freeway, or long freeway drives, but I would not trust it with complex stuff. I don’t have to commute with traffic most of the time, and I rarely go on long road trips…though I might at some point. Needless to say, it is not worth many thousands of dollars to me.

  2. Putting aside the Tesla/FSD debate, it's an amazing demo of just how well a neural network can move in the real world using vision in mid-2023. It shows that NN technology is fully viable for many applications, from robotics to weapons systems. Project this forward a number of years (imagine such systems in, say, 2030), with many players entering the space, and it demonstrates a profound change in real-world movement for our machines.

  3. Well, one intervention in 40 minutes, that is one intervention in… guess 15-25 miles? Which is about the same rate of interventions that the current version 11.4.4 has. So no dramatic improvement…

    Brian, could you take a look at the community FSD tracker? It seems that no Tesla blogger wants to address the issue that FSD distance-to-intervention is increasing extremely slowly. The percentage of rides without any intervention is now levelling off at about 50%.

    • V12 is a new architecture. We don’t know how fast it’s improving. We just know that Tesla chose to release a demo at the point when it got about as good as the current beta.

      • Exactly. This isn’t another incremental improvement, it’s an entirely new system that is completely self-taught. They didn’t even tell it what a stop sign or stoplight is. It literally learned all that on its own, from zero, just by observing real drivers.

        (It’s funny but really instructive that they had to force-teach it to be a ‘bad’ driver at stoplights! I also noticed that on the protected left turn it swung into the outside lane instead of the inside one, just like the bad human drivers it undoubtedly learned that from!)

        Of course there will still be some imperfections, but that issue was nothing. If it were a particularly hard or challenging edge case it would be worrisome, but it wasn’t; it’s fairly simple. It must not have seen that exact scenario enough times, but the ‘solution’ is literally just a matter of time.

        That was an amazing, and almost perfect, demonstration of an entirely new software paradigm. I’ve totally flip-flopped from thinking they’ll never solve autonomy to thinking they’ve essentially done it. At this point it’s literally just a matter of giving the brand-new 15-year-old driver more experience. The technology part looks solved, and I can’t believe I’m saying that.

    • “…no Tesla blogger wants to address the issue that FSD distance-to-intervention increases extremely slowly…”

      I heard Musk say exactly this himself in an interview on self-driving. He said they would get great gains from some change and then it would plateau.

      I wonder if there is not some need for an underlying hard, fast rule-based system that would keep you safe no matter what gibberish the AI spits out.

  4. I don’t think you can really call it “full” self driving if it requires interventions that frequently. The whole POINT of FSD is lost if you have to pay attention the whole while, poised to snatch back control.

    It won’t really be good enough until it requires interventions no more often than a new driver who’s qualified to drive on their own. Which is far from perfection, mind you; teens who have recently learned to drive have very high accident rates.

    But if the “FSD” system requires somebody supervising, it hasn’t learned to drive yet.

    • Maybe there will be a halfway point to FSD where Tesla can do FSD on freeways (where driving is simpler).

      • There is a point of automation where the supervising human will be too distracted to take over control efficiently. At that point drivers will either take the risk, albeit illegally, and stop really monitoring, which would be a de facto mass acceptance. Or there will be mandates to force more active human attention, either by intentionally introducing errors or by some attention-monitoring system. The second approach effectively makes SDC useless. Hence it will be in eternal beta, and the drivers would be forced to “opt in” and bear the responsibility.

        • Yet there are already fully driverless cars on the road in San Francisco, even though they are not perfect.

      • What’s needed is for it to fully handle some significant subset of driving (highway driving would be good) and, critically, to recognize when exceptions are coming up far enough in advance to warn you, so that you don’t have to be continually on alert with nothing to actually do.

    • Only Tesla uses “FSD”; it means a system that requires a driver and is not autonomous. This FSD is incredibly bad. An intervention every 50 miles is completely useless; GM goes 90,000 miles without causing an accident, and without a driver. FSD means it always has to have a driver with insurance.

Comments are closed.