Tesla FSD For Safer Driving and Lower Insurance Costs

There are reports that Tesla FSD Supervised is enabling driver safety scores to rise from 90 to 100, which reduces the Tesla insurance premium by $70 per month. This nearly offsets the $99-per-month cost of an FSD subscription.

For two years, I have predicted that Tesla FSD Supervised becoming safer than a human driver would enable lower Tesla insurance costs, and that this would drive FSD adoption toward 50% in the USA, where insurance costs are higher.

There is still the problem that Tesla does not insure non-Tesla cars. People who own both a Tesla and a non-Tesla would have to carry separate insurance policies for each, and this might not work out financially.

If there is a 10% increase in FSD adoption in the USA, then this would be about 200,000 more FSD units sold. At a 30% purchase and 70% subscription mix, this would be about $480 million in purchase revenue and $168 million in subscription revenue in the first year, with $168 million in annual recurring revenue.

If there is a 30% increase in FSD adoption (new and used cars) in the USA, then this would be about 600,000 more FSD units sold. At a 30% purchase and 70% subscription mix, this would be about $1.5 billion in purchase revenue and ~$500 million in subscription revenue in the first year, with $500 million in annual recurring revenue.

If there is a 10% increase in FSD adoption in the USA, China and Europe, then this would be about 800,000 more FSD units sold by Q1 2025. At a 30% purchase and 70% subscription mix, this would be about $1.9 billion in purchase revenue and $960 million in subscription revenue in the first year, with $960 million in annual recurring revenue.

If there is 50% FSD adoption in the USA, China and Europe, then this would be about 4 million more FSD units sold by Q1 2025. At a 30% purchase and 70% subscription mix, this would be about $9.6 billion in purchase revenue and ~$3.3 billion in subscription revenue in the first year, with $3.3 billion in annual recurring revenue.
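The scenarios above all follow the same arithmetic. A minimal sketch, assuming an $8,000 one-time FSD purchase price and a $99/month subscription (prices the article's rounded figures appear to be based on, not stated explicitly in it):

```python
# Sketch of the revenue model behind the adoption scenarios above.
# Assumed inputs: $8,000 one-time purchase, $99/month subscription.
PURCHASE_PRICE = 8_000       # USD, one-time
SUB_PRICE_MONTHLY = 99       # USD per month

def fsd_revenue(extra_units, purchase_share=0.30):
    """Return (purchase revenue, first-year subscription revenue).

    Subscription revenue for the first year is also the annual
    recurring revenue, since subscriptions renew monthly.
    """
    purchased = extra_units * purchase_share
    subscribed = extra_units * (1 - purchase_share)
    purchase_rev = purchased * PURCHASE_PRICE
    sub_rev = subscribed * SUB_PRICE_MONTHLY * 12
    return purchase_rev, sub_rev

# 10% US adoption scenario: ~200,000 extra units
p, s = fsd_revenue(200_000)
print(f"purchase: ${p/1e6:.0f}M, subscription ARR: ${s/1e6:.0f}M")
# -> purchase: $480M, subscription ARR: $166M
```

At these assumed prices the 200,000-unit case yields $480 million in purchases and about $166 million in subscription ARR, close to the article's rounded $168 million; the 4-million-unit case yields $9.6 billion and about $3.3 billion, matching the final scenario.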

6 thoughts on “Tesla FSD For Safer Driving and Lower Insurance Costs”

  1. I drove a Hyundai with driver assist for 4 hours yesterday, from upstate Connecticut back home to midtown Manhattan. Driver assist beeps and chimes at you if you veer into another driver’s lane. It gently co-steers you back into the proper lane and beeps if you go over the line without signaling first (something that might happen if you think you are alone on the road).
    These things are helpful, but the driver is still in control.
    I really don’t like the idea of having to “take over” in a microsecond when the FSD fails to anticipate a dangerous situation. In NYC, that happens routinely even with driver assist, on crowded streets and the FDR Drive. The car senses when there is an unsafe driving distance, even if it’s just for a few seconds, and beeps at you while you’re already turning, accelerating, or braking out of the way. What would FSD do in such situations? It might not be able to drive at all.
    But I really, really dislike the idea of having to jump in in an instant to prevent a crash. Better to plan, execute, and adjust as situational driving requires than to be on alert for failures of the FSD all the time. That’s one step removed, without the engagement necessary for smooth, uneventful driving. It actually promises LOTS of eventful driving. I don’t see how this can be safer.

    For insurance purposes, all that matters is the number of accidents per mile and their severity. The last point should not be underestimated. If there are fewer accidents, but the ones that do occur tend to be fatal – like FSD running the car at full speed into a building or truck, which has happened – that matters. A skidded stop and a minor crash are far different things from a full-speed fatal crash.

  2. I mean, where are they with 12.5? At 140-160 miles per critical disengagement.

    20,000 miles per critical disengagement is not enough to be safe. Even 200,000 miles is not.

    The only way to make it happen is to get to real AI, not just a trained neural net.

    He will need to delay the robotaxi once more and then take five years to release it. I mean, it is good that they are trying. That could increase safety by a lot. Trying and developing such tech is great. Marketing it with false statements is not.

    • Oh, come on. I’ll agree that at 140-160 miles until critical disengagement you can’t realistically call it full self driving; it’s not FULL self driving until you can safely take a nap while the car is in motion, and that’s only about three hours between occasions when the driver has to take over. You could call that “emergency self driving”; it’s still good enough to take you to an emergency room if you have a medical emergency while driving.

      At 20,000 miles, you’re at roughly 400 hours between disengagements, and it’s probably a lot safer than a driver who is impaired by lack of sleep.

      At 200,000 miles? We routinely let humans less safe than that drive unsupervised.

      Teens with their full license typically run to 70,000 miles between accidents. Young adults are at about 137,000 miles between accidents. 200,000 is about where 21-29 year olds end up.

      And remember, just because the car ‘thought’ it was better that a human take over doesn’t mean that you’d have had an accident otherwise. So miles to intervention is probably considerably shorter than miles to accident.

      • Robotaxis most likely won’t have any steering wheels. So how would an inattentive driver correct a potentially life-threatening situation? The system needs to be very good so that that chance is minimized.

        Now drivers need to be attentive and prepared to take over. But I would really want to see someone drive 20,000 miles with hands near the steering wheel and then react in time when the situation arises. He would get so used to the car driving itself that he wouldn’t react properly.

        • That’s why I say that, except for emergency situations like driving me to the hospital if I have a heart attack on the road, FSD isn’t worth it until you can go to sleep safely behind the wheel.

          Not driving, but having to follow everything going on and be ready to take over at an instant’s notice? That’s my idea of hell.
