Real Robotaxi Competition Won’t Be Coming After Tesla – BestinTesla $TSLA

I was interviewed by Lars at BestinTesla. We talked about why real robotaxi competition won’t be coming once Tesla solves robotaxi at scale.

Tesla has over 160,000 people currently using FSD, driving 10 million miles per quarter with it on surface roads. The new version 11 is being deployed; it replaces Navigate on Autopilot with the FSD software, so highway miles will be counted as FSD mileage. The first quarter of 2023 should see wider usage with highway miles included, which should push FSD miles to 50 million in that quarter alone and total cumulative FSD miles past 100 million.

If Autopilot for highways is fully replaced with FSD for highways, then the billions of miles per year driven using Autopilot will convert to FSD miles.

By mid-2023 there will be nearly 5 million Teslas on the road. The total miles driven will be over 6 billion miles per month.

Potentially, Tesla FSD could be fully operating or operating in shadow mode for all of those miles.
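As a quick sanity check on the fleet math above (using the article’s estimates; the variable names are mine):

```python
# Sanity check: ~5 million Teslas driving a combined 6 billion miles per
# month. Both figures are the article's estimates, not measured data.

fleet_size = 5_000_000            # Teslas on the road by mid-2023 (estimate)
fleet_miles_per_month = 6_000_000_000

per_car_monthly = fleet_miles_per_month / fleet_size
per_car_yearly = per_car_monthly * 12

print(f"{per_car_monthly:,.0f} miles/car/month")  # 1,200
print(f"{per_car_yearly:,.0f} miles/car/year")    # 14,400
```

The implied average of roughly 14,400 miles per car per year is close to typical annual mileage for a US driver, so the 6-billion-miles-per-month figure is at least internally plausible.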

The increase in data will increase the rate of improvement of the AI.

About 6 billion miles of safer-than-human driving data is the estimated amount needed to convince regulators that a system is safer than human drivers.

Competitors must deploy at least 30,000 robotaxis and drive each about 200,000 miles, or some other combination, to get to 6 billion miles.
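The fleet-size versus per-vehicle-mileage trade-off is a simple division against the 6-billion-mile threshold. A minimal sketch (the threshold is the article’s figure; the helper name is mine):

```python
# Fleet-size vs. per-vehicle-mileage trade-off for reaching a 6-billion-mile
# validation threshold (the article's estimate of what regulators may need).

TARGET_MILES = 6_000_000_000  # estimated miles of safer-than-human data

def miles_per_vehicle(fleet_size: int) -> int:
    """Miles each robotaxi must drive for the fleet to reach the target."""
    return TARGET_MILES // fleet_size

for fleet in (30_000, 300_000, 3_000_000):
    print(f"{fleet:>9,} robotaxis -> {miles_per_vehicle(fleet):>9,} miles each")
```

At 30,000 robotaxis this works out to 200,000 miles per vehicle; a tenfold larger fleet needs only 20,000 miles each.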

8 thoughts on “Real Robotaxi Competition Won’t Be Coming After Tesla – BestinTesla $TSLA”

  1. About a year ago, GM forecast they would have AVs commercial in 2023. To make this prediction, surely GM had confirmed to themselves that they could indeed manufacture commercial AVs, software excepted.

  2. “About 6 billion miles of safer than human driving data is the estimate for how much is needed to convince regulators that a system is safer than human drivers.

    Competitors must deploy at least 30,000 robotaxis and drive each about 200,000 miles, or some other combination, to get to 6 billion miles.”
    _____________
    Tesla currently has 0 robo-taxis accruing any miles towards the 6 billion figure.

    While I am unaware of any meaningful statistical proof of the dangers suggested by Mr. Bedichek, above, I don’t believe Tesla has a viable plan for legally getting any of their software approved beyond “Level 2” in the USA.

    They can likely create a very successful Level 2 ADAS system, but if you begin to suggest the software is more than that, then it has been participating in an unregulated, unscientific trial on public streets. Submitting that software for approval at “Level 5” would require admitting to an unregulated, non-scientific process conducted on public streets, with no oversight of data and results by any independent examiner.

    Or, they could submit something “new” for approval to a “regulated” testing process, and re-start a timer, while making some claim that the software is substantially different than what they have been testing on the roads as “Level 2” for years.

    I expect that Tesla will not submit the so-called “FSD” software for approval at higher than Level 2, ever. Also, human drivers will abuse their responsibilities as the drivers of that overly-ambitious Level 2 system.

  3. Why does Brian label Tesla as having anything to do with autonomy? As FSD improves, it will kill at a higher rate than the elevated rate it already does: the system will get more people to trust it, leading to their demise.
    Elon should be prosecuted as Holmes was, for massive fraud.
    Waymo and GM are making real progress.

    • Lol no. Waymo and GM aren’t hurting anyone because nothing uses their hardware/software.

      In general if the answer is “GM” then you have asked the wrong question.

  4. Of course. But, there are 2 quick points to keep in mind:
    1. Good researchers also have common sense. I can quickly look up 50 papers showing that a key area being actively pursued is unsupervised learning from few examples (just what you point out). This is not a fringe area with one or two researchers; it is a key area pursued by almost all large groups working on AI/ML. The difference is that they don’t just say “it would be good if…”; they are actually doing it. People sometimes assume that researchers are stupid and lack common sense, when typically what has happened is that they simply don’t know what the state of the art is, or what the focus areas of research are. That’s why one should always be humble and ask: “I wonder which group is pursuing this thing I am thinking about?” There is a great chance it is being pursued by many people, or has already been shown not to work or to have fundamental problems.
    2. People are demonstrably better (at this point) at unsupervised learning from few examples (as you point out). But machines are infinitely better at transferring the knowledge. It would take a person an enormous amount of time to teach, say, 10,000 other people to drive safely, but only seconds or minutes to push new driving ML software to millions of cars. That’s why we pursue this.

  5. There is another possibility and we already have the examples proving it is possible.

    Humans can learn to drive in far fewer miles than that. The generic human is not very safe, though, so the few drivers with a perfect record would have to be used as examples. (I’m one of them; I have been driving for 40 years without a single accident.)

    If AI research manages to emulate safe driving in a smarter way, the path to success will be cheaper. Maybe someone figures out how to send the FSD AI to a driving school.

    • The AI learns what not to do from bad drivers already. Few humans employ that learning method so it’s easy for the AI to beat them.

Comments are closed.