Tesla’s Technological Moats for the Win

Ark Invest identified four technological moats that Tesla is growing for its electric car and truck businesses.

1. Superior batteries

2. AI (Full Self Driving) Chip

The Tesla FSD is an automotive-grade computer powered by two custom SoCs (systems-on-chip). The SoCs use commodity ARM CPUs and GPUs, augmented by a Tesla-designed neural-net accelerator capable of performing 147 trillion operations per second, sufficient for fully autonomous driving.

Nvidia Pegasus is Nvidia’s Level 5 self-driving computer. It uses two Xavier SoCs and two Turing-class GPUs. Pegasus is much larger, more expensive, and more power-hungry than the Tesla FSD computer: it draws about 500 watts, roughly seven times more than the Tesla FSD.
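
As a rough sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. It derives the Tesla FSD computer’s implied power draw from the seven-times ratio above and compares performance per watt; the 320 TOPS Pegasus rating is Nvidia’s published number and is not stated in this article.

```python
# Back-of-the-envelope comparison using the figures quoted above.
pegasus_watts = 500.0   # Pegasus power draw quoted above
pegasus_tops = 320.0    # assumption: Nvidia's published Pegasus rating
tesla_tops = 147.0      # Tesla FSD throughput quoted above

tesla_watts = pegasus_watts / 7.0   # article: Pegasus draws ~7x more power
print(f"Implied Tesla FSD power draw: ~{tesla_watts:.0f} W")
print(f"Tesla FSD: {tesla_tops / tesla_watts:.2f} TOPS per watt")
print(f"Pegasus:   {pegasus_tops / pegasus_watts:.2f} TOPS per watt")
```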

Nvidia’s next-generation self-driving computer, dubbed “Orin”, shrinks Pegasus from four chips to two while achieving the same level of performance. Orin is expected to be released in 2021 and integrated into competing cars in 2023.

Tesla should have a version two FSD in 2021 and a version three in 2023.

3. Miles of Data Collected

Tesla has collected thousands of times more self-driving data than its competitors.

4. Over-the-Air Updates

Tesla is able to update the software in its vehicles over the air.

Tesla shipped an over-the-air update that improved braking distance by 19 feet.
It has also shipped over-the-air updates that improved acceleration and driving performance.

34 thoughts on “Tesla’s Technological Moats for the Win”

  1. Hopes, dreams and faith-based funding are a fickle basis to build on. Chasing nines, which can never be objectively verified, is in the same category.

    It will not make sense, but I will write this anyway, for my own reasons that I will not share. The funding-worthy work in this area should start with a probability of functionality of one or better, in other words guaranteed functionality with an extra safety margin. That is where people may start trusting autonomous machines. Another level is added loyalty, something that is non-existent today and a matter of debate. Should a car kill its passenger, or a bunch of people jumping in front of it? The mere existence of such a debate is poison to the entire concept of autonomous transport. In summary, the obstacle to that is the wrong people setting the wrong objectives, and people creating wrong solutions to the wrong problems. That is what we have now: a car that can crash into the truck in front of it, killing its passenger, but that has a huge display and can take care of your dog when parked. I know the “autopilot” is not what the word means, as officially stated by Tesla. But if that is the state of the art, then the art does not exist.

  2. Those guys (not just in Google) sit there for legal reasons, as a safety feature if the car does something dangerous, and as a “backup control system” in case of failure. The car was driving, those guys were observing – clearly the car cannot be dangerous at near-zero speed, and clearly there was no failure as it was functional.

    Most humans are entirely reactive and instinctive (wrong word, but close enough for general discussion) while driving, as prediction is a highly costly process for the human brain, and the brain strongly discourages it. Also, taking it easy is strongly motivated by endogenous narcotics of the kinds that are very much illegal “exogenously”. That explains part of the bus driver’s behaviour. The other part is explained by one of three primary algorithms, called “dominance”. In human driving in places like India, “big is right” is said to be the informal road rule. There was also an element of “that dumb computer versus ME”, which is a common failing of pilots. Put that together, and the mystery of the zero-speed collision is resolved.

  3. But the human sitting in the seat says that he was also assuming the bus would stop. So if the human was driving he would have made the same error.

    Also, I’m not convinced that humans do anything to predict others’ actions (when driving) that isn’t just following some rough algorithms.

  4. Probably because Porsche will come out with an over-the-air patch to give you another 30 miles six months after release. If you overpromise at the beginning, it opens you up to fraud suits.

    Nobody buys a Porsche for range or economy. They buy it because it is an electric supercar that made it through the Nürburgring without dying (unlike the Model S), and based on the reservations, Porsche has more than a year of production reserved.

    Apples & Oranges.

  5. When is Tesla going to teach their cars not to plow into police/fire vehicles on the left-hand shoulder? Because that has been happening for years.

  6. As for Over The Air updates being some kind of moat…

    Sheesh. Ford is equipping most of their cars for this in 2020. I mean if Ford can do it then it can’t be that hard.

  7. It’s an opening for malicious hackers to remotely initiate a software attack. There are likely security features in place to prevent that, but no defense is 100% perfect.

  8. Looked it up. Apparently they’re counting Z’ers earlier than I thought, so you’re technically correct. But those early Z’ers are right on the edge of the statistic, and even they didn’t encounter AI until their teen years. The later Z’ers are growing up with AI from an early age.

    edit: Further lookup says some researchers count those early Z’ers as late Millennials (born late ’90s to early 2000s). Urban Dictionary calls them Zennials.

  9. That guy was an engineer observing the car’s behaviour. He was not supposed to be driving, and he was not. The Google car did not make any driving mistake; it was the bus driver’s attitude that created the incident, which is the point: driving by the rules is a very short path to a crash, as driving is dealing with humans, not with the rules or road signs. Algorithms are not good at that.

  10. I don’t know what the bicycle rider incident you refer to is about, but Google’s collision with a bus at near-zero speed had a human sitting in the Google car at the time, who saw the bus and says that he also assumed the bus would not hit them.
    So that is not a good example of the AI making a mistake that a human would not.

  11. Your description of human drivers does nothing to convince me that robots would be worse. If anything your description indicates that it would be easy to program a better driver.

  12. The cause of, and the solution to, such security issues? Most cars run unpatched software from release and are supposed to be updated through dealers, but often are not.

  13. It is true that the trustworthiness of NNs is rooted in the number of nines added after the decimal point, and Elon and the whole industry are aware of this. They call it chasing nines. Whether they will be successful in the final analysis, or as fast as they hope, is a fair question. A good article on this, with some pertinent comments: https://arstechnica.com/science/2019/12/how-neural-networks-work-and-why-theyve-become-a-big-business/
    I would say that Tesla’s combo of good hardware, large data access, live updates and good integration with human drivers gives them the best chance of a successful and smooth transition to fully capable self-driving, regardless of the time involved.

  14. In some situations there isn’t much you can do. And (if I read your earlier comment right) humans don’t handle them well either. In some of these, an autonomous car can still perform better, since it can have a faster reaction time and more accurate control.

    But in the cases where you can do something – even if not a perfect solution – the appropriate strategies can be applied once engineers have more real-world examples to work with. These will be worked out over time.

  15. You can try it yourself. Keep “enough” distance, and count the seconds until that distance is filled by another car with a human inside. In my experience, it takes less than 30 seconds. A program trying to keep that distance will eventually have to stop the car in traffic, which would go against quite a few other rules in it. The truth is humans do not care about rules as long as they can get away with it. I have seen people stupidly driving through a red light with people crossing on their green. How much training does one need to not move off and drive on red through a pedestrian crossing with people on it? What I saw in the eyes of those drivers was stupor and lack of control, but they were stepping on the gas anyway. They should never have been given licenses, but that is another problem. It makes no sense at all, hence training a self-contradicting rule may not even be possible, as the dataset will never have sufficient quality or size. Formalising it as a rule is possible on the assumption that each and every car is a hostile incoming projectile. Humans are the problem, not the control system. That is why flight is the ultimate solution.

  16. I think they are, since both the engineers and the AI are still in the learning process.

    A common problem with today’s driving AI is that it follows rules too rigidly. Humans are more flexible, and more chaotic. But future iterations of driving AI can account for that.

    For example, if you know that a vehicle can brake unexpectedly, you make sure to keep enough distance so you won’t hit it (factored by likelihood and other considerations), or adjust your speed. Similar for sudden turns or merges. Current AI models probably expect everyone to follow the rules, and don’t take such behaviors into account – yet. (A rough sketch of this idea follows at the end of this comment.)

    AI could also learn which rules can be bent in which situations and how far. Then it could both anticipate such rule breaking, and allow it in its own behavior.

    These types of adjustments and fine-tuning can only be done based on real-world experience. We’re still not at the final implementation. These are problems with the current iteration, and will be addressed over time.
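
    A toy sketch of the “keep enough distance, factored by likelihood” idea above. All function names, parameters and numbers here are made up for illustration and are not anyone’s actual planner:

    ```python
    # Hypothetical risk-adjusted following gap: the base gap covers reaction
    # time plus braking distance, then gets padded as the estimated chance of
    # erratic behaviour (sudden braking, cut-ins) rises.
    def following_gap_m(speed_mps, erratic_prob=0.0,
                        reaction_s=1.5, decel_mps2=6.0, max_pad=0.5):
        base = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
        return base * (1.0 + max_pad * erratic_prob)

    # 25 m/s (~56 mph): calm lead car vs. one judged 40% likely to brake hard.
    print(round(following_gap_m(25.0), 1), "m")                    # ~89.6 m
    print(round(following_gap_m(25.0, erratic_prob=0.4), 1), "m")  # ~107.5 m
    ```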

  17. These are not birthing or teething pains. The Google car drove by the rules, at near-zero speed. The bus driver had an attitude that resulted in an avoidable collision. That bicycle rider stopped in front of a car because he felt like doing it, and perhaps he had the delusion of being surrounded by an invulnerability bubble, along with the delusion that machines have Asimov’s laws in them. Those were not accidents; those cars did not violate rules – human attitude did. These problems cannot be solved by any improvements in cars.

    Actually, there is one solution, but it is not socially or politically acceptable: humans must fear interfering with machines, while knowing that machines follow rules. Today anyone with a pulse knows that trains do not chase people, but if one steps in front of a train, it is certain death. People do not mess with trains, unless they wish to die. That is how cars should be: safe within common rules, while not accommodating infantile behaviour and stupidity. Certainly, the first one who dies because of infantile behaviour or stupidity will be the martyr and the political catapult for any thug capable of making up a slogan like “reclaim the streets!”, “taxpayers first!” and so forth. And that thug will win elections like a charm.

  18. > Google’s collision with a bus at near-zero speed, and the dumbass bicycle rider that intentionally stopped in front of a car because he is so sure he would not be killed, are the best illustration for that.

    I see these as birthing and early childhood pains. The programming will get better over time, and so will the hardware. Just 10-15 years ago, cars could barely drive themselves at all, and it was considered nearly impossible. Give it another 5-10 years. People deliberately “brake testing” vehicles are rare, and IMO not worth consideration (in the sense that it doesn’t apply to the general public).

    > There were studies of generation bias. I do not recall the details, but there was no love for self-driving cars (or planes) from any generation.

    None of the generations that could participate in such studies grew up around AI. The first generation that is growing up with AI isn’t old enough yet. But even among older generations, attitudes change over time as new information comes in. I expect the younger ones will be more flexible, even if there is indeed little difference in current attitudes.

  19. Yes, but different types of technology, and different levels, types, and proximity of interactions. In particular, the latest generation (which isn’t old enough to drive yet) is growing up in close interaction with AI technologies, which the previous generation didn’t encounter until adulthood.

  20. In simple terms, NNs cannot be trusted. Classifying cats or tumors with 99% confidence is acceptable. Making decisions every second of driving with 99% confidence is unacceptable, and 99.99% is also unacceptable. Lawyers will have a field day with the makers of such cars, as they did with Toyota over the unintended-acceleration foul-up that cost them over a billion dollars. And that was just bad engineering, not an intentionally accepted safety risk with known lethal outcomes. I would love to consult lawyers on such cases, for a small share of the win, and also for the fun and giggles of tearing apart the piss-poor engineering practices in public. Personal responsibility is sorely lacking in modern engineering, and is totally absent in software (that is not even engineering), while personal responsibility for one’s actions is a basic prerequisite for adulthood.
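
    To illustrate the arithmetic behind that point (the decision rate and reliability levels below are illustrative assumptions, not measurements of any real system): at one decision per second, even several nines of per-decision reliability leaves a meaningful chance of at least one wrong decision in a single hour of driving.

    ```python
    # Probability of an error-free hour of driving (~3600 per-second decisions)
    # at various per-decision reliability levels.  Pure arithmetic illustration.
    decisions_per_hour = 3600
    for reliability in (0.99, 0.9999, 0.999999, 0.99999999):
        p_clean = reliability ** decisions_per_hour
        print(f"{reliability:<10}  P(no errors in an hour) = {p_clean:.6f}")
    ```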

  21. Humans do not know how they handle them. It is improvisation, instinct, blind luck or the lack of it, stupor or panic. In short, not well. In the rare cases of people who should be closely examined for psychopathy, edge cases may be handled masterfully.
    Miles per accident, and comparison with human performance, will not be enough. In the end, the road just cannot be shared between humans and machines – they do not cooperate, at all. Google’s collision with a bus at near-zero speed, and the dumbass bicycle rider that intentionally stopped in front of a car because he is so sure he would not be killed, are the best illustration of that.
    There were studies of generation bias. I do not recall the details, but there was no love for self-driving cars (or planes) from any generation.
    The only way I see for human drivers to join horses is by separating human and machine traffic in space: humans can drive, while machines will fly. Humans cannot fly due to the effort and cost of training; machines can come out of the factory as the best pilots ever. Then the convenience of direct and fast transit will nullify the motivation to drive. If you look at the map (I did) and calculate what becomes possible, the Sunday drive appears to be doomed soon after the Sunday flight is available. It would replace three hours in traffic with a 15-minute joyride, there and back. Not a hard choice at the same cost.

  22. “One pixel attack”??!! That will take some explaining, since NNs work on such a wide variety of input images, audio, video, or whatever they are processing. That is why everyone is buying neural-net devices/services and why big tech is pouring so much research $$ into them.
    Watching the video, the “one pixel” must be determined by having access to the confidence values in the NN, and done with direct adversarial intent. In other words, not the case with a car trying to avoid children running in front of it. Children are generally not trying to get hit, i.e. they are not adversaries, and the cars have their AI driving-assist computers behind many layers of hacking protection against any actual adversaries trying to break into them.

  23. VAG definitely needs to step it up with the ranges; I don’t know why they are so bad for the size of the battery packs they use. Besides the E-tron, the Porsche Taycan just got its official EPA rating – 201 miles out of a 93 kWh pack. Which is just embarrassing when the much larger and heavier Tesla Model X manages 328 miles out of a 100 kWh pack, to say nothing of the high-end Model S ranges.

  24. How well do humans handle the edge cases? And for that matter, just how common are those edge cases? There’s probably a distribution curve.

    At the end of the day, there is one very simple statistic that can prove self-driving safety, and that is accidents per X miles. You can break that up by severity, or replace it with different grades of injuries per X miles. It automatically includes all the edge cases.

    These statistics are easy to collect and easy to compare. If (or more likely when) it starts showing a significant advantage to self-driving vehicles, they will start gaining favor both among logical drivers and among regulators. It may take a while, but human-driven vehicles will go the way of the horse.

    I expect that there’s also a generation bias: younger generations are more open to technological solutions and trust them more easily, because they grew up surrounded by technology.

  25. One design, unchanged, requires that much, according to the maker of the most reliable cars (which is not Tesla). Tesla makes frequent field updates to its systems, meaning it can never, ever, get even close to that. The same could be said for any similar car today and tomorrow, with the possible exception of Toyota, which is not even close to making one yet.

    “Up to now, our industry has measured on-road reliability of autonomous vehicles in the millions of miles, which is impressive,” Pratt said at the Consumer Electronics Show in Las Vegas last January. “To achieve full autonomy we actually need reliability that’s a million times better. We need trillion-mile reliability.”

  26. Prove it. Teslababble is not proof. None of that is provable in principle, pure “trust me people!!!111” until the next victim of endless beta testing.
    It is a well-known fact that NNs are highly sensitive to any change in inputs, down to a single pixel in some shameful cases (the “one pixel attack”). Any changes made to an NN after training invalidate its training, even if it can still pass a few tests.

  27. “Any update or patch invalidates all previously collected data”.

    Wrong. Tesla is using ANNs for their SW and an update means that the new version can handle all “old” cases just as well as the previous version, but can also handle the new cases. This is the nature of ANNs…

    “..as each car brings over a terabyte per day..”

    Wrong. Tesla vehicles only send over new corner cases. When Tesla discovers a new case, they send a set of instructions to the fleet to look for it. Only the cars that have found instances of the new case send the information back to Tesla. It’s not an open pipe from every car back to Tesla.
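
    A purely hypothetical sketch of what such trigger-based fleet collection could look like. The trigger condition, field names and threshold below are invented for illustration; this is not Tesla’s actual mechanism:

    ```python
    # Hypothetical "campaign" filter: the fleet is told what to look for, and a
    # car only uploads clips that match the trigger, instead of streaming
    # everything back.
    from dataclasses import dataclass

    @dataclass
    class Clip:
        speed_mps: float
        driver_intervened: bool
        detected_objects: list

    def matches_campaign(clip: Clip) -> bool:
        """Example trigger: driver took over while a cyclist was detected at speed."""
        return (clip.driver_intervened
                and "cyclist" in clip.detected_objects
                and clip.speed_mps > 10.0)

    clips = [
        Clip(4.0, False, ["car"]),
        Clip(15.0, True, ["cyclist", "car"]),   # only this one would be uploaded
    ]
    to_upload = [c for c in clips if matches_campaign(c)]
    print(f"{len(to_upload)} of {len(clips)} clips flagged for upload")
    ```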

  28. Let’s assume that you are correct that 1 trillion miles of driving is needed as data, and that it would take 1 million cars 10 years to achieve it. It will not take Tesla very long to get there. Let me explain. Tesla already has about 0.5 million vehicles on the road with full self-driving HW. In 2020, 2021, 2022 and 2023 they will add another – conservatively – 650k, 850k, 1000k and 1200k.

    Summing millions of car-years through the end of 2023 (also scripted below), we obtain:
    0.5*5 + 0.65*4 + 0.85*3 + 1*2 + 1.2*1 = 2.5 + 2.6 + 2.55 + 2 + 1.2 = 10.85 million car-years

    So it would take Tesla *at most* 5 years to get to 1 trillion miles of data.

    Tesla probably already has enough data to make self-driving SW that is safer than a human. By 2025 the SW should be near perfect.
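
    The same arithmetic in script form, using the fleet sizes assumed above. The 100,000 miles per car per year figure is simply what the thread’s “1 million cars for 10 years = 1 trillion miles” premise implies, not a measured average:

    ```python
    # Cumulative car-years of FSD-hardware fleet exposure, per the assumptions above.
    fleet_additions = {2019: 0.5, 2020: 0.65, 2021: 0.85, 2022: 1.0, 2023: 1.2}  # millions of cars
    end_year = 2023

    car_years = sum(cars * (end_year - year + 1) for year, cars in fleet_additions.items())
    print(f"Cumulative fleet exposure by end of {end_year}: {car_years:.2f} million car-years")

    # The thread's premise: 1 trillion miles = 1 million cars * 10 years,
    # i.e. 100,000 miles per car per year.
    miles = car_years * 1e6 * 100_000
    print(f"Implied miles under that premise: {miles:.2e}")  # ~1.1e12, past 1 trillion
    ```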

  29. Anybody watch the YouTube video of the motor being made for the Audi E-tron? That level of automation probably has Tesla jealous… too bad the E-tron range sucks…

  30. And not a word on the only obstacle to full self-driving that matters: the edge cases. Those pesky occurrences that are not in the datasets, too strange for formalisation, too rare for sufficient statistics to be collected, but not so rare that they can be ignored. Was it Toyota who said they would need a trillion-mile test run to validate full self-driving? In a million-car fleet, that is a million cars over about ten years, just to get the possibility of having enough data. Any “update” or “patch” invalidates all previously collected data. What to do with it is another problem, as each car brings over a terabyte per day. It is obvious that the pursued solution is asymptotic, and will never converge on a commercially feasible trusted black box that knows how to drive.

    Comparisons with human drivers are irrelevant, as human drivers will not be bamboozled out of driving in favour of a black box no one really trusts. Even the military tempered their (well-substantiated) desire to have autonomous weapons, as troops distrust any black boxes that have direct control over their lives. It will come to a choice of who (or what) will be in control on the roads, as it can be either humans or black boxes, but not both. Even the limited experience accumulated by self-driving cars to this moment shows they do not mix. Needless to say, if or when it comes to a direct conflict of interests between humans and black boxes on the road, politics will serve humans, as they vote, while black boxes do not.
