Tesla Autonomy Investor Day Live and the Full Self-Driving Computer

Tesla is holding Autonomy Investor Day.

They are revealing the Full Self-Driving Computer and going over the technical details.

Tesla created custom hardware solely for the purpose of full self-driving, and it is writing software specifically to get the most out of that custom hardware.

Elon Musk indicates that, later in the presentation and demos, Tesla will show why LIDAR is the wrong solution for full self-driving. He says LIDAR adds cost and complexity that is not needed and does not help.

Elon talked about the importance of keeping power consumption low and managing heat dissipation.

The FSD computer processes 2,300 frames per second, 21 times the 110 frames per second of the previous Tesla Hardware 2.5 and about 7 times the frame rate of the Nvidia Xavier drive system. All Teslas being produced right now use the new system: the Model S and X switched over one month ago, and the Model 3 changed 10 days ago.

FSD needs a higher frame processing rate to handle the inputs from eight cameras and other sensors.
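As a quick sanity check, the quoted figures hang together. Here is a minimal sketch using only the numbers above; the even per-camera split is an illustrative assumption, since the 2,300 fps is an aggregate figure, not a per-camera one.

```python
# Back-of-envelope check of the frame-rate figures quoted above.
# The per-camera split is an assumption for illustration only.

FSD_FPS = 2300     # new FSD computer, aggregate frames processed per second
HW25_FPS = 110     # previous Hardware 2.5
NUM_CAMERAS = 8    # cameras feeding the system

print(f"Speedup over HW2.5: {FSD_FPS / HW25_FPS:.1f}x")                 # ~20.9x, i.e. "21 times"
print(f"Aggregate budget per camera: {FSD_FPS / NUM_CAMERAS:.0f} fps")  # ~288 fps per camera
```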

Tesla is halfway through work on the next-generation system. They completed this FSD design one year ago and then switched to developing the next system, which is about two years away and will be three times better.

Full Self-Driving Chip and System

The FSD computer will fit behind the glove box and will not take up half of the trunk.

Everything in the system is redundant: cameras or computers could fail and the system will keep working.
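The presentation does not spell out how the redundant computers cross-check each other. A minimal sketch of one common pattern – two independent units computing the same plan, with output comparison and failover – might look like the following; compute_plan_a and compute_plan_b are hypothetical stand-ins for the two computers, not Tesla's actual code.

```python
# Sketch of dual-redundant computation with failover. This illustrates
# the general pattern only, not Tesla's actual design.

def compute_plan_a(frame):
    return {"steer": 0.1, "accel": 0.0}   # placeholder output from unit A

def compute_plan_b(frame):
    return {"steer": 0.1, "accel": 0.0}   # placeholder output from unit B

def drive_step(frame):
    try:
        a = compute_plan_a(frame)
    except Exception:
        a = None   # unit A failed; continue on unit B alone
    try:
        b = compute_plan_b(frame)
    except Exception:
        b = None   # unit B failed; continue on unit A alone

    if a is not None and b is not None:
        # Both units alive: act only when their outputs agree (within tolerance).
        if all(abs(a[k] - b[k]) < 1e-3 for k in a):
            return a
        return {"steer": 0.0, "accel": -1.0}   # disagreement: brake safely
    return a or b or {"steer": 0.0, "accel": -1.0}  # last resort: stop
```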

The chip is half the size of a GPU and delivers 72 trillion operations per second (72 TOPS).

The neural network accelerator raises the system to 2,100 picture frames per second processed, up from only 17 frames per second on a 35-GOPS GPU.

They use an on-chip SRAM array to keep memory operations fast and low-energy, with communication channels running at two terabytes per second.
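These figures can be roughly sanity-checked against each other. The sketch below uses only the numbers quoted above; the work-per-frame value is inferred from the GPU comparison rather than stated in the presentation.

```python
# Back-of-envelope arithmetic from the figures above. The ops-per-frame
# number is inferred from the GPU comparison, not stated directly.

GPU_OPS = 35e9     # 35 GOPS GPU
GPU_FPS = 17       # frames per second that GPU manages
CHIP_OPS = 72e12   # 72 TOPS per FSD chip
SRAM_BW = 2e12     # 2 TB/s of on-chip communication bandwidth

ops_per_frame = GPU_OPS / GPU_FPS       # ~2.1e9 ops per frame implied
accel_fps = 2100                        # accelerator throughput quoted
sustained = ops_per_frame * accel_fps   # ~4.3e12 ops/s actually demanded

print(f"Implied work per frame: {ops_per_frame / 1e9:.1f} GOP")
print(f"Sustained compute at 2,100 fps: {sustained / 1e12:.1f} TOPS "
      f"(vs {CHIP_OPS / 1e12:.0f} TOPS peak)")
# Ops the chip must extract per byte of memory traffic to stay
# compute-bound, which is why fast local SRAM matters:
print(f"Ops per byte at peak: {CHIP_OPS / SRAM_BW:.0f}")
```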

14 thoughts on “Tesla Autonomy Investor Day Live and the Full Self-Driving Computer”

  1. No magic.
    Just a statistical application trained with much more data than a human could ever hope to process. At some point, the system will outclass even the best human beings, just as in other fields.
    Improving upon the average human driver will not take long.
    The biggest problems are probably adverse conditions like snow, rain and other unusual circumstances. This is a problem only because there is much less training data available. Data is always the primary success factor in machine learning applications.

  2. Indeed, you do not need a laser to estimate distance. Also, you cannot describe why or how you actually do it – you cannot draw a schematic of your brain, build a model and sell it to Tesla. Hence it is irrelevant that even a bug with 100k neurons can do what Tesla and others dream of doing, as there is no schematic, no code, nothing – impossible to reverse engineer. It is essentially magic.

  3. I know. Radar does not and cannot provide comparable image resolution due to its comparatively large wavelength. Also, radio wave propagation and the reflectivity of materials would severely limit the usefulness of a system reliant on radars. No driving in rain at all, for example. Some common structural materials absorb or diffract radar emissions, or are transparent to them. Moreover, numerous radars in the same band would severely interfere with each other, effectively disabling all such systems in traffic. That is physics, and not a subject for improvement.

  4. That’s pretty much how humans operate as well. We seem to be able to estimate speed and distance alright. I don’t need a laser range finder to tell me not to pull out in front of a speeding Buick.

  5. According to some estimates, the next-gen FSD has the same processing power as a cat’s brain. Its workings are completely different though, which bodes well for more predictable and less independent behavior.

  6. It is ironic that they worry about redundancy of hardware while that hardware runs inherently error-prone neural networks without physical reference points. What lidar provides, despite all its limitations, is physical distance measurement – the reference point for everything else that a system does, including planning. Without it, a neural network-based system cannot be trusted with a critical task, or any task with value at risk, as it has no physical reference – essentially it is blind, despite the abundance of cameras. Until it bumps into something that triggers physical sensors (INS, etc.), it operates exclusively in a neural fantasy domain without any anchors in the physical domain, where the actual car exists along with everything else. It is good that they are so open about it, though – it makes decision making simple and easy: avoid.

  7. If the latest FSD computer can get the job done, did anyone catch why they are building a next-generation computer that is 3 times better? Why? He spoke about their Dojo system, which would learn from video rather than still frames, and how eventually the neural net would all but consume the software. I suppose this is all a quest for the “tale of 9s” – 99.9999999999999999% reliable, etc.

  8. If anyone can find my SPIE presentation (Neural Engines Corp circa 1990) it will show a recirculating video processing architecture based on DataCube convolution/LUT hardware not unlike what Tesla put on their neural net chip. The talk was titled some variation on “Neural Image Segmentation of Multisource Data”.
