Tesla is holding Autonomy Investor Day.
They are revealing the Full Self Driving Computer and going over all of the technical details.
Tesla created custom hardware purpose-built for a full self-driving computer, and it is writing software specifically to maximize that custom hardware.
Elon Musk indicates that later in the presentation and demos they will show why LIDAR is the wrong solution for full self-driving. He says LIDAR adds cost and complexity that is not needed and does not help.
Elon talked about the importance of keeping power consumption and heat dissipation low.
The FSD system handles 21 times as many frames per second: 2,300 frames per second versus 110 for the previous Tesla Hardware 2.5. The system is about 7 times the frame rate of the Nvidia Xavier drive system. All Teslas being produced right now use the new system. Tesla switched the Model S and X over one month ago, and the Model 3 changed 10 days ago.
FSD needs a higher frame processing rate to handle the inputs from eight cameras and other sensors.
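A quick back-of-envelope check of the frame-rate figures quoted above. The numbers are the ones reported from the presentation; the variable names and the sanity checks themselves are mine, not Tesla's:

```python
# Frame-rate figures as reported from the presentation.
HW25_FPS = 110    # previous Tesla Hardware 2.5
FSD_FPS = 2300    # new FSD computer

# Speedup over the previous hardware (the article's "21 times" claim).
speedup_vs_hw25 = FSD_FPS / HW25_FPS
print(f"FSD vs HW2.5: {speedup_vs_hw25:.0f}x")  # ~21x, matching the claim

# Tesla claims roughly 7x the Nvidia Xavier drive system, which would
# imply Xavier processes on the order of:
implied_xavier_fps = FSD_FPS / 7
print(f"Implied Xavier throughput: ~{implied_xavier_fps:.0f} fps")
```

The 21x figure checks out directly (2,300 / 110 ≈ 20.9); the implied Xavier number is only an inference from the stated ratio, not a figure Tesla quoted.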
Tesla is halfway through the work on the next-generation system. They completed this FSD design about one year ago and then switched to developing the next system. The next-generation system is about two years away and will be three times better.
Full Self Driving Chip and the System
The FSD will fit behind the glove box and will not take up half of the trunk.
Everything is redundant with the system. Cameras or computers could fail and the system will keep working.
The chip is half the size of a GPU and delivers 72 TOPS (trillion operations per second) of performance.
The neural network accelerator raises processing to 2,100 frames per second, compared to only 17 frames per second for a 35 GOPS GPU.
They use an SRAM array to keep memory operations fast and low-energy, with communication channels of two terabytes per second.
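The chip figures above imply a rough per-frame compute budget. This is my own derivation from the quoted numbers (72 TOPS, 2,100 fps for the accelerator, 35 GOPS at 17 fps for the GPU comparison), not a figure from the presentation:

```python
# Per-frame compute budget implied by the quoted chip figures.
FSD_OPS_PER_SEC = 72e12   # 72 TOPS
NN_FPS = 2100             # accelerator throughput

ops_per_frame = FSD_OPS_PER_SEC / NN_FPS
print(f"~{ops_per_frame:.1e} ops available per frame")  # ~3.4e10

# The GPU comparison point: 35 GOPS at 17 fps gives a far smaller budget.
GPU_OPS_PER_SEC = 35e9
GPU_FPS = 17
gpu_ops_per_frame = GPU_OPS_PER_SEC / GPU_FPS
print(f"~{gpu_ops_per_frame:.1e} ops per frame on the GPU")  # ~2.1e9
```

So the accelerator has roughly an order of magnitude more compute to spend on each frame than the GPU baseline, on top of processing vastly more frames.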
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
14 thoughts on “Tesla Autonomy Investor Day Live and the Full Self Driving Computer”
Everything I heard said that it can already churn 1000+ fps which is more than enough.
Just a statistical application trained with much more data than a human can hope to do. At some point, the system will outclass even the best human beings, just like in other fields.
Improving upon the average human driver will not take long.
The biggest problems are probably adverse conditions like snow, rain and other unusual circumstances. This is a problem only because there is much less training data available. Data is always the primary success factor in machine learning applications.
Exactly. Not my words, Elon's. You can check his Twitter account.
Hence the need for a higher frame rate.
Indeed, you do not need a laser to estimate distance. Also you cannot describe why, or how you actually do it – you cannot draw a schematic of your brain, build a model and sell it to Tesla. Hence it is irrelevant that even a bug with a 100k neurons can do what Tesla and others dream of doing, as there is no schematic, no code, nothing – impossible to reverse engineer. It is essentially magic.
I know. Radar does not and cannot provide a comparable image resolution due to its comparatively large wavelength. Also, radio wave propagation and the reflectivity of materials would severely limit the usefulness of a system reliant on radars. No driving in rain at all, for example. Some common structural materials absorb or diffract radar emissions, or are transparent to them. Moreover, numerous radars in the same band would severely interfere with each other, effectively disabling all such systems in traffic. That is physics, and not a subject for improvement.
That’s pretty much how humans operate as well. We seem to be able to estimate speed and distance alright. I don’t need a laser range finder to tell me not to pull out in front of a speeding Buick.
The car has radar as well.
According to some estimates, the next-gen FSD has the same processing power as a cat's brain. Its workings are completely different though, which bodes well for more predictable and less independent behavior.
It is ironic that they worry about redundancy of hardware, while that hardware runs inherently error-prone neural networks without physical reference points. What lidar provides, despite all its limitations, is physical distance measurement: the reference point for everything else that a system does, including planning. Without it, a neural-network-based system cannot be trusted with a critical task, or any task with value at risk, as it has no physical reference. Essentially it is blind, despite the abundance of cameras. Until it bumps into something that triggers physical sensors (INS, etc.), it operates exclusively in a neural fantasy domain without any anchors in the physical domain, where the actual car exists, along with everything else. It is good that they are so open about it though; it makes decision making simple and easy: avoid.
If the latest FSD computer can get the job done, did anyone catch why they are building a new computer that is 3 times better? Why? He spoke about their Dojo system, which would learn from video rather than still frames, and how eventually the neural net would all but consume the software. I suppose this is all a quest for the "tale of 9s": 99.9999999999999999% reliable, etc.
If anyone can find my SPIE presentation (Neural Engines Corp circa 1990) it will show a recirculating video processing architecture based on DataCube convolution/LUT hardware not unlike what Tesla put on their neural net chip. The talk was titled some variation on “Neural Image Segmentation of Multisource Data”.
That’s the beauty of electronics. They are so easily repurposed.
It missed its true calling. It would be a great computer for a hunt-and-kill robot.
Comments are closed.