Lex Fridman is best known for his very successful podcast, but he has also been an AI researcher for more than six years.
Lex was very impressed by the technical details of Autopilot and FSD revealed at Tesla AI Day. Lex says, “All of it is just brilliant.”
He thinks Tesla has all the pieces needed to solve full self-driving.
Lex thinks the Tesla Dojo training system could become an AI-training-as-a-service cloud that directly competes with Amazon AWS and Google Cloud.
Amazon’s Q2 2021 total revenue – across AWS, its online retail business and other activities – was reported at $113.1 billion, up 27 percent year-on-year. AWS accounted for $14.8 billion of this, itself up 37 percent. Amazon AWS will make over $60 billion this year. Tesla Dojo AI training as a service could be competing with this within three years.
Amazon Group operating profit climbed 31.8 percent to $7.7 billion, of which AWS contributed $4.2 billion versus the $3.4 billion AWS threw in the pot a year earlier.
Lex says there were big leaps forward in what Tesla is doing.
Tesla is going beyond image space into vector space.
Tesla is fusing the sensor data before the detections, which are performed by the multi-task heads of the neural networks. All of the machine learning is done on all of the sensors combined, instead of processing each sensor separately and merging the decisions afterwards.
Tesla is going beyond single images and vision to time: parsing video and performing sensor fusion.
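The difference between fusing decisions and fusing sensor data can be sketched in a few lines. Below is a minimal NumPy toy, not Tesla's actual architecture: all dimensions, weights, and task names are illustrative assumptions. Late fusion runs a separate detector per sensor and averages the results; early fusion concatenates the sensor features into one vector space, runs a shared backbone, then branches into multi-task heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature vectors from two sensors (dimensions are made up).
cam_front = rng.standard_normal(64)
cam_rear = rng.standard_normal(64)

# --- Late fusion: a separate detector per sensor, decisions merged after.
w_per_sensor = rng.standard_normal((64, 3))

def detect_per_sensor(features):
    return features @ w_per_sensor

late = (detect_per_sensor(cam_front) + detect_per_sensor(cam_rear)) / 2

# --- Early fusion: concatenate features into one shared vector space,
# run a shared backbone, then branch into multi-task heads.
fused = np.concatenate([cam_front, cam_rear])   # (128,) shared space
w_backbone = rng.standard_normal((128, 32))
shared = np.tanh(fused @ w_backbone)            # shared representation

w_objects = rng.standard_normal((32, 3))        # hypothetical task head 1
w_lanes = rng.standard_normal((32, 2))          # hypothetical task head 2

objects = shared @ w_objects
lanes = shared @ w_lanes

print(late.shape, objects.shape, lanes.shape)   # (3,) (3,) (2,)
```

The key design point is that in the early-fusion version, every task head sees evidence from every sensor, whereas in late fusion each detector can only reason about its own sensor's view.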
Lex said Tesla showed many brilliant AI innovations.
Lex says the innovations on the neural network architecture and data annotation/labeling side are a big leap.
Tesla is using simulation to cover rare situations that are hard to capture even with a large real-world data set.
James Douma is another well-known AI expert. He is also impressed with what Tesla is doing.
Rodney Brooks created iRobot and Rethink Robotics. He is negative on the Tesla Bot. Rethink Robotics lasted ten years, from 2008 to 2018, and its assets now belong to the HAHN Group. iRobot is publicly traded and valued at about $2.2 billion, with $1.2 billion in annual revenue – roughly comparable to Tesla's Autopilot and FSD revenue.
Rodney thinks that in ten years (August 19, 2031) the Tesla Bot announcement will be viewed as insignificant.
Rodney Brooks maintains dated (time-stamped) predictions on self-driving cars and electric cars.
It's a deal. August 19, 2031. My prediction is that this project will not register at all in any history of the previous ten years. So we'll understand that it was a totally insignificant announcement. https://t.co/Xn1pXk5kG9
— Rodney Brooks (@rodneyabrooks) August 20, 2021
I meticulously document my predictions, e.g., see the dated predictions section at https://t.co/5DMzJgTPmq, and essays on AI and self driving there, plus many articles in IEEE Spectrum and MIT Technology Review (google them) https://t.co/0mrRLyYDAX for writings back to the 80's. https://t.co/vEhZXDp3Rl
— Rodney Brooks (@rodneyabrooks) August 21, 2021
Full Tesla AI Day
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Software tooling is a big hurdle to getting people to jump to someone else's NN/GPU cloud, and currently there is a lot of institutional inertia surrounding Nvidia's CUDA ecosystem. There hasn't exactly been a huge rush to get on Google's TPU cloud, despite the alleged efficiency and throughput improvements. Though, knowing the field, they'll whip out an updated version of PyTorch post-haste if a Dojo cluster is opened up, which would let lesser mortals mess around with it.
With the world's most powerful, 1-exaflop NN-training AI supercomputer soon operational, I am wondering if they will want to use it to train a 100-trillion-parameter (human-brain-scale) model and ultimately try to create proto-AGI or even AGI.
GPT-3 is 175 billion parameters, and the world's largest model at this moment has around 1.7 trillion parameters, below cat-brain-scale. Approaching 100 trillion would give us enormous insight into whether this is the way to create AGI, and with such a powerful AI training supercomputer they could do it in a matter of months, maybe even weeks.
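The commenter's timescale claim can be sanity-checked with the widely cited back-of-envelope heuristic that training compute is roughly 6 × parameters × training tokens. Everything below is an illustrative assumption (an ideal sustained exaflop, GPT-3's roughly 300-billion-token training run reused as the token count), not a Tesla or OpenAI figure.

```python
# Back-of-envelope training-time estimate using the widely cited
# heuristic: training FLOPs ~= 6 * parameters * training tokens.
# All numbers are illustrative assumptions, not Tesla figures.

def train_days(params, tokens, flops_per_sec):
    total_flops = 6 * params * tokens
    return total_flops / flops_per_sec / 86_400  # seconds per day

EXAFLOP = 1e18  # 1 exaflop/s sustained (optimistic: no utilization losses)

# GPT-3 scale: 175B parameters, ~300B training tokens.
gpt3 = train_days(175e9, 300e9, EXAFLOP)

# Hypothetical 100-trillion-parameter model, same token count.
brain_scale = train_days(100e12, 300e9, EXAFLOP)

print(f"GPT-3 scale: {gpt3:.1f} days")         # a few days
print(f"100T params: {brain_scale:.0f} days")  # several years
```

Under these assumptions a GPT-3-scale run fits in days, while a dense 100-trillion-parameter run at the same token count would take years, so "weeks" would require sparsity, far fewer tokens, or considerably more than one exaflop.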