The more compute you put into a neural network, the better the result you can get. So far there appears to be no limit to how effectively neural nets can use additional resources to produce better and faster results.
Tesla is motivated to develop bigger, faster computers that are precisely suited to their needs.
The Google TPU architecture has not evolved much over the last five years. The TPU chip is designed for the problems that Google runs; it is not optimized specifically for AI training.
Tesla has rethought the problem of AI training and designed the Dojo AI supercomputer to optimally solve their problems.
If Tesla commercializes the AI supercomputer, greater economies of scale will help drive down costs and increase compute power.
One of the reasons that TSMC overtook Intel was that TSMC was making most of the ARM chips for cellphones. The higher volume let TSMC learn faster, drive down costs, and accelerate its technology.
About 99% of what neural network nodes do is 8 by 8 matrix multiplication, with the remaining 1% resembling general-purpose computation. Tesla created a superscalar architecture to optimize for this compute load.
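To make the point concrete, here is a minimal sketch (plain Python with NumPy, not Tesla's actual kernels) of how a large matrix multiply decomposes into many small fixed-size tile multiplies, using the 8x8 tile size the text describes. The function name and structure are illustrative assumptions, not Dojo's real design.

```python
import numpy as np

def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 8) -> np.ndarray:
    """Compute a @ b by accumulating tile x tile sub-products.

    This mimics how AI accelerators break big multiplies into small
    fixed-size tiles (here 8x8); the bulk of the arithmetic lands in
    the innermost tile multiply, which is what dedicated hardware
    units accelerate.
    """
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    assert n % tile == 0 and k % tile == 0 and m % tile == 0, \
        "for simplicity, dimensions must be multiples of the tile size"
    out = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Each inner step is one 8x8 matrix multiply -- the
                # operation that dominates neural network workloads.
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32)).astype(np.float32)
b = rng.standard_normal((32, 32)).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)
```

The loop structure also hints at why the remaining 1% of general-purpose work still matters: the control flow around the tiles (indexing, accumulation, scheduling) is ordinary scalar code that the hardware must handle alongside the matrix units.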
SOURCES- David Lee Investing
Written by Brian Wang, Nextbigfuture.com (Brian owns shares of Tesla)
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.