Tesla Dojo AI Training Supercomputer

The Tesla Dojo team presented at the Hot Chips 34 conference.

Tesla has designed the IO (input/output), memory, software, power delivery, cooling, and all other aspects of the system to scale together. This will let them reach the Exaflop level and beyond simply by building and adding more compute tiles.

About 120 compute tiles would deliver roughly 1.1 Exaflops, and 120,000 compute tiles would deliver 1,100 Exaflops (1.1 Zettaflops).
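
As a rough sanity check, 1.1 Exaflops across 120 tiles implies about 9 Petaflops per tile, which is consistent with the ~9 PFLOPS (BF16/CFP8) per training tile that Tesla has cited. A minimal sketch of that arithmetic, assuming the per-tile figure:

```python
# Back-of-the-envelope check of the tile arithmetic above.
# Assumption: ~9 PFLOPS (BF16/CFP8) per training tile; actual
# throughput depends on precision mode and clocks.

PFLOPS_PER_TILE = 9.0  # approximate BF16/CFP8 throughput of one tile

def total_exaflops(tiles: int) -> float:
    """Aggregate compute in Exaflops for a tile count (1 EFLOPS = 1000 PFLOPS)."""
    return tiles * PFLOPS_PER_TILE / 1000.0

print(total_exaflops(120))      # ~1.08 Exaflops, matching the ~1.1 figure
print(total_exaflops(120_000))  # ~1080 Exaflops, i.e. roughly 1.1 Zettaflops
```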

Tesla will reveal actual compute levels and other metrics at Tesla AI Day 2022 on September 30, 2022.

SOURCES: Anastasi In Tech, Tesla, Hot Chips 34, Lex Fridman interview with Jim Keller
Written by Brian Wang, Nextbigfuture.com

5 thoughts on “Tesla Dojo AI Training Supercomputer”

  1. The gravy train stops when the machine learning rig reaches ridiculous size, power dissipation, and/or cost.

    So far, they seem to have found ways to keep training ever-bigger models, but the size, power, and cost are definitely trending upwards.

    Another (AI) Winter is coming.

  2. I have this funny feeling that Dojo is the precursor to something like the positronic brain. Hopefully, Neuralink can catch up so we don’t get terminated.

  3. Cooling is easy! Even a kindergartener can do it! Memory is a problem! IBM does not use PCI buses! Two more were made! And the problem is promised to be solved in software by 2030!

  4. You are a genius! I read about this Dojo and didn’t get any information! You, on the other hand, managed to present all the information comprehensibly and comprehensively in a compact space! Thanks, good information for the presentation!

  5. Cooling is a big issue for scalability. An engineer apparently stole some of their cooling designs, a case they are now prosecuting.

    Their interconnect fabric is also a big scalability issue.
