Tesla’s Hidden Compute Power

I did a video with Herbert Ong of Brighter with Herbert. I explained why the chart from last year showing Tesla reaching 100 exaflops of compute in October 2024 is out of date. Elon Musk said that both xAI and Tesla had over 30,000 H100 chip equivalents. Nvidia H100 chips each deliver on the order of a petaflop of compute. …
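The headline numbers follow from simple multiplication. A minimal back-of-envelope sketch, assuming roughly 1 petaflop of dense FP16 per H100 and roughly 4 petaflops of sparse FP8 (assumed per-chip figures, not from the article):

```python
# Assumed per-H100 throughput figures (approximate, for illustration only)
H100_FP16_PFLOPS = 1.0        # ~1 petaflop dense FP16 per H100
H100_FP8_SPARSE_PFLOPS = 4.0  # ~4 petaflops sparse FP8 per H100

gpus = 30_000  # "over 30,000 H100 chip equivalents"

# 1 exaflop = 1000 petaflops
fp16_exaflops = gpus * H100_FP16_PFLOPS / 1000
fp8_exaflops = gpus * H100_FP8_SPARSE_PFLOPS / 1000

print(f"{fp16_exaflops:.0f} EFLOPS FP16, {fp8_exaflops:.0f} EFLOPS sparse FP8")
# → 30 EFLOPS FP16, 120 EFLOPS sparse FP8
```

On those assumptions, 30,000 H100 equivalents lands in the tens-of-exaflops range at FP16 and past 100 exaflops at sparse FP8, which is why the older 100-exaflop projection chart is stale.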

Read more

Snapshot of the Race for More AI Compute

The number of Nvidia H100 and other Nvidia chips that an AI company has represents that company's AI compute resources. Elon indicated that a recent chart showing Meta leading on GPU count and Tesla trailing at 10,000 H100 GPUs was out of date. Microsoft and OpenAI would also have higher GPU counts. It is unclear why Microsoft …

Read more

Tesla Not Compute Limited for FSD AI Means 100+ Exaflops and 1+ Exabyte of Cache Memory

Tesla indicated in August 2023 that it was activating a 10,000-GPU Nvidia H100 cluster with over 200 petabytes of hot cache (NVMe) storage. This storage is used to train the FSD AI on a massive amount of video driving data. Elon Musk posted yesterday that Tesla FSD training is no longer compute constrained. Tesla has likely activated …
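To get a sense of scale for 200 petabytes of hot cache, here is a rough sketch assuming an aggregate multi-camera video stream of about 5 MB/s (an assumed bitrate for illustration, not a Tesla figure):

```python
PB = 1e15  # bytes per petabyte (decimal)

cache_bytes = 200 * PB          # ~200 PB of NVMe hot cache
assumed_bytes_per_sec = 5e6     # assumed ~5 MB/s multi-camera video stream

seconds = cache_bytes / assumed_bytes_per_sec
years = seconds / (3600 * 24 * 365)

print(f"~{years:,.0f} years of continuous driving video")
```

Under that assumption the cache holds on the order of a thousand-plus years of continuous driving footage, which is why this tier matters as much as raw GPU count for video training.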

Read more

Nvidia Blackwell B200 Chip is 4X Faster than the H100 – 1 Exaflop in a Rack

The NVIDIA Blackwell platform was announced today. It will run real-time generative AI on trillion-parameter large language models at up to 25x lower cost and energy consumption than the H100. The Blackwell GPU architecture has six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided …
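The "1 exaflop in a rack" headline can be sketched with assumed launch figures: a GB200 NVL72-style rack with 72 Blackwell GPUs at roughly 20 petaflops of FP4 inference per GPU (both figures are assumptions from Nvidia's launch materials, not from this excerpt):

```python
gpus_per_rack = 72          # assumed GB200 NVL72-style rack
fp4_pflops_per_gpu = 20.0   # assumed ~20 PFLOPS FP4 inference per Blackwell GPU

# 1 exaflop = 1000 petaflops
rack_exaflops = gpus_per_rack * fp4_pflops_per_gpu / 1000

print(f"{rack_exaflops:.2f} EFLOPS FP4 per rack")
```

On those assumptions a single rack lands around 1.4 exaflops of low-precision inference compute, consistent with the exaflop-per-rack framing.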

Read more

Nvidia’s AI Platform for All Major Humanoid Bots Except Teslabot

Nvidia has made a software, AI and hardware platform to make developing humanoid robots far faster and easier. They have software to help with the testing, learning and development. …

Read more

All in Podcast Makes the Case for the AI Decade and Terminal Nvidia Value of $10 Trillion

The All in Podcast had a discussion and debate about the AI decade. They compare the current AI infrastructure boom with the internet boom. The current situation is less of a bubble because Nvidia is not trading at that large a multiple of its real earnings. They are certain that the current build-out phase will …

Read more

Why is There a Shortage of Nvidia AI Chips?

TSMC’s (Taiwan Semiconductor) 2.5D advanced packaging technology, CoWoS (Chip on Wafer on Substrate), is currently the primary packaging technology used for AI chips. The production capacity of CoWoS packaging is a major bottleneck in AI chip output and will remain a constraint on AI chip supply in 2024. Nvidia H100 (AI chips) …

Read more