Tesla Not Compute Limited for FSD AI Means 100+ Exaflops and 1+ Exabyte of Cache Memory

Tesla indicated in August 2023 that it was activating a 10,000-GPU Nvidia H100 cluster and over 200 petabytes of hot-cache (NVMe) storage. This storage holds the massive amount of video driving data used to train the FSD AI. Elon Musk posted yesterday that Tesla FSD training is no longer compute constrained. Tesla has likely activated …
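As a rough sanity check on the headline numbers, here is a back-of-envelope sketch of cluster throughput. The ~2 petaflops of dense FP8 per H100 is an assumed, rounded figure (the published spec is roughly 1.98 PFLOPS), not a number from the article:

```python
# Back-of-envelope AI training throughput for a 10,000-GPU H100 cluster.
# Per-GPU figure is an assumption, rounded for illustration.
H100_FP8_PFLOPS = 2.0   # assumed dense FP8 throughput per H100, petaflops
NUM_GPUS = 10_000       # cluster size Tesla reported activating in 2023

cluster_exaflops = NUM_GPUS * H100_FP8_PFLOPS / 1_000  # 1 exaflop = 1,000 petaflops
print(f"{cluster_exaflops:.0f} exaflops")  # → 20 exaflops
```

Under these assumptions, reaching the 100+ exaflops of the headline would take roughly five times the equivalent compute of the original 10,000-GPU cluster.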

Read more

Nvidia Blackwell B200 Chip is 4X Faster than the H100 – 1 Exaflop in a Rack

The NVIDIA Blackwell platform was announced today. It will run real-time generative AI on trillion-parameter large language models at up to 25x lower cost and energy consumption than the H100. The Blackwell GPU architecture has six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided …
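The "1 exaflop in a rack" claim can be sketched with simple arithmetic. Both figures below are assumptions for illustration: a 72-GPU rack (Nvidia's GB200 NVL72 configuration) and roughly 20 PFLOPS of low-precision (FP4) inference per Blackwell GPU:

```python
# Rough check of the "exaflop in a rack" claim.
# Both constants are assumed, rounded figures, not quoted specs.
GPUS_PER_RACK = 72        # assumed GB200 NVL72-style rack
B200_FP4_PFLOPS = 20.0    # assumed FP4 inference throughput per GPU, petaflops

rack_exaflops = GPUS_PER_RACK * B200_FP4_PFLOPS / 1_000
print(f"{rack_exaflops:.2f} exaflops per rack")  # → 1.44 exaflops per rack
```

Even with conservative per-GPU numbers, a dense 72-GPU rack lands at or above the one-exaflop mark for low-precision inference.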

Read more

Nvidia’s AI Platform for All Major Humanoid Bots Except Teslabot

Nvidia has built a software, AI and hardware platform to make developing humanoid robots far faster and easier, with software to help with testing, learning and development. Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science …

Read more

All in Podcast Makes the Case for the AI Decade and Terminal Nvidia Value of $10 Trillion

The All In Podcast had a discussion and debate about the AI decade, comparing the current AI infrastructure boom with the internet boom. The current situation is less of a bubble because Nvidia is not trading at that large a multiple of its real earnings. They are certain that the current build-out phase will …

Read more

Why is There a Shortage of Nvidia AI Chips?

TSMC’s (Taiwan Semiconductor) 2.5D advanced packaging technology, CoWoS (Chip on Wafer on Substrate), is currently the primary packaging technology used for AI chips. The production capacity of CoWoS packaging is a major bottleneck in AI chip output and will remain a constraint on AI chip supply in 2024. Nvidia H100 (AI chips) …

Read more

FigureAI Gets $675 Million from Nvidia, Bezos, OpenAI and Microsoft

Humanoid robot company FigureAI has received $675 million in funding from a group of investors: Nvidia, Jeff Bezos, OpenAI and Microsoft. They are not only powerful investors but strong partners for the development and rollout of humanoid robots. I, Brian Wang, described the massive investments coming into humanoid bots from all of the Big Tech …

Read more

Samsung, SK Hynix and Micron Battle for HBM3e AI Memory

High Bandwidth Memory (HBM) is a type of DRAM technology that offers a number of advantages: lower voltages – HBM is designed to operate at lower voltages, so it generates less heat; higher capacity – HBM can store and process more data at once than previous generations; faster training times – HBM3 Gen2 …
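The capacity and bandwidth advantages come largely from stacking: aggregate figures scale with the number of HBM stacks on the package. The per-stack numbers below are illustrative assumptions in the ballpark of HBM3e-class parts, not quoted specs:

```python
# Illustrative aggregate HBM figures for a high-end AI accelerator.
# All three constants are assumptions, chosen for illustration only.
STACKS_PER_GPU = 8          # assumed number of HBM stacks on the package
STACK_BANDWIDTH_TBS = 1.2   # assumed per-stack bandwidth, TB/s
STACK_CAPACITY_GB = 24      # assumed per-stack capacity, GB

print(f"bandwidth: {STACKS_PER_GPU * STACK_BANDWIDTH_TBS:.1f} TB/s")  # → 9.6 TB/s
print(f"capacity:  {STACKS_PER_GPU * STACK_CAPACITY_GB} GB")          # → 192 GB
```

More bandwidth per GPU means each training step spends less time waiting on memory, which is why Samsung, SK Hynix and Micron are racing to ship faster HBM3e stacks.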

Read more