Nvidia Blackwell B200 Chip is 4X Faster than the H100 – 1 Exaflop in a Rack

The NVIDIA Blackwell platform was announced today. It will run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than the H100. The Blackwell GPU architecture has six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided …

Read more

China Developing 1.57 Exaflop Supercomputer With a China-Made CPU-GPU Chip

There are reports that China has a new superchip, the MT-3000, designed by the National University of Defense Technology (NUDT). The MT-3000 features a multi-zone structure that combines 16 general-purpose CPU cores with 96 control cores and 1,536 matrix accelerator cores. The MT-3000 …

Read more

Stepping Up to the $20+ Billion Per Year ASI Race

Microsoft was already spending over $10.7 billion per quarter to build out data centers and AI compute as of its fiscal Q4 2023. There are reports that Microsoft will increase this capital spending to $50 billion per year, which averages $12.5 billion per quarter. This capex is up from $7.8 billion in fiscal Q3 2023 (Jan-Mar 2023). …
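A quick back-of-the-envelope check of the figures above (a sketch only; it assumes the reported annual target is spread evenly across four quarters):

```python
# Back-of-the-envelope check of the Microsoft capex figures above.
# Assumes the ~$50B/year target is spread evenly across four quarters.
annual_capex_target = 50e9   # reported ~$50B/year target
current_quarterly = 10.7e9   # ~$10.7B in fiscal Q4 2023
prior_quarterly = 7.8e9      # ~$7.8B in fiscal Q3 2023

avg_quarterly_target = annual_capex_target / 4
print(f"Average quarterly spend at $50B/year: ${avg_quarterly_target / 1e9:.1f}B")
print(f"Step up vs the current quarter: {avg_quarterly_target / current_quarterly:.2f}x")
print(f"Growth already seen quarter over quarter: {current_quarterly / prior_quarterly:.2f}x")
```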

Read more

HPE and Nvidia Will Supply the UK With a 21 Exaflop AI Supercomputer

Isambard-AI will be built using the HPE Cray EX supercomputer, a next-generation platform architected for unprecedented performance and scale. It will consist of 5,448 NVIDIA GH200 Grace Hopper Superchips, which combine NVIDIA’s Arm-based Grace CPU with a Hopper-based GPU optimized for power efficiency and giant-scale AI, along with the latest HPE Slingshot 11 interconnect, and …
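A rough sanity check on the headline number (an illustrative sketch; it assumes the 21 exaflop figure is low-precision AI throughput summed linearly across all superchips):

```python
# Implied per-chip AI throughput for Isambard-AI, assuming the 21 exaflop
# headline is low-precision AI compute summed linearly across superchips.
total_ai_exaflops = 21
num_superchips = 5448

per_chip_petaflops = total_ai_exaflops * 1000 / num_superchips
print(f"Implied AI throughput per GH200 superchip: {per_chip_petaflops:.2f} petaflops")
# ~3.85 petaflops per superchip, in line with Hopper-class FP8 peak figures.
```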

Read more

First Generation Tachyum AI Data Centers With 1800 Exaflops in 6,000 Square Feet

Tachyum is a startup whose Prodigy processors are intended to enable hyperscale data centers at 25% of the cost (4X lower cost), saving each hyperscale customer billions of dollars per year. First-generation Prodigy data centers will offer 3.3 EF of FP64, three times the performance of existing supercomputers, and also deliver around …
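A rough density and cost check using the figures in the title and teaser (an illustrative sketch; it assumes the 1,800 exaflops refer to AI compute housed entirely within the quoted 6,000 square feet):

```python
# Rough compute-density and cost-ratio check for a first-generation Tachyum
# data center, using the 1,800 AI exaflops, 6,000 sq ft, and 25%-of-cost
# figures quoted above (assumes all compute fits in that footprint).
ai_exaflops = 1800
floor_space_sqft = 6000
cost_fraction = 0.25   # "25% of the cost"

print(f"AI compute density: {ai_exaflops / floor_space_sqft:.2f} exaflops per square foot")
print(f"Cost reduction: {1 / cost_fraction:.0f}x lower cost")
```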

Read more

Chip Startup Tachyum Will Make 50 Exaflop Supercomputers and 8 Zettaflop AI Computers in 2025

Tachyum’s first chip, Prodigy, has not completely finished its design, but one customer will buy hundreds of thousands of Universal Processor chips to build a 50 exaFLOPS supercomputer. Tachyum® announced that it has accepted a major purchase order from a US company to build a large-scale system, based on its 5nm Prodigy® Universal Processor chip, which …

Read more

Cerebras 4 Exaflop AI Training Supercomputer

Cerebras Systems and G42, the UAE-based technology holding group, announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPs and 54 million cores. Cerebras and G42 …
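A rough projection of the full network (an illustrative sketch; it assumes each of the nine planned systems matches CG-1, which is not stated above):

```python
# Rough projection of the full Condor Galaxy network, assuming each of the
# nine planned supercomputers matches CG-1 (only CG-1's figures are given above).
cg1_exaflops = 4          # AI exaFLOPs in Condor Galaxy 1
cg1_cores = 54_000_000    # cores in CG-1
num_systems = 9

print(f"Projected network AI compute: {cg1_exaflops * num_systems} exaFLOPs")
print(f"Projected total cores: {cg1_cores * num_systems / 1e6:.0f} million")
```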

Read more

Nvidia and the Acceleration of AI to Exaflops and 144 Terabyte LLM

Generative AI lets us learn the structure of any kind of information. Nvidia CEO Jensen Huang gives the keynote address at COMPUTEX 2023. He unveils platforms companies can use to ride a historic wave of generative AI that is transforming industries, from advertising to manufacturing to telecom. The Nvidia Grace Hopper superchip can see 600 Gigabytes …
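A back-of-the-envelope link between the per-superchip memory mentioned above and the 144 terabyte figure in the title (a sketch only; it assumes memory pools linearly across superchips over NVLink):

```python
# How many Grace Hopper superchips are needed to pool roughly 144 TB of
# memory for a giant LLM, assuming ~600 GB visible per superchip (the figure
# in the teaser) and linear pooling over NVLink.
memory_per_superchip_gb = 600
target_memory_tb = 144

superchips_needed = target_memory_tb * 1000 / memory_per_superchip_gb
print(f"Superchips needed: ~{superchips_needed:.0f}")   # ~240
# Nvidia's DGX GH200 links 256 superchips into ~144 TB of shared memory,
# consistent with this scale.
```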

Read more

Intel and AMD Path to Zettaflop Supercomputers

In 2021, Intel talked about reaching Zettascale in three phases. AMD’s CEO talked about getting to a Zettaflop in 2030-2035 and indicated that an earlier Zettaflop supercomputer would need about 500 Megawatts of power. Exascale systems today consume about 21 MW of power. AMD and Intel have managed to roughly double the performance of their CPUs and …
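The power figures above imply a large efficiency gap; a quick calculation makes it concrete (a sketch using only the quoted 21 MW and 500 MW numbers):

```python
# Efficiency gap implied by the figures above: scaling from ~1 exaflop at
# ~21 MW today to a zettaflop machine within a ~500 MW power budget.
exaflop_power_mw = 21
zettaflop_power_budget_mw = 500
performance_ratio = 1000   # a zettaflop is 1000x an exaflop

power_ratio = zettaflop_power_budget_mw / exaflop_power_mw
efficiency_gain_needed = performance_ratio / power_ratio
print(f"Allowed power growth: {power_ratio:.1f}x")
print(f"Required improvement in flops per watt: ~{efficiency_gain_needed:.0f}x")
```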

Read more

16 Wafer Scale Chips for an Exaflop AI Supercomputer

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today unveiled Andromeda, a 13.5 million core AI supercomputer, now available and being used for commercial and academic work. Built with a cluster of 16 Cerebras CS-2 systems and leveraging Cerebras MemoryX and SwarmX technologies, Andromeda delivers more than 1 Exaflop of AI compute and …
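A quick per-system breakdown of Andromeda (an illustrative sketch; it assumes cores and AI compute are split evenly across the 16 CS-2 systems):

```python
# Per-system breakdown of Andromeda, assuming cores and AI compute are split
# evenly across the 16 CS-2 systems.
total_cores = 13_500_000
total_ai_exaflops = 1
num_cs2_systems = 16

print(f"Cores per CS-2: ~{total_cores / num_cs2_systems:,.0f}")
print(f"AI petaflops per CS-2: ~{total_ai_exaflops * 1000 / num_cs2_systems:.1f}")
# Consistent with a single CS-2 wafer's advertised ~850,000 cores.
```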

Read more