xAI is completing the installation of 550,000 Nvidia B200 GPUs in a matter of weeks. This is far faster than previously expected. It is over 7 times the compute used to train Grok 4, which was released just weeks ago, and 30 times the compute used to train Grok 3, which was released in February of this year.
There is a picture of the wiring inside the Colossus 2 data center.
Grok 3 was trained on 100,000 Nvidia H100 chips just 6 months ago. This was 12 times the compute of Grok 2, which came 6 months before Grok 3.
Grok 4 used 230,000 Nvidia chips (150K H100, 50K H200 and 30K B200), equivalent to about 400,000 H100 chips. That is four times the compute in about 3 months.
The installation speed for Grok 3 was over twenty times faster than anyone else: 100,000 chips in 120 days, and the next 100,000 in about 90 days. The new installation speed is about 200,000 to 300,000 chips in 30 days. xAI is likely 100 times faster than others at installing GPU chips in data centers.
The build-out of power is also far faster than at any other company. The 550,000 B200 chips will need about 1 gigawatt of power. This is up from 400-500 megawatts for the 230,000 chips used for Grok 4.
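The H100-equivalent figures above can be checked with simple arithmetic. The per-chip factors below (H200 ≈ 2x, B200 ≈ 5x an H100) are illustrative assumptions, not official Nvidia or xAI numbers; they are chosen so the article's 400,000 H100-equivalent figure for Grok 4 works out.

```python
# Hypothetical per-chip compute factors relative to one H100.
# Assumed for illustration only, not official Nvidia or xAI figures.
H100_EQUIV = {"H100": 1, "H200": 2, "B200": 5}

def h100_equivalents(chips):
    """Sum H100-equivalent compute for a dict of {chip_model: count}."""
    return sum(count * H100_EQUIV[model] for model, count in chips.items())

grok3 = h100_equivalents({"H100": 100_000})
grok4 = h100_equivalents({"H100": 150_000, "H200": 50_000, "B200": 30_000})
colossus2 = h100_equivalents({"B200": 550_000})

print(grok4)              # 400000, matching the article's Grok 4 figure
print(colossus2 / grok4)  # ~6.9, close to the "over 7 times" claim
print(colossus2 / grok3)  # 27.5, roughly the "30 times Grok 3" figure
```

Under these assumed factors, the three headline ratios in the article are mutually consistent.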
The @xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years.
— Elon Musk (@elonmusk) July 22, 2025
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
All things being equal, that would be impressive.
However, at this stage Grok is behind other models in spite of all that compute. Grok 4 is like working with a human with Alzheimer's compared to Grok 3. Basically, one can only ask standalone prompts to Grok 4.
I hope they will catch up with the algorithmic side of things and put all that compute to good use.