Sam Altman's Claim of 1 Million OpenAI GPUs Is Not One Data Center

Sam Altman's recent claim (mid-July 2025) that OpenAI will bring "well over 1 million GPUs online by the end of this year" appears to refer to aggregate compute capacity across many partnerships and locations. The main Stargate site in Texas is planned to have 400,000 chips installed by mid-2026.

The Oracle partnership announcement, alongside reports of funding and execution hurdles, suggests Altman's framing is misleading. Multiple sources indicate Stargate has faced significant delays, with near-term plans scaled back to smaller facilities rather than rapid hyperscale deployments. Based on public timelines, only modest GPU additions (roughly 30k-60k net new in 2025) appear feasible, though OpenAI could count existing Microsoft Azure capacity and emerging Oracle capacity to pad the total.
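A back-of-envelope tally shows how an aggregate count across providers can cross seven figures even when no single site comes close. Every number and site label below is an illustrative placeholder, not a reported figure:

```python
# Hedged back-of-envelope: aggregate GPU counts across sites/providers.
# All site names and numbers below are assumed placeholders for illustration,
# not confirmed deployment figures.
sites = {
    "existing_azure_capacity": 550_000,   # assumed pre-2025 fleet
    "stargate_texas_phase1": 60_000,      # assumed partial 2025 install
    "oracle_oci_capacity": 250_000,       # assumed leased capacity
    "other_partners": 150_000,            # assumed smaller deals
}

total = sum(sites.values())
largest_single_site = max(sites.values())

print(f"aggregate fleet:     {total:,}")
print(f"largest single site: {largest_single_site:,}")
```

Under these placeholder numbers the aggregate clears 1 million while the largest single site stays near half that, which is the distinction the headline claim glosses over.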

xAI's Colossus in Memphis, TN, sets the benchmark for fast, coherent GPU clusters (a single high-bandwidth fabric across chips for efficient training). OpenAI has not matched this yet; its approach uses multi-datacenter setups, which enable scale but sacrifice single-cluster coherence because of inter-site latency and network constraints.
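The coherence penalty can be sketched with rough numbers. All bandwidths, latencies, and sizes below are assumed for illustration, and the cost model is a deliberately simplified all-reduce estimate, not a real profiler result:

```python
# Rough illustration (all numbers assumed) of why cross-site training
# sacrifices coherence: per-step gradient synchronization is gated by
# the slowest link's bandwidth and round-trip latency.
gradient_bytes = 350e9    # assume ~350 GB of fp16 gradients per step
intra_bw = 400e9 / 8      # assume 400 Gb/s intra-cluster fabric, in bytes/s
inter_bw = 100e9 / 8      # assume 100 Gb/s WAN link between sites, in bytes/s
intra_rtt = 10e-6         # assume ~10 microseconds inside one hall
inter_rtt = 10e-3         # assume ~10 ms between distant datacenters

def sync_time(size_bytes, bw_bytes_per_s, rtt_s):
    """Simplified all-reduce cost: ~2x the data volume over the
    bottleneck link, plus one round-trip of latency."""
    return 2 * size_bytes / bw_bytes_per_s + rtt_s

print(f"intra-cluster sync: {sync_time(gradient_bytes, intra_bw, intra_rtt):.1f} s")
print(f"inter-site sync:    {sync_time(gradient_bytes, inter_bw, inter_rtt):.1f} s")
```

Even with generous WAN links, the inter-site step is several times slower, which is why multi-datacenter setups trade single-cluster coherence for raw scale.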

Meta's clusters sit at roughly 100k GPUs and are expanding toward 300k+. Google's TPU fleet is enormous (millions of chips in aggregate) but is not GPU-coherent in one building. China's DeepSeek trains at large scale on fragmented clusters of weaker chips. No one else has matched xAI's fast-install pace (roughly 100k GPUs in weeks), let alone at 1M+ coherent scale; OpenAI and Oracle are betting on distributed efficiency rather than single-site mastery.