AI growth has re-accelerated.
Nvidia earned an adjusted $1.30 a share on sales of $57 billion in the quarter ended Oct. 26, beating the FactSet analyst consensus of $1.26 a share on sales of $54.9 billion. In the year-earlier period, Nvidia earned 81 cents a share on sales of $35.08 billion.
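As a quick sanity check on those figures, the beat and the year-over-year growth rates work out as follows (a back-of-the-envelope sketch using only the numbers reported above, not an official calculation):

```python
# Quick check of Nvidia's Q3 beat and year-over-year growth,
# using the figures reported in the article.

def pct_change(new, old):
    """Percentage change from old to new, rounded to one decimal place."""
    return round((new / old - 1) * 100, 1)

revenue, revenue_est, revenue_prior = 57.0, 54.9, 35.08   # $ billions
eps, eps_prior = 1.30, 0.81                               # $ per share

print(f"Revenue beat vs. estimates: {pct_change(revenue, revenue_est)}%")   # ~3.8%
print(f"Revenue growth YoY: {pct_change(revenue, revenue_prior)}%")         # ~62.5%
print(f"EPS growth YoY: {pct_change(eps, eps_prior)}%")                     # ~60.5%
```

So both revenue and adjusted EPS grew roughly 60% or more from the year-earlier quarter.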
“Blackwell sales are off the charts, and cloud GPUs are sold out,” said Jensen Huang, founder and CEO of NVIDIA. “Compute demand keeps accelerating and compounding across training and inference — each growing exponentially. We’ve entered the virtuous cycle of AI. The AI ecosystem is scaling fast — with more new foundation model makers, more AI startups, across more industries, and in more countries. AI is going everywhere, doing everything, all at once.”




Outlook
NVIDIA’s outlook for the fourth quarter of fiscal 2026 is as follows:
Revenue is expected to be $65.0 billion, plus or minus 2%.
GAAP and non-GAAP gross margins are expected to be 74.8% and 75.0%, respectively, plus or minus 50 basis points.
GAAP and non-GAAP operating expenses are expected to be approximately $6.7 billion and $5.0 billion, respectively.
GAAP and non-GAAP other income and expense are expected to be an income of approximately $500 million, excluding gains and losses from non-marketable and publicly-held equity securities.
GAAP and non-GAAP tax rates are expected to be 17.0%, plus or minus 1%, excluding any discrete items.
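The guidance ranges above can be turned into concrete dollar figures. The sketch below computes the revenue range from the stated ±2% tolerance; the implied operating-income line is my own back-of-the-envelope estimate from the midpoints, not company guidance:

```python
# Q4 FY2026 guidance ranges as stated in NVIDIA's outlook.
# The implied operating income is an illustrative estimate, not guidance.

rev_mid, rev_tol = 65.0, 0.02          # revenue midpoint ($B), +/- 2%
rev_low = rev_mid * (1 - rev_tol)
rev_high = rev_mid * (1 + rev_tol)
print(f"Revenue range: ${rev_low:.1f}B to ${rev_high:.1f}B")   # $63.7B to $66.3B

gm_mid = 0.750                         # non-GAAP gross margin midpoint
opex = 5.0                             # non-GAAP operating expenses ($B)
op_income = rev_mid * gm_mid - opex    # implied non-GAAP operating income
print(f"Implied non-GAAP operating income at midpoints: ~${op_income:.2f}B")  # ~$43.75B
```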
Highlights

Data Center
Third-quarter revenue was a record $51.2 billion, up 25% from the previous quarter and up 66% from a year ago.
Revealed that NVIDIA Blackwell achieved the highest performance and best overall efficiency in the SemiAnalysis InferenceMAX benchmarks, while delivering 10x throughput per megawatt compared with the previous generation.
Announced a strategic partnership with OpenAI to deploy at least 10 gigawatts of NVIDIA systems for OpenAI’s next-generation AI infrastructure.
Partnered with industry leaders, including Google Cloud, Microsoft, Oracle and xAI, to build America’s AI infrastructure with hundreds of thousands of NVIDIA GPUs.
Announced that, for the first time, Anthropic will run and scale on NVIDIA infrastructure, initially adopting 1 gigawatt of compute capacity with NVIDIA Grace Blackwell and Vera Rubin systems.
Announced a collaboration with Intel to jointly develop multiple generations of custom data center and PC products with NVIDIA NVLink.
Revealed plans to accelerate seven new supercomputers, including with Oracle to build the U.S. Department of Energy’s largest AI supercomputer, Solstice, featuring 100,000 NVIDIA Blackwell GPUs, plus another system, Equinox, featuring 10,000 NVIDIA Blackwell GPUs.
Celebrated the first NVIDIA Blackwell wafer produced on U.S. soil at TSMC’s Arizona facility, representing revitalization of U.S. manufacturing as Blackwell reached volume production.
Unveiled NVIDIA Rubin CPX, a new class of GPU purpose-built for massive-context processing.
Introduced NVIDIA NVQLink™, an open system architecture for tightly coupling the extreme performance of NVIDIA GPU computing with quantum processors, which will be adopted by more than a dozen supercomputing centers globally.
Revealed that Arm is extending its Neoverse platform with NVIDIA NVLink Fusion™ to accelerate AI data center adoption.
Revealed that Meta, Microsoft and Oracle will boost their AI data center networks with NVIDIA Spectrum-X™ Ethernet networking switches.
Introduced NVIDIA Omniverse™ DSX, a comprehensive, open blueprint for designing and operating gigawatt-scale AI factories.
Launched NVIDIA BlueField-4, the processor for the operating system of AI factories, with industry leaders including CoreWeave, Dell Technologies, Oracle Cloud Infrastructure, Palo Alto Networks, Red Hat and VAST Data building next-generation BlueField®-accelerated data center platforms.
Partnered with Nokia to add NVIDIA-powered AI-RAN products to Nokia’s industry-leading RAN portfolio, enabling communication service providers to launch AI-native 5G-Advanced and 6G networks on NVIDIA platforms.
Unveiled the all-American AI-RAN stack to accelerate the path to 6G with industry-leading partners Booz Allen, Cisco, MITRE, ODC and T-Mobile.
Teamed with Palantir Technologies to build a first-of-its-kind integrated technology stack for operational AI.
Set records on the new MLPerf Inference v5.1 benchmark with NVIDIA Blackwell Ultra, and won every MLPerf Training v5.1 benchmark.
Revealed that NVIDIA is working with partners including CoreWeave, Microsoft and Nscale to build the U.K.’s next generation of AI infrastructure, and announced an investment of £2 billion in the U.K. market.
Launched the world’s first Industrial AI Cloud with Deutsche Telekom to power the AI era of Germany’s industrial transformation.
Announced that NVIDIA is working with the South Korean government and industrial leaders, including Hyundai Motor Group, Samsung Electronics, SK Group and NAVER Cloud, to expand the nation’s AI infrastructure with over a quarter-million NVIDIA GPUs.



Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
I saw Gary Marcus claim on X that Gemini 3.0 was trained entirely on Google’s own TPU chips, which could spell trouble for Nvidia.