Nvidia has released its quarterly results, beating on revenue with $30 billion versus expectations of $27 billion.
Nvidia is guiding to $32.5 billion in revenue for the next quarter versus expectations of $31 billion.
Record quarterly revenue of $30.0 billion, up 15% from Q1 and up 122% from a year ago
Record quarterly Data Center revenue of $26.3 billion, up 16% from Q1 and up 154% from a year ago
AJ nailed the Nvidia revenue and EPS.
🔥BREAKING: Nvidia reports $0.68 per share in quarterly earnings! I NAILED IT. I predicted $0.68!
I beat Wall Street just like the last few (WS was way too low at $0.64).
If you're a Nvidia investor, you should really follow me. You're missing out. https://t.co/75v1EKnXB8
— AJ (@alojoh) August 28, 2024

Nvidia reported revenue for the second quarter ended July 28, 2024, of $30.0 billion, up 15% from the previous quarter and up 122% from a year ago.
For the quarter, GAAP earnings per diluted share was $0.67, up 12% from the previous quarter and up 168% from a year ago. Non-GAAP earnings per diluted share was $0.68, up 11% from the previous quarter and up 152% from a year ago.
“Hopper demand remains strong, and the anticipation for Blackwell is incredible,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA achieved record revenues as global data centers are in full throttle to modernize the entire computing stack with accelerated computing and generative AI.”
“Blackwell samples are shipping to our partners and customers. Spectrum-X Ethernet for AI and NVIDIA AI Enterprise software are two new product categories achieving significant scale, demonstrating that NVIDIA is a full-stack and data center-scale platform. Across the entire stack and ecosystem, we are helping frontier model makers to consumer internet services, and now enterprises. Generative AI will revolutionize every industry.”
During the first half of fiscal 2025, NVIDIA returned $15.4 billion to shareholders in the form of shares repurchased and cash dividends. As of the end of the second quarter, the company had $7.5 billion remaining under its share repurchase authorization. On August 26, 2024, the Board of Directors approved an additional $50.0 billion in share repurchase authorization, without expiration.
NVIDIA will pay its next quarterly cash dividend of $0.01 per share on October 3, 2024, to all shareholders of record on September 12, 2024.
On June 7, 2024, NVIDIA completed a ten-for-one forward stock split. All share and per-share amounts presented have been retroactively adjusted to reflect the stock split.
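The retroactive adjustment works by simple arithmetic: a ten-for-one forward split multiplies the share count by ten, so every historical per-share figure is divided by ten. As an illustrative sketch (the helper function is not from the release, only the split ratio and the $0.68 non-GAAP EPS are):

```python
def split_adjust(per_share_amount: float, split_ratio: int) -> float:
    """Retroactively adjust a historical per-share figure for a forward stock split.

    A forward split multiplies shares outstanding by split_ratio, so each
    per-share amount (EPS, dividend) is divided by the same ratio; total
    earnings and total payout are unchanged.
    """
    return per_share_amount / split_ratio

# NVIDIA's reported $0.68 non-GAAP EPS on the post-split share count
# corresponds to $6.80 on the pre-split count.
post_split_eps = split_adjust(6.80, 10)
```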

Outlook
NVIDIA’s outlook for the third quarter of fiscal 2025 is as follows:
Revenue is expected to be $32.5 billion, plus or minus 2%.
GAAP and non-GAAP gross margins are expected to be 74.4% and 75.0%, respectively, plus or minus 50 basis points. For the full year, gross margins are expected to be in the mid-70% range.
GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid- to upper-40% range.
GAAP and non-GAAP other income and expense are expected to be an income of approximately $350 million, excluding gains and losses from non-affiliated investments and publicly held equity securities.
GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.
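The "plus or minus" phrasing in the outlook defines a guidance band. Note the two tolerances work differently: the ±2% on revenue is a fraction of the midpoint, while the ±50 basis points on gross margin is additive (74.5% to 75.5% around the 75.0% non-GAAP midpoint). A minimal sketch of the revenue band, assuming only the figures quoted above:

```python
def guidance_band(midpoint: float, tolerance_pct: float) -> tuple[float, float]:
    """Return (low, high) for a guidance midpoint with a +/- percentage tolerance.

    tolerance_pct is a fraction of the midpoint: 0.02 means +/- 2%.
    """
    return midpoint * (1 - tolerance_pct), midpoint * (1 + tolerance_pct)

# Q3 FY2025 revenue guidance: $32.5B +/- 2%
low, high = guidance_band(32.5, 0.02)  # $31.85B to $33.15B
```

The margin band is just addition, not a percentage of the midpoint: 75.0 ± 0.50 points gives 74.5% to 75.5%.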
Data Center
Second-quarter revenue was a record $26.3 billion, up 16% from the previous quarter and up 154% from a year ago.
Announced that the combination of NVIDIA H200 Tensor Core and NVIDIA Blackwell architecture B200 Tensor Core GPUs swept the latest industry-standard MLPerf benchmark results for inference.
Revealed that H200 GPU-powered systems are now available on CoreWeave, the first cloud service provider to announce general availability.
Unveiled an array of Blackwell systems featuring NVIDIA Grace™ CPUs, networking and infrastructure from top manufacturers such as GIGABYTE, QCT and Wiwynn.
Reported broad adoption of the NVIDIA Spectrum-X™ Ethernet networking platform by cloud service providers, GPU cloud providers and enterprises, as well as partners incorporating it into their offerings.
Released NVIDIA NIM™ for broad availability to developers globally and announced that more than 150 companies are integrating NIM microservices into their platforms to speed generative AI application development.
Unveiled an inference service with Hugging Face powered by NIM microservices on NVIDIA DGX™ Cloud to enable developers to deploy popular large language models.
Introduced an NVIDIA AI Foundry service and NIM inference microservices to accelerate generative AI for the world’s enterprises with the Llama 3.1 collection of models.
Announced Japan advanced its sovereign AI capabilities with its ABCI 3.0 supercomputer, integrating H200 GPUs and NVIDIA Quantum-2 InfiniBand networking.
Accelerated quantum computing efforts at national supercomputing centers around the world with the open-source NVIDIA CUDA-Q™ platform.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
AMD and others need to catch up a bit; those margins are brutal.
The imaginary dual-processor Blackwell is a no-show and has been cancelled. Nvidia cannot compete on performance without a multi-chip solution. The single-chip Blackwell is not competitive. Nvidia is coasting on ease of use and familiarity. Adopting anything new, better, half the price, and more energy-efficient is always difficult and time-consuming the first time. In the case of MI300 AI systems, the time lag is around six months. Three hundred corporations are validating MI300 systems. The incentive to succeed is high.