In a fascinating bit of silicon serendipity, it turns out that the same technology that can conjure up a gorgeous alien landscape or paint a picture-perfect explosion is also nearly optimal for the hottest area of AI: deep learning. Deep learning enables a computer to learn by itself, without programmers having to code everything by hand, and it’s leading to unparalleled levels of accuracy in areas like image and speech recognition.
Tech giants like Google, Microsoft, Facebook and Amazon are buying ever larger quantities of Nvidia’s chips for their data centers. Institutions like Massachusetts General Hospital are using Nvidia chips to spot anomalies in medical images like CT scans. Tesla recently announced it would be installing Nvidia GPUs in all of its cars to enable autonomous driving. Nvidia chips provide the horsepower underlying virtual reality headsets, like those being brought to market by Facebook and HTC.
There are an estimated 3,000 AI startups worldwide, and many of them are building on Nvidia’s platform. They’re using Nvidia’s GPUs to put AI into apps for trading stocks, shopping online and navigating drones. There’s even an outfit called June that’s using Nvidia’s chips to make an AI-powered oven.
“We’ve been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia’s platform,” says Marc Andreessen of venture capital firm Andreessen Horowitz. “It’s like when people were all building on Windows in the ’90s or all building on the iPhone in the late 2000s.”
Nvidia’s dominance of the GPU sector (more than a 70% share) and its expansion into these new markets have sent its stock soaring. Its shares are up almost 200% in the past 12 months and more than 500% in the past five years. With a market cap of $50 billion, Nvidia now trades at more than 40 times trailing earnings, among the highest multiples in the industry.
Nvidia has increasingly optimized its hardware for deep learning. It has taken its latest server chip, the Tesla P100, and put eight of them into the DGX-1, a 3-foot-long, 5-inch-thin rectangular container that Nvidia calls “the world’s first AI supercomputer in a box.” The $130,000 machine delivers 170 teraflops of performance–on par with 250 conventional servers. In August Huang personally delivered the first unit to Elon Musk and his San Francisco AI nonprofit, OpenAI.
Virtually every major power in chips is suddenly chasing the AI dream. A slew of startups are emerging with new types of deep learning chip architecture. And the chip players aren’t the only ones excited. Deep learning is so vital to the future of the tech business that one of Nvidia’s most important customers–which has never before made its own chips–is now also a competitor: Google.
In May at its annual developer conference, Google announced it had built a custom chip called the Tensor Processing Unit, which is tailor-made for TensorFlow, its deep learning framework. Google said it had been equipping its data centers with these chips to improve its maps and search results.
Similarly, another Nvidia customer, Microsoft, is now outfitting its data centers with field-programmable gate arrays (FPGAs), chips that can be reprogrammed after they’re manufactured and have proven useful for AI applications.