NVIDIA's 8-teraflop deep learning supercomputer for self-driving cars

NVIDIA is applying its deep learning prowess to enable autonomous vehicles. The GPU vendor launched NVIDIA DRIVE PX 2, an autonomous vehicle development platform powered by the 16nm FinFET-based Pascal GPU, the successor to Maxwell. Like last year's DRIVE PX, the next-generation development platform targets NVIDIA's automotive partners, a growing list that includes Audi, BMW, Daimler, Ford and dozens more.

Equipped with two Tegra SoCs (with ARM cores) plus two discrete Pascal GPUs, the new platform delivers up to 24 trillion deep learning operations per second, 10 times what the previous-generation product offered. In terms of general computing capability, DRIVE PX 2 offers an aggregate 8 teraflops of single-precision performance, a four-fold increase over the original DRIVE PX. In addition to the pertinent interfaces and middleware, the development platform includes the Caffe deep learning framework to run DNN models designed and trained on DIGITS, NVIDIA's interactive deep learning training system.
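For a concrete sense of the Caffe-plus-DIGITS workflow mentioned above, here is a minimal sketch of loading a DIGITS-trained Caffe model and running inference with pycaffe. The file names, blob names ('data', 'prob') and the input image are assumptions based on common Caffe conventions, not code shipped with DRIVE PX 2.

```python
# Minimal sketch (not NVIDIA's shipped code): running a DIGITS-trained Caffe
# model with pycaffe. File names and blob names are assumed conventions.
import caffe

caffe.set_mode_gpu()  # use the GPU for inference; set_mode_cpu() otherwise

# DIGITS exports a deploy prototxt and a .caffemodel snapshot after training.
net = caffe.Net('deploy.prototxt', 'snapshot_iter_10000.caffemodel', caffe.TEST)

# Standard pycaffe preprocessing: transpose to CHW, swap channels to BGR,
# and scale pixel values to the 0-255 range the model expects.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255.0)

image = caffe.io.load_image('camera_frame.jpg')  # hypothetical input frame
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()

# For a classification-style DNN, the 'prob' blob holds per-class scores.
print('top class:', output['prob'][0].argmax())
```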

Reprising a conversation he had with Elon Musk on stage at GTC15, NVIDIA CEO Jen-Hsun Huang noted that humans are the least reliable part of the car, responsible for most of the one million automotive-related fatalities each year. Replacing the human altogether, Huang said, would therefore be a great contribution to society. Perception is the central challenge, and deep learning can now achieve super-human perception capability. DRIVE PX 2 can process input from 12 video cameras plus lidar, radar and ultrasonic sensors. This 360-degree assessment makes it possible to detect objects, identify them and their positions relative to the car, and then calculate a safe and comfortable trajectory.
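The perception-to-planning loop Huang described, fusing detections from cameras, lidar and radar and then choosing a trajectory, might look like the following purely illustrative Python sketch. The classes, thresholds and the naive fusion and lane-offset logic are assumptions for illustration only; they are not NVIDIA's DriveWorks API or its actual planner.

```python
# Purely illustrative sketch of a fuse-then-plan loop; all names and numbers
# are assumptions, not NVIDIA software.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str   # e.g. 'car', 'pedestrian'
    x: float     # metres ahead of the vehicle (+ = forward)
    y: float     # metres to the side (+ = left)

def fuse(camera: List[Detection], lidar: List[Detection],
         radar: List[Detection]) -> List[Detection]:
    """Naive 360-degree fusion: pool detections from every sensor."""
    return camera + lidar + radar

def safe_lane_offset(objects: List[Detection], lookahead: float = 30.0) -> float:
    """Pick a lateral offset (metres) that keeps clear of objects in our path."""
    blocking = [o for o in objects if 0.0 < o.x < lookahead and abs(o.y) < 1.5]
    if not blocking:
        return 0.0            # path is clear, hold the lane centre
    nearest = min(blocking, key=lambda o: o.x)
    return 1.5 if nearest.y < 0 else -1.5  # nudge away from the nearest object

objects = fuse(
    camera=[Detection('pedestrian', 12.0, 0.3)],
    lidar=[Detection('car', 45.0, -3.5)],
    radar=[],
)
print('steer offset (m):', safe_lane_offset(objects))
```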

SOURCE – HPCWire