Using HPL-AI, a new approach to benchmarking AI supercomputers, Oak Ridge National Laboratory’s Summit supercomputer reached 445 petaflops, nearly half an exaflop. The system’s official Linpack performance, announced in the new TOP500 list of the world’s fastest supercomputers, is 148 petaflops.
Running on Summit, the world’s fastest supercomputer, NVIDIA completed the HPL-AI computations in less than half an hour, a 3x speedup over the roughly 90 minutes Summit takes to run the original HPL.
“Ever since the delivery and installation of our 200 petaflops Summit system — which included the mixed-precision Tensor Core capability powered by NVIDIA’s Volta GPU — it has been a goal of ours to not only use this unique aspect of the system to do AI but also to use it in our traditional HPC workloads,” said Jeff Nichols, associate laboratory director at ORNL. “Achieving a 445 petaflops mixed-precision result on HPL (equivalent to our 148.6 petaflops DP result) demonstrates that this system is capable of delivering up to 3x more performance on our traditional and AI workloads. This gives us a huge competitive edge in delivering science at an unprecedented scale.”
Summit is loaded with more than 27,000 NVIDIA V100 GPUs, each with hundreds of Tensor Cores that support mixed-precision computing.
To reflect the AI techniques that define the new era of supercomputing, a new benchmark based on the HPL standard, called HPL-AI, uses the mixed-precision calculations widely used to train AI models.
Running HPL-AI on the Summit supercomputer affirms the feasibility of HPL-AI measurements at scale to gauge mixed-precision computing performance and complement the existing HPL benchmark.
“Mixed-precision techniques have become increasingly important to improve the computing efficiency of supercomputers, both for traditional simulations with iterative refinement techniques as well as for AI applications,” said Jack Dongarra, professor at the University of Tennessee and co-creator of the HPL benchmark. “Just as HPL allows benchmarking of double-precision capabilities, this new approach based on HPL allows benchmarking of mixed-precision capabilities of supercomputers at scale.”
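The iterative refinement Dongarra mentions is the core idea behind HPL-AI: factor and solve the linear system in low precision (fast on Tensor Cores), then recover full double-precision accuracy by repeatedly correcting against a double-precision residual. The following is a minimal NumPy sketch of that idea, not ORNL’s actual HPL-AI implementation; it uses float32 as the low-precision stand-in (NumPy’s solvers don’t run in FP16), and a real implementation would reuse the LU factors rather than re-solving each iteration.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b via low-precision solves plus double-precision refinement."""
    # Low-precision solve (stand-in for the FP16 Tensor Core factorization)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    # Iterative refinement: residual in double precision,
    # correction solved in low precision, solution updated in double
    for _ in range(iters):
        r = b - A @ x                                  # FP64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # FP32 correction
        x += d.astype(np.float64)
    return x

# Usage: a well-conditioned (diagonally dominant) random system
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200.0 * np.eye(200)
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
```

For a well-conditioned system like this, each refinement step shrinks the error by roughly the condition number times single-precision round-off, so a handful of iterations reaches double-precision accuracy while the expensive factorization work stays in low precision.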
Researchers Use Mixed-Precision Supercomputing for Simulations and AI
Scientists involved in chemistry, nuclear energy, and oil and gas are using NVIDIA GPU-powered computing resources for groundbreaking work that requires both AI and simulation.
* Nuclear fusion: Nuclear fusion effectively replicates the sun in a bottle. While it promises unlimited clean energy, fusion reactions involve working with temperatures above 10 million degrees Celsius. They’re also prone to disruptions, and tricky to sustain for more than a few seconds. Researchers at ORNL are simulating fusion reactions so that physicists can study the instabilities of the fusion plasma, giving them a better understanding of what’s happening inside the reactor. The mixed-precision capabilities of Tensor Core GPUs speed up these simulations by 3.5x, advancing the development of sustainable energy at leading facilities such as ITER.
* Identifying new molecules: Whether it’s to develop a new chemical compound for industrial use or a new drug to treat a disease, scientists need to identify and synthesize new molecules with desirable chemical properties. Using NVIDIA V100 GPUs for training and inference, Dow Chemical Company researchers developed a neural network to identify new molecules for use in the chemical manufacturing and pharmaceutical industries.
* Seismic fault interpretation: The oil and gas industry analyzes seismic images to detect fault lines, an essential step toward characterizing reservoirs and determining well placement. This process typically takes days to weeks for one iteration — but with an NVIDIA GPU, University of Texas researchers trained an AI model that can predict faults in mere milliseconds instead.
SOURCE: NVIDIA, ORNL
Written By Brian Wang