Race for ExaFlop Supercomputers with GPU-dominant machines

The world of supercomputers is shifting rapidly, with most of the compute power now coming from GPU acceleration. In the June 2018 TOP500 list, 56 percent of the newly added flops came from NVIDIA Tesla GPUs running in new supercomputers.

Most of the added supercomputing power came from three top systems new to the list: Summit, Sierra, and the AI Bridging Cloud Infrastructure (ABCI).

Summit surpassed the 93-petaflop Sunway TaihuLight with a Linpack score of 122.3 petaflops. Summit is powered by IBM servers, each equipped with two Power9 CPUs and six V100 GPUs. According to NVIDIA, 95 percent of Summit's peak performance (187.7 petaflops) is derived from the system's 27,686 GPUs.
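To see what that 95 percent figure implies, here is a minimal back-of-the-envelope sketch in Python, using only the numbers quoted above (187.7 petaflops peak, a 95 percent GPU share, and 27,686 GPUs). The script and its variable names are purely illustrative and not from the article or from NVIDIA.

```python
# Back-of-the-envelope split of Summit's peak flops, using the figures quoted above.
PEAK_PFLOPS = 187.7   # Summit's quoted peak performance, in petaflops
GPU_SHARE = 0.95      # fraction of peak attributed to the GPUs (per NVIDIA)
GPU_COUNT = 27_686    # GPU count as quoted above

gpu_pflops = PEAK_PFLOPS * GPU_SHARE             # total peak coming from GPUs
cpu_pflops = PEAK_PFLOPS - gpu_pflops            # remainder from the Power9 CPUs
per_gpu_tflops = gpu_pflops * 1_000 / GPU_COUNT  # implied teraflops per V100

print(f"GPU contribution: {gpu_pflops:.1f} PF")          # ~178.3 PF
print(f"CPU contribution: {cpu_pflops:.1f} PF")          # ~9.4 PF
print(f"Implied per-GPU peak: {per_gpu_tflops:.2f} TF")  # ~6.44 TF per V100
```

In other words, the quoted numbers imply roughly 6.4 peak teraflops per V100, with the Power9 CPUs supplying only about 9 petaflops of the machine's peak.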

Sierra now ranks as the third fastest supercomputer in the world at 71.6 Linpack petaflops. Although it is very similar to Summit, it has four V100 GPUs in each dual-socket Power9 node rather than six. The 17,280 GPUs in Sierra still represent the lion's share of that system's flops.

The new ABCI machine in Japan is ranked fifth in the world. Each of its servers pairs two Intel Xeon Gold CPUs with four V100 GPUs.

ExaFlop supercomputers in 2020 to 2021

In 2015, China unveiled a plan to produce an exascale machine by the end of 2020. Depei Qian, a professor at Beihang University in Beijing who helps manage the country's exascale effort, said they might still hit the end of 2020, but that the schedule could slip by 12 to 18 months.

China has three prototype exascale machines. Two use domestic chips derived from work on existing supercomputers the country has developed. The third uses licensed processor technology.

A decision on which of these designs to scale up to an exascale system has been delayed.

The USA has moved its exascale target up from 2023 to 2021, Japan is targeting 2021, and Europe is targeting 2023.
