United States, China, Europe and Japan race to the Exaflop Supercomputer

1. The United States Department of Energy has commissioned two supercomputers. One is a system codenamed "Summit", which will be installed at Oak Ridge National Laboratory in Tennessee. It is designed to peak at 150 to 300 petaFLOPS in 2017. That is up to 300 quadrillion calculations per second, or about five times the peak speed of the 54.9 petaFLOPS Tianhe-2.

The other system, codenamed "Sierra", is designed to peak at more than 100 petaFLOPS and will be installed at Lawrence Livermore National Laboratory.

The two systems will together cost $325 million to build. The DOE has set aside a further $100 million to develop "extreme scale supercomputing", in other words an exaFLOP machine.

Assuming the 300 petaFLOP system is delivered on time and with the desired performance, a follow-on system with another 3X improvement in energy efficiency and performance would deliver an exaFLOP machine in about 2019 or 2020.
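As a rough back-of-the-envelope check of that reasoning (a minimal sketch using the figures quoted above, not an official DOE projection):

```python
# Rough scaling arithmetic for the DOE roadmap described above.
# Assumed figures: Summit peaking near 300 petaFLOPS in 2017, followed by
# a next generation with roughly a 3x performance improvement.
PFLOPS = 10**15                      # one petaFLOP in FLOPS

summit_peak = 300 * PFLOPS           # upper end of the 150-300 petaFLOPS target
next_gen_peak = 3 * summit_peak      # another ~3x improvement

print(f"Summit (2017 target):   {summit_peak / 1e15:.0f} petaFLOPS")
print(f"Follow-on (~2019/2020): {next_gen_peak / 1e18:.1f} exaFLOPS")
# prints ~0.9 exaFLOPS, i.e. roughly an exaFLOP-class machine
```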

2. China has two 100 petaflop supercomputer projects for 2015

*  China's 100 petaFLOP projects will have more than double the supercomputer investment of the previous five-year supercomputer project, with funding from:

– MOST (863 Program)
– Local government

*  Tianhe-2: 33.86 PFLOPS Linpack / 54.9 PFLOPS peak now, ~100 PFLOPS in 2015
*  Shenwei-x: ~100 PFLOPS

China will then have two 100 petaFLOP-class supercomputers.

China is targeting over 50 GFLOPS per watt for its exascale architecture.
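An efficiency target like that translates directly into a power budget. A minimal sketch of the arithmetic, assuming a 1 exaFLOP machine (the resulting wattage is our own calculation, not an announced figure):

```python
# Sustained power draw of an exaFLOP machine at a given efficiency target.
def power_megawatts(total_flops, gflops_per_watt):
    """Return power in megawatts for a machine of the given speed."""
    watts = total_flops / (gflops_per_watt * 1e9)
    return watts / 1e6

EXAFLOP = 1e18
print(power_megawatts(EXAFLOP, 50))   # 20.0 MW at 50 GFLOPS per watt
```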

China would be on track to an exaFLOP system in 2017 or 2018.

The DOE systems will mainly use parts from IBM and Nvidia, so any other country willing to invest about $600 million should also be able to get an exaFLOP system in 2020.

The newest Intel Knights Landing chip will provide an approximately 3x speed boost. China could swap out the 48,000 Xeon Phi cards and make Tianhe-2 into a 100+ petaFLOPS supercomputer. China will also likely upgrade the custom TH Express-2 interconnect.
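A hedged sketch of how a card swap gets there; the per-card peaks below are approximate public figures for Knights Corner and Knights Landing (roughly 1 and 3 double-precision teraFLOPS respectively), not details of the Tianhe-2 upgrade plan:

```python
# Approximate effect of replacing Tianhe-2's Knights Corner Xeon Phi cards
# with Knights Landing parts. Per-card peaks are rough assumptions.
num_phi_cards = 48_000
knc_tflops = 1.0      # assumed current card, double-precision peak
knl_tflops = 3.0      # assumed replacement card, double-precision peak

phi_pflops_now = num_phi_cards * knc_tflops / 1000   # ~48 petaFLOPS
host_pflops = 54.9 - phi_pflops_now                  # Xeon host CPUs cover the rest of the peak
after_swap = num_phi_cards * knl_tflops / 1000 + host_pflops

print(f"now: ~{phi_pflops_now + host_pflops:.0f} petaFLOPS, "
      f"after swap: ~{after_swap:.0f} petaFLOPS")
# roughly 55 petaFLOPS today versus well over 100 petaFLOPS after the upgrade
```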

China is also hard at work on the first leg of its exascale research program, with the goal of creating an "advanced and feasible architecture" that meets a target of 30 GFLOPS per watt.

3. Japan has an exaFLOP supercomputer project targeting delivery by 2021.

Fujitsu and the Riken research center have been chosen to develop an exascale supercomputer, which at 1,000 petaFLOPS would be about 30 times faster than the leading supercomputer today. The Riken Advanced Institute for Computational Science did not specify a performance speed or other characteristics of the machine, which it is calling the FLAGSHIP 2020 Project. However, planning documents (9 pages) suggest using over 10 million CPU cores and reaching 1 exaFLOP. The machine is planned for April 2021.

Fujitsu Japan should deliver a 100 petaFLOP supercomputer in 2017.

4. The goal of the European project called Mont-Blanc has been to design a new type of computer architecture capable of setting future global HPC standards, built from the energy-efficient solutions used in embedded and mobile devices. The project aims to use the OmpSs parallel programming model to automatically exploit multiple cluster nodes, provide transparent application checkpointing for fault tolerance, support ARMv8 64-bit processors, and produce the initial design of the Mont-Blanc exascale architecture.

5. A startup company called Optalysys is trying to build a fully optical computer aimed at many of the same tasks for which GPUs are currently used. Optalysys claims it can create an optical solver supercomputer reaching an astonishing 17 exaFLOPS by 2020.

To date they have successfully built a proof-of-concept derivative processor to demonstrate the ability to compute a spectral derivative function using optical technology. This function forms the basis of spectral partial differential equation (PDE) solvers, such as those used in high-end computational fluid dynamics models.

The system produced two-dimensional derivative functions. Numerical data was represented as grey-level intensities on liquid crystal spatial light modulators (SLMs) and projected through the optical system using low-power laser light. The results were then converted back into digital form with a camera.
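In digital form, a spectral derivative is normally computed with a Fourier transform: differentiation in real space becomes multiplication by i times the wavenumber in frequency space, and an optical system can perform the Fourier transform step with a lens rather than with arithmetic. A minimal NumPy sketch of the 2D version, as our own illustration of the underlying math rather than Optalysys code:

```python
import numpy as np

def spectral_derivative_2d(f, dx, dy, axis=1):
    """Derivative of a periodic 2D field via FFT: d/dx <-> multiply by i*kx."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # wavenumbers along x (columns)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)   # wavenumbers along y (rows)
    F = np.fft.fft2(f)
    F *= 1j * (kx[None, :] if axis == 1 else ky[:, None])
    return np.real(np.fft.ifft2(F))

# Quick check on f(x, y) = sin(x) * cos(y), whose x-derivative is cos(x) * cos(y)
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
y = np.linspace(0, 2 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, y)
f = np.sin(X) * np.cos(Y)
dfdx = spectral_derivative_2d(f, dx=x[1] - x[0], dy=y[1] - y[0], axis=1)
print(np.max(np.abs(dfdx - np.cos(X) * np.cos(Y))))   # ~1e-13, machine precision
```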

A 340 gigaFLOPS proof-of-concept model is slated for launch in January 2015, sufficient to analyze large data sets and produce complex model simulations in a laboratory environment, according to the company.

Unlike current supercomputers, which still use what are essentially serial electronic processors, the Optalysys Optical Processor takes advantage of the properties of light to perform the same computations in parallel and at the speed of light.