A 25-fold improvement in power efficiency is needed for exaflop supercomputers with reasonable energy needs

An exascale system built with NVIDIA Kepler K20 co-processors would consume about 150 megawatts. That’s nearly 10 times the amount consumed by Tianhe-2, which is composed of 32,000 Intel Ivy Bridge sockets and 48,000 Xeon Phi boards.

Theoretically, an exascale system – 1,000 petaflops, dozens of times the computing capability of today’s fastest systems – could be built with only x86 processors, but it would require as much as 2 gigawatts of power.

That’s the entire output of the Hoover Dam.
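Those two estimates imply very different system-level efficiencies. The short Python sketch below is only a back-of-envelope check using the figures quoted above; the variable names are illustrative and the numbers are not additional data from Dally.

# Back-of-envelope check: efficiency implied by the two power estimates above.
exaflop = 1e18            # 1 exaflop = 10^18 floating point operations per second
k20_system_power = 150e6  # ~150 MW for a Kepler K20-based exascale design
x86_system_power = 2e9    # ~2 GW for an x86-only exascale design

print(exaflop / k20_system_power / 1e9, "GFLOPS per watt for the K20-based design")  # ~6.7
print(exaflop / x86_system_power / 1e9, "GFLOPS per watt for the x86-only design")   # ~0.5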

Instead, HPC system developers need to take an entirely new approach to get around the power crunch, said Bill Dally, NVIDIA’s chief scientist. Dally said reaching exascale will require a 25x improvement in energy efficiency: the roughly 2 gigaflops per watt that can be squeezed out of today’s systems needs to rise to about 50 gigaflops per watt in a future exascale system.
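To see where the 25x figure comes from, compare the power a one-exaflop machine would draw at today’s efficiency with what it would draw at the target. The Python lines below simply recompute Dally’s numbers; nothing here is new data.

# Power needed for one exaflop at today's efficiency vs. the exascale target.
exaflop = 1e18      # operations per second
today_eff = 2e9     # ~2 gigaflops per watt squeezed from today's systems
target_eff = 50e9   # ~50 gigaflops per watt needed for exascale

print(exaflop / today_eff / 1e6, "MW at today's ~2 GFLOPS/W")       # ~500 MW
print(exaflop / target_eff / 1e6, "MW at the 50 GFLOPS/W target")   # ~20 MW
print(target_eff / today_eff, "x efficiency improvement required")  # 25x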

Relying on Moore’s Law alone to get that 25x improvement will not work either. According to Dally, advances in manufacturing processes will deliver only about a 2.2x improvement in performance per watt, leaving an energy efficiency gap of roughly 12x that must be filled by other means.

Dally sees a combination of better circuit design and better processor architectures closing that gap. Done right, these advances could deliver 3x and 4x improvements in performance per watt, respectively.
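Multiplying the three contributions together shows how the pieces are meant to add up to the 25x goal. The arithmetic below just restates the breakdown from the figures Dally cites; the variable names are illustrative.

# How the 25x goal breaks down, using the figures cited above.
total_needed = 25.0
process = 2.2       # expected gain from manufacturing process advances
circuits = 3.0      # hoped-for gain from better circuit design
architecture = 4.0  # hoped-for gain from better processor architectures

print(total_needed / process)             # ~11.4x, the roughly 12x gap left after process scaling
print(process * circuits * architecture)  # ~26.4x combined, just above the 25x goal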

US does not have exaflop funding

The United States is not likely to be the first nation to break the exaflop barrier without significant increases in DOE funding. The DOE’s 27-petaflop Titan, based at Oak Ridge National Laboratory, is the fastest supercomputer in the US but sits second worldwide to China’s 55-petaflop Milky Way 2 (Tianhe-2).

The DOE’s stated goal has been to develop an exascale supercomputing system – one capable of a quintillion, or 1,000,000,000,000,000,000, floating point operations per second (FLOPS) – by 2020, but developing the technology to make good on that goal would take at least an additional $400 million in funding per year, said Rick Stevens, associate laboratory director at Argonne National Laboratory.

“At that funding level, we think it’s feasible, not guaranteed, but feasible, to deploy a system by 2020,” Stevens said, testifying before the House Science, Space and Technology Subcommittee on Energy on May 22. He also said that current funding levels wouldn’t allow the United States to reach exascale until around 2025.

China is rapidly stockpiling cash for its supercomputing efforts, and Japan has recently invested $1 billion in building an exascale supercomputer – both countries hope to field one by 2020 – while the European Union, Russia and a handful of large private-sector companies are in the mix as well.
