LSU’s Sterling noted that the Chinese are not simply buying their way into the upper echelons of supercomputing. The top-ranked Tianhe-1A machine is based on a home-grown system design, sporting a custom network interconnect and I/O processor developed from the silicon on up. “Unlike us, they have a long tradition of five-year plans, and they stayed the course,” said Sterling. “And they will do so here as well. They have no doubt, through the procedures and methods they’ve applied, that they will be the leader in this field before the end of this decade.” Indeed, China has stated it wants to be the first nation to field an exascale machine.
China rushing to build supercomputer centers
“Within a year, there will be more Top500 systems in China than there are in Europe collectively,” predicts David Turek, IBM VP of deep computing, referring to the list of the world’s 500 most powerful supercomputers, which is regularly updated by academic researchers in the U.S. and Europe. China has 41 systems on the most recent Top500 list. Europe has roughly three times as many, 126 in all (Germany 26, UK 25, France 26, Italy 6, Sweden 6, Spain 3, Poland 6, Norway 3).
Under construction: China’s massive new supercomputing center site in Shenzhen.
Whether new machines will be GPGPU-based is less certain. Other accelerators, like Intel’s Many Integrated Core (MIC) processor, won’t debut for another year or so. And AMD’s Fusion (CPU-GPU) processors are just now making their way into the client side of the ecosystem. Even NVIDIA’s roadmap for its discrete GPUs over the next couple of years will likely produce something unrecognizable as a graphics processor of the last decade.
It’s even possible that accelerators, GPUs or otherwise, will not figure prominently in the biggest machines. PGI’s Wolfe noted that the only two publicly announced 10 petaflop systems — the Power7-based Blue Waters in the US and the Sparc64 VIIIfx-based Kei Soku Keisanki (aka the “K computer”) in Japan — will rely solely on CPUs, albeit very high-end ones.
Wolfe thinks Intel’s MIC is a “fascinating architecture” with the potential to unseat NVIDIA’s current dominance in the HPC acceleration arena. And AMD (ATI) GPUs have a raw performance advantage, he says. That could be the decisive factor once a more level playing field for GPGPU middleware is in place, which is what AMD is banking on with the open-standard OpenCL API. That technology is even more attractive when seen against AMD’s CPU-GPU Fusion roadmap, which will eventually wind its way into the server side of the business.
Sterling is even more circumspect about the GPGPU’s longevity in HPC, at least in its current form. “With respect to the GPU, it’s the flavor of the month,” he said. “We’ve been here before with attached array processors.”
Like others, Sterling believes heterogeneous architectures will be the model of the future, but the accelerator componentry will eventually be integrated on-chip, a la Fusion. That will solve many of the latency and bandwidth issues that currently limit performance on PCIe-connected discrete GPUs. It will also simplify the programming model.
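The PCIe bottleneck Sterling alludes to can be illustrated with a back-of-envelope model: an offloaded kernel only pays off when the work done per byte shipped across the bus is high enough. The sketch below uses illustrative, assumed figures (8 GB/s for a PCIe link, 500 GFLOPS for the GPU); it is not a benchmark of any particular hardware.

```python
# Back-of-envelope model of why PCIe transfer overhead limits discrete GPUs.
# The bandwidth and throughput numbers are illustrative assumptions.

def offload_efficiency(bytes_moved, flops, pcie_gbs=8.0, gpu_gflops=500.0):
    """Fraction of total offload time spent computing, vs. moving data."""
    transfer_s = bytes_moved / (pcie_gbs * 1e9)   # host <-> device copy time
    compute_s = flops / (gpu_gflops * 1e9)        # kernel execution time
    return compute_s / (compute_s + transfer_s)

# Low arithmetic intensity (e.g., a vector add: ~1 flop per 12 bytes moved)
low = offload_efficiency(bytes_moved=12e9, flops=1e9)
# High arithmetic intensity (e.g., a large matrix multiply: ~1000 flops/12 bytes)
high = offload_efficiency(bytes_moved=12e9, flops=1e12)

print(f"low intensity:  {low:.1%} of time computing")   # ~0.1%
print(f"high intensity: {high:.1%} of time computing")  # ~57%
```

Integrating the accelerator on-die, as Fusion does, effectively removes the `transfer_s` term from this model, which is why low-intensity kernels benefit most.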
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.