The Department of Energy has commissioned two supercomputers. The first, codenamed “Summit”, will be installed at Oak Ridge National Laboratory in Tennessee. It is designed to peak at 150 to 300 petaFLOPS – that’s up to 300 quadrillion calculations per second, or about five times faster than the 54 petaFLOPS Tianhe-2.
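As a back-of-the-envelope check on those headline figures (using the peak numbers quoted above), a minimal Python sketch:

```python
# Back-of-the-envelope comparison of peak compute rates.
# 1 petaFLOPS = 10^15 floating-point operations per second.
summit_peak = 300e15   # Summit's upper design target, in FLOPS
tianhe2_peak = 54e15   # Tianhe-2 peak, in FLOPS

print(f"300 petaFLOPS = {summit_peak:.0e} calculations per second")
print(f"Speedup over Tianhe-2: {summit_peak / tianhe2_peak:.1f}x")
# 300 / 54 is roughly 5.6, i.e. "about five times faster"
```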
The other system, codenamed “Sierra”, is designed to peak at more than 100 petaFLOPS.
The systems will cost $325m to build. The DoE has set aside a further $100m to develop “extreme scale supercomputing” – in other words, an exaFLOPS machine.
The US’s fastest publicly known supercomputer is the Cray-built 27 petaFLOPS Titan at Oak Ridge, which is number two in the world rankings. Number three is the 20 petaFLOPS Sequoia at Lawrence Livermore National Laboratory.
This rendering shows a few of the cabinets that will ultimately comprise IBM’s Sierra supercomputer at Lawrence Livermore National Laboratory. Image credit: IBM
The DOE will spend about $100 million on a program called FastForward2 to make next-generation, massive-scale supercomputers 20 to 40 times faster than today’s high-end models, Energy Secretary Ernest Moniz was scheduled to announce Friday. It’s all part of a project called CORAL, named after the national labs involved: Oak Ridge, Argonne and Lawrence Livermore.
The systems incorporate relatively new computing technologies, including flash-memory storage, which is faster but more expensive than hard drives, and graphics processing unit (GPU) acceleration from Nvidia. Such accelerators aren’t as versatile as general-purpose central processing units, but they can solve particular types of math problems faster. That’s why accelerators from Nvidia, AMD and Intel have found a place in supercomputing systems.
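The “particular types of math problems” that suit GPUs are typically data-parallel ones, where every output element can be computed independently and thus simultaneously across thousands of cores. A pure-Python sketch of the pattern (the function here is illustrative, not from any vendor’s API):

```python
def saxpy(a, x, y):
    """SAXPY (a*x + y), a classic data-parallel kernel: each output
    element depends only on the inputs at the same index, so a GPU
    can compute all elements at once rather than one at a time."""
    return [a * xi + yi for xi, yi in zip(x, y)]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

A CPU runs this loop element by element; an accelerator spreads the independent iterations across its many cores, which is where the speedup comes from.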
“This is a huge endorsement for the Tesla GPU accelerator platform,” said Sumit Gupta, general manager of Nvidia’s Tesla accelerated computing business. “To be able to build up these large systems, you need the energy efficiency that GPU accelerators provide.”
Data-Centric Computing – Bringing processing to the data for more speed and less energy usage
Working with IBM, NVIDIA developed the advanced NVIDIA NVLink interconnect technology, which will enable CPUs and GPUs to exchange data five to 12 times faster than they can today. NVIDIA NVLink will be integrated into IBM POWER CPUs and next-generation NVIDIA GPUs based on the NVIDIA Volta architecture.
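For a sense of scale, take PCIe 3.0 x16 at roughly 16 GB/s as today’s CPU–GPU link (my assumed baseline; the announcement does not state one) and apply the claimed 5–12x range:

```python
pcie3_x16_gbs = 16   # approx. PCIe 3.0 x16 bandwidth in GB/s (assumed baseline)
low, high = 5, 12    # NVLink speedup range claimed in the announcement

print(f"Implied NVLink bandwidth: "
      f"{low * pcie3_x16_gbs}-{high * pcie3_x16_gbs} GB/s")
# i.e. on the order of 80-192 GB/s under this assumption
```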
When Oak Ridge and Lawrence Livermore national laboratories purchased new supercomputers from IBM, it marked an early step in the shift from processor-centric to data-centric computing. This shift is made necessary by the emergence of big data and big data applications in both private industry and government research.
The needs of businesses and society are changing rapidly, so the computer industry must respond with a new approach to computer design—which we at IBM call data-centric computing. In the future, much of the processing will move to where the data resides, whether that’s within a single computer, in a network or out on the cloud. Microprocessors will still be vitally important, but their work will be divided up.
This shift is necessary because of the explosion of big data. Every day, society generates an estimated 2.5 billion gigabytes of data—everything from corporate ledgers to individual health records to personal Tweets.
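That daily figure is easier to grasp in larger units. A quick conversion, using decimal (SI) prefixes:

```python
daily_gb = 2.5e9                     # 2.5 billion gigabytes per day
daily_bytes = daily_gb * 1e9         # 1 GB = 10^9 bytes (SI)
daily_exabytes = daily_bytes / 1e18  # 1 EB = 10^18 bytes

print(f"{daily_exabytes} exabytes per day")  # 2.5 exabytes per day
```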
Because of the fundamental architecture of computing, data has to be moved repeatedly from where it’s stored to the microprocessor. That consumes a lot of time and energy. And now, with the emergence of the big data phenomenon, it’s no longer sustainable. That’s why we need to turn computing inside out—moving processing to the data. Over time, the shift will have huge consequences for everybody—from the managers of high-end data centers to kids playing games on their smartphones.
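The idea can be sketched in a few lines of Python: instead of shipping an entire dataset across the interconnect to the processor, ship a small function to where the data lives and move only the result back. The `StorageNode` class and its method names are hypothetical, purely for illustration:

```python
class StorageNode:
    """Hypothetical storage node that can also run a computation locally."""

    def __init__(self, records):
        self.records = records

    def pull_all(self):
        # Processor-centric: move every record across the interconnect
        # so a distant CPU can process them.
        return list(self.records)

    def run_local(self, func):
        # Data-centric: move the (tiny) function to the data and
        # move only the result back.
        return func(self.records)

node = StorageNode(range(1_000_000))
total = node.run_local(sum)  # one number crosses the wire,
print(total)                 # instead of a million records
```

The saving in this toy case is obvious: `run_local` transfers a single integer where `pull_all` would transfer a million records, which is the time-and-energy argument the paragraph above makes.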
When Summit and Sierra are delivered starting in 2017, they are expected to achieve five to 10 times the processing performance of current supercomputers. But raw computation is only part of the story. Just as important, a series of system and software innovations will enable the computers to efficiently handle a wider array of analytics and big data applications.
IBM’s Blue Gene supercomputers made great leaps forward in energy efficiency. Summit and Sierra represent great advances in data efficiency along with significant improvements in energy efficiency.
SOURCES – Register UK, IBM, smarterplanet, YouTube
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.