From Petaflop to Exaflop


The NCCS High Performance Storage System (HPSS) data archive allows users to store vast amounts of data for long-term retention. HPSS currently consists of tape and disk storage components, Linux servers, and HPSS software. As of October 2008, the system stored over 3.5 petabytes of data. Tape storage is provided by robotic tape libraries, each of which can hold up to 30,000 cartridges, each capable of storing up to a terabyte of data.
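As a rough illustration of those figures, the short sketch below works out a single library's maximum capacity; the cartridge count and per-cartridge capacity are the numbers quoted above, and everything else is simple arithmetic.

    # Back-of-the-envelope capacity of one robotic tape library,
    # using only the figures quoted above (illustrative, not a specification).
    cartridges_per_library = 30_000      # up to 30,000 cartridges per library
    tb_per_cartridge = 1                 # up to 1 terabyte per cartridge

    library_capacity_tb = cartridges_per_library * tb_per_cartridge
    library_capacity_pb = library_capacity_tb / 1_000   # decimal petabytes

    print(f"One full library: about {library_capacity_pb:.0f} PB")
    print(f"Archive as of Oct 2008: 3.5 PB, "
          f"{3.5 / library_capacity_pb:.0%} of one library's maximum")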

Oak Ridge National Laboratory (ORNL) is collaborating with the Office of Science and the Department of Defense in the High Productivity Computing Systems (HPCS) program. Cray and IBM have been selected to work on building machines capable of around 20 petaflops. ORNL will work closely with both companies to help them understand the strengths and weaknesses of their designs and the computational needs of large-scale scientific applications. By 2011-2012, the Office of Science plans to install a 20-petaflop machine from the vendor whose design is selected. Looking even further into the future, the goal is to install a 100- to 250-petaflop machine in the 2015 time frame and an exaflop machine by 2018.
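Reading a rough growth rate out of that roadmap helps put it in perspective. The sketch below takes Jaguar's roughly 1 petaflop in 2008 as a starting point (an assumption drawn from this article's context, not a figure in the roadmap itself) and computes the year-over-year growth implied by each milestone.

    # Implied year-over-year growth between the roadmap milestones above.
    # The 2008 starting point (~1 petaflop, Jaguar) is an assumption from context.
    milestones = [
        (2008, 1),       # Jaguar: roughly 1 petaflop
        (2012, 20),      # planned 20-petaflop machine (2011-2012)
        (2015, 100),     # lower end of the 100-250 petaflop goal
        (2018, 1000),    # exaflop = 1,000 petaflops
    ]

    for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
        factor = (p1 / p0) ** (1 / (y1 - y0))
        print(f"{y0}-{y1}: roughly {factor:.1f}x per year")

The pace this roadmap implies is roughly a doubling of peak performance every year.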


EVEREST (Exploratory Visualization Environment for REsearch in Science and Technology) is a large-scale venue for data exploration and analysis

Jaguar Petaflop Machine Getting 10 Petabytes of Storage

One of the other major features supporting the new Jaguar system is a Lustre-based file system called Spider. “Jaguar also has an enormous amount of I/O bandwidth—all that memory is useless if you can’t load data into the computer and get results back out,” Bland adds. “Jaguar has a disk bandwidth of 288 gigabytes per second—larger than any other supercomputer, but very balanced for a system of this size.”
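To put that bandwidth figure in perspective, the sketch below estimates how long a bulk write would take at the quoted 288 gigabytes per second. The data sizes are hypothetical examples, not actual Jaguar memory or checkpoint figures.

    # Rough time to stream data to disk at Jaguar's quoted 288 GB/s aggregate
    # bandwidth (illustrative; the data sizes are hypothetical examples).
    disk_bandwidth_gb_s = 288

    def write_time_minutes(data_tb):
        """Minutes to write data_tb terabytes at full aggregate bandwidth."""
        return data_tb * 1_000 / disk_bandwidth_gb_s / 60

    for data_tb in (10, 100, 300):
        print(f"{data_tb:4d} TB -> about {write_time_minutes(data_tb):.1f} minutes")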

Lustre, originally developed by Cluster File Systems and now maintained by Sun Microsystems, is an open-source, scalable, secure, robust, highly available cluster file system. Spider will provide the Jaguar system with 10 petabytes of storage space.


A magnetohydrodynamic simulation performed with the GenASiS code explores the effects of supernova shock-wave instability on the magnetic fields in stellar cores during core-collapse supernova explosions. Shown is a volumetric representation of magnetic field strength (the semitransparent part) as well as a sampling of magnetic field strength and orientation at selected nodes (the vectors). The simulation helps researchers understand the effect of magnetic fields on the evolution of the 3D shock front.


FURTHER READING
The possibility of achieving an exaflop computer by 2015 was examined earlier, as was the prospect of a SETI@home-style collection of distributed computers reaching an exaflop of compute power in 2010-2011.

Folding@home is the largest of the distributed computing networks

The fastest, Folding@home, reported over 8.5 petaflops of processing power as of May 2009. Of this, 2.5 petaflops are contributed by clients running on PlayStation 3 systems and another 5.3 petaflops by the project's newly released GPU2 client.

Another is the BOINC platform, which hosts a number of distributed computing projects. As of February 2009, BOINC recorded a processing power of over 1.7 petaflops through more than 530,000 active computers on the network. One such project, SETI@home, reported processing power of over 508 teraflops through almost 317,000 active computers.
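For a sense of scale, the sketch below divides the aggregate throughput reported above by the reported host counts to get a rough average contribution per machine. The inputs are the figures quoted in this section; the per-host averages are derived estimates, not published project statistics.

    # Average per-host throughput implied by the figures quoted above
    # (rough derived estimates, not published project statistics).
    networks = {
        "Folding@home (May 2009)": (8.5e15, None),     # host count not quoted here
        "BOINC (Feb 2009)":        (1.7e15, 530_000),
        "SETI@home (Feb 2009)":    (508e12, 317_000),
    }

    for name, (flops, hosts) in networks.items():
        if hosts is None:
            print(f"{name}: {flops / 1e15:.1f} petaflops total")
        else:
            per_host_gflops = flops / hosts / 1e9
            print(f"{name}: {flops / 1e15:.2f} petaflops over {hosts:,} hosts, "
                  f"about {per_host_gflops:.1f} gigaflops per host")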