IBM has promised the DOE's National Nuclear Security Administration a 20-petaflop supercomputer that is scheduled for delivery in 2011. The supercomputer will be called Sequoia. The computer should be ten times more energy-efficient per calculation than current supercomputers.
Can’t Afford IBM Supercomputers? Consider DIY Supercomputers
Can’t afford IBM supercomputers? Consider following the example of Bruce Allen, who is the king of Beowulf open-source cluster supercomputers.
The most complicated thing about building a cluster is the networking, and the trickiest part of that is automating configuration of the boxes. When he started out on the 48-node cluster in 1998, Allen did each operation by hand on each server. “You quickly discover if it takes you five minutes per computer to do something and you have to do it 48 times, an entire morning or afternoon goes by, and what’s more, you make mistakes,” he says.
“So the name of the game is setting up automated systems to do things, like automated systems for installing operating systems and cloning machines and so forth. But there’s lots of public domain tools out there for doing that.”
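The workflow Allen describes, replacing a five-minute manual step on each of 48 boxes with a scripted one, can be sketched as a loop that issues the same command to every node. A minimal Python sketch, assuming hypothetical hostnames and passwordless SSH; the tooling here is illustrative, not what Allen actually used:

```python
import subprocess

def build_commands(hosts, task):
    """Build one ssh invocation per node instead of typing the task by hand."""
    return [["ssh", host, task] for host in hosts]

def run_on_cluster(hosts, task, dry_run=True):
    """Run the same task on every node; dry_run only prints what would run."""
    cmds = build_commands(hosts, task)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return cmds

# 48 hypothetical node names: node01 .. node48
hosts = [f"node{i:02d}" for i in range(1, 49)]
commands = run_on_cluster(hosts, "apt-get -y upgrade")
```

In practice this is what dedicated cluster tools automate, including the OS-install and machine-cloning steps Allen mentions, but the core idea is the same: one definition of the task, applied identically everywhere.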
The 1998 supercomputer was a Linux cluster built from 48 bargain, discontinued DEC Alpha servers, each with a single 300-MHz, 64-bit AXP processor, for $70,000. His most recent supercomputer, a cluster of 1,680 machines with four cores each, is in Hanover, Germany. Essentially, it is a 6,720-core machine that, in the months after it was built, was ranked No. 58 in the world.
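The core count quoted above is simple arithmetic on the node count, worth a quick check:

```python
nodes = 1680
cores_per_node = 4
total_cores = nodes * cores_per_node
print(total_cores)  # 6720, matching the figure in the article
```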
Specifics on the Processors and Networking for the 20 Petaflop Machine
IBM will use a custom-built version of its gadget-focused Power processor, aimed specifically at supercomputer applications. Each chip will carry 18 processing cores, and the system as a whole will use 1.6 million cores.
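The article does not state the chip count, but dividing the two quoted totals gives a rough figure. This assumes all 18 cores on each chip count toward the 1.6 million, which may not be exactly how IBM tallies it:

```python
total_cores = 1_600_000
cores_per_chip = 18
approx_chips = total_cores // cores_per_chip
print(approx_chips)  # roughly 88,888 chips, an order-of-magnitude estimate
```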
Each chip will have its own built-in networking hardware and memory, reducing data bottlenecks between chips processing in parallel. And those processors will be arranged in a three-dimensional torus shape (two interlocking donuts) to bring each chip as close as possible to every other chip in the configuration.
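The payoff of the torus wraparound shows up in hop counts: links that wrap each axis back on itself make far-apart corners of the grid into near neighbors. A sketch, assuming a cubic grid with one hop per link; the dimensions here are illustrative, not Sequoia's actual layout:

```python
def torus_hops(a, b, dims):
    """Hops between coordinates a and b on a 3D torus: each axis can wrap."""
    hops = 0
    for ai, bi, n in zip(a, b, dims):
        d = abs(ai - bi)
        hops += min(d, n - d)  # the wraparound link may be the shorter way
    return hops

def mesh_hops(a, b):
    """Same grid without wraparound links: only straight-line routes."""
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

dims = (8, 8, 8)
a, b = (0, 0, 0), (7, 7, 7)
print(mesh_hops(a, b))         # 21 hops corner to corner on a plain mesh
print(torus_hops(a, b, dims))  # 3 hops once each axis wraps around
```

Cutting the worst-case distance this way is exactly the "as close as possible" effect the article describes.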
Just as important as those physical tweaks for improving processors’ cooperation is the networking software that ties them together, says IBM’s vice president for Deep Computing, Ron Favali. Like older Blue Gene supercomputers, Sequoia will use five networks that split up and route data to optimize its ability to share processing throughout the system.
“Types of communication aren’t homogeneous, and if you organize them properly you see dramatic speed-ups in communication,” Favali says. “It’s like we’re setting up one network for trailer trucks, another for pleasure drivers, another for people driving to and from work. You design your roads differently for what they require.”
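Favali's road analogy amounts to a dispatch table: each class of traffic gets a network built for it. A toy sketch in Python; the traffic classes and network names below are made up for illustration, since the article does not enumerate Sequoia's five networks:

```python
# Hypothetical traffic classes mapped to dedicated networks,
# in the spirit of "different roads for different vehicles".
NETWORKS = {
    "bulk_transfer": "torus",   # large point-to-point payloads
    "collective": "tree",       # reductions and broadcasts
    "barrier": "barrier",       # global synchronization signals
    "io": "io",                 # file-system traffic
    "control": "service",       # management and monitoring
}

def route(traffic_class):
    """Pick the network dedicated to this kind of communication."""
    return NETWORKS[traffic_class]

print(route("collective"))  # tree
```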
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.