El Capitan 1.5 Exaflop Supercomputer Will Arrive Late in 2022

The Department of Energy (DOE), National Nuclear Security Administration (NNSA) and Lawrence Livermore National Laboratory (LLNL) today announced the signing of contracts with Cray Inc. to build the NNSA’s first exascale supercomputer, “El Capitan.” El Capitan will have a peak performance of more than 1.5 exaflops (1.5 quintillion calculations per second) and an anticipated delivery in late 2022. The total contract award is valued at $600 million.

Featuring advanced capabilities for modeling, simulation and artificial intelligence (AI), and based on Cray’s new Shasta architecture, El Capitan is projected to run national nuclear security applications at more than 50 times the speed of LLNL’s Sequoia system. Depending on the application, El Capitan will run roughly 10 times faster on average than LLNL’s Sierra system, currently the world’s second most powerful supercomputer with a peak performance of 125 petaflops. Projected to be at least four times more energy efficient than Sierra, El Capitan is expected to go into production by late 2023, serving the needs of NNSA’s Tri-Laboratory community: Lawrence Livermore National Laboratory, Los Alamos National Laboratory and Sandia National Laboratories.
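
For context on how the quoted application speedups relate to raw hardware numbers, the peak-FLOPS ratios can be worked out in a few lines of Python. This is only a back-of-the-envelope sketch: the El Capitan and Sierra peaks come from the figures above, while the Sequoia peak of roughly 20 petaflops is an assumption not stated in this article.

```python
# Rough peak-FLOPS comparison for the systems named in the article.
# The El Capitan and Sierra peaks are from the article; the Sequoia
# peak (~20 petaflops) is an outside assumption, not stated here.

PETA = 1e15
EXA = 1e18

el_capitan_peak = 1.5 * EXA    # projected peak (article)
sierra_peak = 125 * PETA       # peak (article)
sequoia_peak = 20 * PETA       # approximate peak (assumption)

print(f"El Capitan / Sierra peak ratio:  ~{el_capitan_peak / sierra_peak:.0f}x "
      "(article cites ~10x on applications)")
print(f"El Capitan / Sequoia peak ratio: ~{el_capitan_peak / sequoia_peak:.0f}x "
      "(article cites >50x on applications)")
```

The peak ratios (~12x Sierra, ~75x Sequoia) come out larger than the quoted application speedups, which is unsurprising: real workloads rarely scale with peak FLOPS alone.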

El Capitan will be DOE’s third exascale-class supercomputer, following Argonne National Laboratory’s “Aurora” and Oak Ridge National Laboratory’s “Frontier” systems. All three DOE exascale supercomputers will be built by Cray utilizing its Shasta architecture, Slingshot interconnect and new software platform.

“The Department of Energy is the world leader in supercomputing and El Capitan is a critical addition to our next-generation systems,” said U.S. Energy Secretary Rick Perry. “El Capitan’s advanced capabilities for modeling, simulation and artificial intelligence will help push America’s competitive edge in energy and national security, allow us to ask tougher questions, solve greater challenges and develop better solutions for generations to come.”

Exascale performance will be delivered by a heterogeneous Central Processing Unit (CPU)/Graphics Processing Unit (GPU) architecture. This architecture will allow researchers to run exploratory 3D simulations at resolutions that are currently unobtainable, and ensembles of 3D calculations at resolutions that are difficult, time-consuming or even impossible using today’s state-of-the-art supercomputers. 3D simulations are becoming essential to meet the unprecedented demands of the NNSA Life Extension Programs (LEPs) and to address nuclear weapon aging issues for which researchers have no nuclear test data.

El Capitan will be built on Cray’s Shasta supercomputing architecture and will be composed of Shasta compute nodes and a future generation of ClusterStor storage, connected by Cray’s new Slingshot high-speed interconnect. The Shasta hardware and software architecture can accommodate a variety of processors and accelerators, making it possible for Cray and LLNL to work together in the coming months to finalize the decision on which CPU and GPU components will be used at the node level to maximize performance for the enormous projected workloads. The platform will also utilize Cray’s new system and analytics software stack, which will deliver the scalability and flexibility needed for exascale computing and enable the converged use of modeling, simulation and AI in support of the Lab’s research missions.

SOURCES- Lawrence Livermore National Laboratory
Written By Alvin Wang, Nextbigfuture.com

10 thoughts on “El Capitan 1.5 Exaflop Supercomputer Will Arrive Late in 2022”

  1. Almost certainly the human brain runs at no more than 1 exaflop. Maybe that’s why they called it El Capitan.
    It is the first computing entity that is surely more powerful than a human brain.

  2. Thank you. This dates me a bit, but I recall reading a book from around 1965 that stated man would never build a machine with the capacity of the human brain. It looked at what it would take to cool the vacuum tubes and replace faulty ones. So we are ALREADY at a level of complexity comparable to the basic neuronal wiring of the brain, right now? I suppose the software has a way to go.

  3. As you may know, I think the human brain (or any brain) is more than just a calculation rate. The sliding scale of synapse potentials in their vast matrix evolved over millions of years. And even if we could simulate ‘just’ the brain, it would be completely detached/deranged/insane without the entire nervous system to help it achieve the maturity of integrated systems that we see in a 2-year-old.

    What we will have for a very long time is many specialized AIs that are very domain-specific. They operate within clearly defined bounds. Yet they will outpace us like an endless stream of unique idiot savants.

    And if a general artificial intelligence is somehow achieved… the vast majority will reject its views as politically incorrect. I have this creeping suspicion that an AGI will not be morally sensitive.

  4. Hans Moravec estimated it at ~100 teraflops by extrapolating from a comparison of the retina (which is basically a far-flung extension of the brain) to machine vision systems. But the machine performance target he used (1 megapixel @ 10 Hz) was preposterously underpowered compared to real human vision.

    A more biologically appropriate target like 10 megapixels @ 1000 Hz (1000 Hz for typical ~1 ms nerve reaction times vs. 10 Hz for the dubiously applicable ~100 ms cone photopigment reaction time) gives ~100 petaflops / ~0.1 exaflops. Furthermore, if one assumes that each action potential in the brain represents ~1 flop, one arrives at a very similar figure. So ~0.1 exaflops seems like at least a reasonable guess (the arithmetic is sketched below the comments).

  5. So we are at the exaflop stage now. We are in human brain territory. Anyone know how many exaflops our brain runs at?

  6. It’s all because of the terrible Stockpile Stewardship program in the USA. Old nuclear weapons, not sure they work. So instead of testing them in underground tests, they spend billions on dubious supercomputer simulations (using data from laser fusion experiments, costing more billions). So inefficient compared to TESTING the nukes.

  7. Well, nuclear power from fission is simple enough to model on a good workstation. Technically, all you need is a slide rule.

    Fusion is a whole other story.

  8. I agree; I bet you’d get a much better return on your investment simulating things besides nukes. Weather and climate are good, but AI would be even better.

  9. Computing power dedicated to weather simulation and climate change: almost nothing. Computing power dedicated to NUKES: nearly unlimited.

    Got some big brains working in the White House, gigantic beautiful genius big brains.
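
The scaling in comment 4 can be made explicit with a minimal Python sketch. Every number below is the commenter’s assumption (a ~100-teraflop baseline for 1 megapixel @ 10 Hz, scaled to 10 megapixels @ 1000 Hz), not a measured value.

```python
# Moravec-style brain estimate, scaled per comment 4 above.
# All figures are the commenter's assumptions, not measurements.

TERA = 1e12

baseline_flops = 100 * TERA    # ~100 teraflops for 1 megapixel @ 10 Hz
pixel_factor = 10e6 / 1e6      # 10 megapixels vs. 1 megapixel
rate_factor = 1000 / 10        # 1000 Hz vs. 10 Hz

brain_estimate = baseline_flops * pixel_factor * rate_factor
print(f"Scaled estimate: {brain_estimate:.0e} flops")  # ~1e17 flops = ~0.1 exaflops
```

On those assumptions, El Capitan’s projected 1.5-exaflop peak would sit roughly an order of magnitude above the brain estimate.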
