Cray is building its new Shasta supercomputers. The first, the $146 million "Perlmutter" NERSC-9 system, will be installed at Lawrence Berkeley National Laboratory in late 2020. It will use AMD Epyc processors and next-generation "Einstein" Nvidia Tesla GPU accelerators.
Cray has built a new interconnect, called “Slingshot”.
The current Cori NERSC-8 machine at LBNL is a Cray XC40 system with 14 cabinets, using the company's "Aries" interconnect. Cori delivers 32.3 peak petaflops from its Xeon and Xeon Phi nodes. It cost $75 million, or about $2,322 per teraflops.
Cray and NERSC are aiming for the Shasta supercomputer to have 3 to 4 times the processing power of Cori, which works out to roughly 97 to 130 petaflops.
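The cost and scaling figures above can be sanity-checked with a few lines of arithmetic. This sketch uses only the numbers quoted in the article:

```python
# Back-of-the-envelope check of the article's figures.

cori_peak_pflops = 32.3      # Cori (NERSC-8) peak performance, petaflops
cori_cost_usd = 75_000_000   # Cori contract value

# Cost per teraflops: $75M / 32,300 TF
cost_per_tflops = cori_cost_usd / (cori_peak_pflops * 1000)
print(f"Cori cost per teraflops: ${cost_per_tflops:,.0f}")  # ~$2,322

# Shasta/Perlmutter target: 3x to 4x Cori's processing power
low, high = 3 * cori_peak_pflops, 4 * cori_peak_pflops
print(f"Perlmutter estimate: {low:.0f} to {high:.0f} petaflops")  # ~97 to 129
```

Note that 4 × 32.3 is 129.2 petaflops, which the article rounds up to 130.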
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Processors are fast enough for exascale; memory and interconnects are lagging.
It should be noted that all of these supercomputers are only useful for problems that can be partitioned into really, really little pieces.
You know it's gonna be awesome when you name your super after soda pop.
That's equal to a human – correct?