Computer manufacturer Atos has named its first customer for Bull sequana, a supercomputer design it hopes will reach exaflop levels of performance by 2020.
Atos is building the computer for the French Alternative Energies and Atomic Energy Commission (CEA), it said Tuesday.
It’s an ambitious target for Atos, as it will mean a nearly thousand-fold increase in performance compared to the last machine it built for the CEA, the 1.25-Pflop Tera 100 completed in 2010. That machine, fast for its day, now languishes in 74th place on the Top500 list.
Atos also promises that Bull sequana will be 10 times more energy efficient than today’s machines.
The six-year-old Tera 100 is a laggard when it comes to energy efficiency, at just 0.23 Gflops/watt. Average energy efficiency across the November 2015 Top500 list was 1.45 Gflops/watt. At that rate, an exaflop machine would draw around 690 MW, around a third of the output of the Hoover Dam.
The most efficient machines on the list, many of them delivered within the last year, performed two or three times better, with a clutch of new entrants from China delivering between 3.77 and 4.78 Gflops/watt.
Modern supercomputers are thus already more than ten times as efficient as Tera 100, but Atos aims to beat the average supercomputer efficiency by a further factor of ten. That target, roughly 14.5 Gflops/watt, is about three times better than today's most efficient machines.
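The efficiency figures above can be sanity-checked with some back-of-the-envelope arithmetic. This sketch uses only the numbers quoted in the article; the helper function name is illustrative.

```python
EXAFLOP = 1e18  # 1 Eflop/s expressed in flop/s

def power_draw_mw(gflops_per_watt):
    """Power in MW needed to sustain 1 Eflop/s at a given efficiency."""
    watts = EXAFLOP / (gflops_per_watt * 1e9)
    return watts / 1e6

print(f"At Tera 100 efficiency (0.23 Gflops/W):   {power_draw_mw(0.23):.0f} MW")
print(f"At Top500 average (1.45 Gflops/W):        {power_draw_mw(1.45):.0f} MW")  # ~690 MW
print(f"At 2015's best (4.78 Gflops/W):           {power_draw_mw(4.78):.0f} MW")
```

Even at the efficiency of 2015's best machines, an exaflop system would draw over 200 MW, which is why Atos is targeting another threefold improvement on top of that.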
Graphics card manufacturer Nvidia has developed a new processor architecture it calls Pascal to speed up scientific calculations. Last week the Swiss National Supercomputing Center said it will use 4,500 of Nvidia’s latest processors to triple the performance of its Piz Daint supercomputer by year end. The machine already runs at 7.8 Pflops.
In sequana the computing resources are grouped into cells. Each cell tightly integrates compute nodes, interconnect switches, redundant power supply units, redundant liquid cooling heat exchangers, distributed management and diskless support.
Large building blocks to facilitate scaling
This packaging into large building blocks facilitates large-scale deployment of up to tens of thousands of nodes by optimizing density, scalability and cost-effectiveness.
Each sequana cell is organized across three cabinets: two cabinets contain the compute nodes and the central cabinet houses the interconnect switches.
Each compute cabinet houses 48 horizontal compute blades, with the associated power modules at the top of the cabinet and the redundant hydraulic modules for cooling at the bottom of the cabinet.
24 blades are mounted on the front side of the cabinet, while the 24 other blades are mounted on the rear side.
Each cell can therefore contain up to 96 compute blades, i.e. 288 compute nodes, equipped either with conventional processors (such as Intel® Xeon® processors) or accelerators (e.g. Intel® Xeon Phi™ or NVIDIA® GPUs).
In each 1U blade, a cold plate with active liquid flow cools all hot components by direct contact – the sequana compute blades contain no fan.
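The cell arithmetic described above can be sketched briefly. The figures come from the article; the assumption of three compute nodes per blade follows from the quoted totals (96 blades yielding 288 nodes), and the even-fill scaling to the BXI node limit is illustrative.

```python
import math

BLADES_PER_CABINET = 48        # per compute cabinet
COMPUTE_CABINETS_PER_CELL = 2  # the third cabinet holds interconnect switches
NODES_PER_BLADE = 3            # implied by 96 blades -> 288 nodes

blades_per_cell = BLADES_PER_CABINET * COMPUTE_CABINETS_PER_CELL  # 96
nodes_per_cell = blades_per_cell * NODES_PER_BLADE                # 288

# Cells needed to reach BXI's 64k-node ceiling (assuming fully populated cells)
cells_for_64k_nodes = math.ceil(64 * 1024 / nodes_per_cell)

print(blades_per_cell, nodes_per_cell, cells_for_64k_nodes)
```

At full population, a few hundred cells would saturate the interconnect's node limit, consistent with the "tens of thousands of nodes" scale claimed for the design.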
Bull eXascale Interconnect (BXI)
The core feature of BXI is a full hardware-encoded communication management system, which enables compute processors to be fully dedicated to computational tasks while communications are independently managed by BXI. This interconnect offers:
- sustained performance under the most demanding workloads;
- revolutionary hardware acceleration;
- support for massive parallelism – up to 64k nodes and up to 16 million threads;
- support for exascale programming models and languages.
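The BXI scale limits quoted above imply a per-node thread budget, assuming "64k" and "16 million" are the binary values 2^16 and 2^24 (a plausible reading for hardware limits, but an assumption):

```python
MAX_NODES = 2**16    # "64k" nodes
MAX_THREADS = 2**24  # "16 million" threads

threads_per_node = MAX_THREADS // MAX_NODES
print(threads_per_node)  # 256 hardware-addressable threads per node
```

A budget of 256 threads per node fits the accelerator-heavy node options mentioned earlier, such as Xeon Phi's many-core designs.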
SOURCES: Atos, Bull, Computerworld, YouTube