IBM is developing energy-efficient run-time thermal control strategies and cooling mechanisms to compress almost one trillion nano-sized functional units into one cubic centimeter, with 10 to 100 times higher connectivity than otherwise possible. This could enable compact mobile supercomputers with the power of today's room-sized systems in a desktop package in the 2017-2025 timeframe.
“With the long-term vision of a zero-emission data center, we may eventually achieve a million-fold reduction in the size of SuperMUC, so that it can be reduced to the size of a desktop computer with a much higher efficiency than today,” said Dr. Bruno Michel, manager, Advanced Thermal Packaging, IBM Research.
SuperMUC combines hot-water cooling, which removes heat 4,000 times more efficiently than air, with 18,000 energy-efficient Intel Xeon processors. In addition to aiding scientific discovery, the integration of hot-water cooling with IBM's application-oriented, dynamic systems management software allows energy to be captured and reused to heat buildings during the winter on the sprawling Leibniz Supercomputing Centre campus, for savings of one million euros ($1.25 million USD) per year.
The continued miniaturization and the increased density of components in today’s electronics have pushed heat generation and power dissipation to unprecedented levels. Current thermal management solutions, usually involving remote cooling, are unable to limit the temperature rise of today’s complex electronic components. Such remote cooling solutions, where heat must be conducted away from components before rejection to the air, add considerable weight and volume to electronic systems. The result is complex military systems that continue to grow in size and weight due to the inefficiencies of existing thermal management hardware.
Recent advances in the DARPA Thermal Management Technologies (TMT) program enable a paradigm shift in thermal management. DARPA's Intrachip/Interchip Enhanced Cooling (ICECool) program seeks to crack the thermal management barrier and overcome the limitations of remote cooling. ICECool will explore 'embedded' thermal management by bringing microfluidic cooling inside the substrate, chip, or package, and by including thermal management in the earliest stages of electronics design.
The new LRZ “SuperMUC” system was built with IBM System x iDataPlex Direct Water Cooled dx360 M4 servers with more than 150,000 cores, providing a peak performance of up to three petaflops, equivalent to the work of more than 110,000 personal computers. Put another way, three billion people would each have to perform one million operations per second on a pocket calculator to match SuperMUC's performance. In addition, a revolutionary new form of hot-water cooling technology invented by IBM allows the system to be built 10 times more compactly, with substantially improved peak performance, while consuming 40 percent less energy than a comparable air-cooled machine.
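The two equivalences quoted above are simple back-of-the-envelope arithmetic, which can be checked directly. A minimal sketch, using only the figures stated in the article (the implied per-PC rate is derived from them, not a measured value):

```python
# Sanity-check SuperMUC's quoted performance comparisons.
# All input figures come from the article; nothing here is measured.

peak_flops = 3e15  # three petaflops peak performance

# "Three billion people ... one million operations per second each"
people = 3e9
ops_per_person_per_s = 1e6
assert people * ops_per_person_per_s == peak_flops  # 3e9 * 1e6 = 3e15

# "Equivalent to the work of more than 110,000 personal computers"
# implies an assumed sustained rate per PC of roughly:
per_pc_flops = peak_flops / 110_000
print(f"implied per-PC rate: {per_pc_flops / 1e9:.1f} GFLOPS")
# -> about 27.3 GFLOPS, a plausible figure for a 2012-era desktop CPU
```

The same pattern works for any such equivalence claim: divide the machine's aggregate rate by the per-unit rate to get the unit count, or vice versa.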
“This year, state-funded institutions across Germany are required to purchase 100% sustainable energy for all the electricity they consume,” said Prof. Dr. Arndt Bode, Chairman of the Board, Leibniz Supercomputing Centre. “SuperMUC will help us keep our commitment, while giving the scientific community a best-in-class system to test theories, design experiments and predict outcomes as never before.”
Five-dimensional scaling: How density improves efficiency in future computers
We address integration density in future computers based on packaging and architectural concepts of the human brain: a dense 3-D architecture for interconnects, fluid cooling, and power delivery via energetic chemical compounds transported in the same fluid, with little power needed for pumping. Several efforts have demonstrated that vertical integration improves memory proximity and bandwidth through efficient communication with low-complexity 2-D arrays. However, power delivery and cooling do not allow integration of multiple layers with dense logic elements. Interlayer-cooled 3-D chip stacks solve the cooling bottlenecks, thereby allowing stacking of several such stacks, but are still limited by power delivery and communication. Electrochemical power delivery eliminates the electrical power supply network, freeing valuable space for communication, and allows scaling of chip stacks to larger systems beyond exascale device count and performance. We find that historical efficiency trends are related to density and that current transistors are small enough for zettascale systems once communication and supply networks are simultaneously optimized. We infer that biological efficiencies for information processing can be reached by 2060 with ultracompact space-filled systems that make use of brain-inspired packaging and allometric scaling laws.