HPE's 160 terabyte RAM computer is the beginning of a planned transformation of computing

HPE announced that it has created the largest single-memory computing system the world has ever seen, capable of holding 160 terabytes of data. In 2014, HPE introduced The Machine research project, the largest and most complex research project in the company's history, designed to deliver the world's first Memory-Driven Computing architecture. HPE believes this new computer will enable critical leaps in performance, allowing insights to be extracted from data like never before.

In 2016, HPE delivered the first prototype, and in just six months it has scaled that prototype 20-fold.

HPE has invested billions of dollars (nearly all of its research budget) and has essentially bet the company on memristor technology and on creating a new memory-focused computing paradigm.

In 2016, the SK Hynix partnership faded; the effort shifted to ReRAM rather than memristors, and memristors were put on the back burner.

The memory-focused concepts currently use DIMMs and other near-term memory technologies.

The system is the latest development from The Machine research project, HPE's quest to invent the world's first Memory-Driven Computing architecture: a completely new way of storing and processing data.

HPE believes it can build a Memory-Driven Computing system with up to 4,096 yottabytes of data, more than 250,000 times the size of the entire digital universe today.

As of late 2016, HPE’s plans for this technology were:

2016: ProLiant servers with persistent memory for applications to use, combining DRAM and flash.
2016 – 2017: Improved DRAM-based persistent memory.
2018 – 2019: True non-volatile memory (NVM) for software to use as slow-but-copious RAM.
2020 and onwards: NVM technology used across multiple product categories.

The ideal memory for Memory-Driven Computing combines the best features of today's memory and storage technologies: the speed of DRAM, which holds "working" data in computers today, with the permanence (non-volatility) and low cost of flash and hard drives. An increasingly popular term for this approach is "storage class memory."

HPE is in the process of bringing byte-addressable NVM to market. Using technologies from The Machine research project, the company developed HPE Persistent Memory—a step on the path to byte-addressable non-volatile memory, which offers the performance of DRAM and battery-based persistence in a traditional DIMM form factor. The company launched HPE Persistent Memory in the HPE ProLiant DL360 and DL380 Gen9 servers.
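To make byte-addressability concrete, here is a minimal sketch of how an application can treat such a persistent-memory DIMM as ordinary memory, assuming the operating system exposes it as a memory-mappable DAX-style file. The path /mnt/pmem/journal is hypothetical, and msync() stands in for the cache-flush primitives a production persistent-memory library would normally use:

/* Minimal sketch: byte-addressable access to persistent memory.
   Assumes the NVDIMM is exposed by the OS as a memory-mappable file;
   the path below is hypothetical. Only standard POSIX calls are used. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t len = 4096;
    int fd = open("/mnt/pmem/journal", O_RDWR);   /* hypothetical pmem-backed file */
    if (fd < 0) { perror("open"); return 1; }

    /* Map the persistent region directly into the address space. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary stores update the persistent medium byte by byte;
       there is no block I/O path and no separate "save to disk" step. */
    strcpy(pmem, "record 42: committed");

    /* Make the update durable before reporting success. */
    msync(pmem, len, MS_SYNC);

    munmap(pmem, len);
    close(fd);
    return 0;
}

The point of the sketch is the programming model: data that survives power loss is reached with loads and stores at memory speed, rather than through a storage stack.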

Fabric: In this context, “fabric” refers to the communication vehicle that facilitates data transfer between the different elements of the computer system.

Today, each computer component is connected using a different type of interconnect: memory via DDR, hard drives via SATA, flash drives and graphics processing units via PCIe, and so on.

Memory-Driven Computing takes a very different approach: every component is connected using the same high-performance interconnect protocol. This is a much simpler and more flexible way to build a computer. One key reason it is faster is that data is accessed one byte at a time using the same simple commands used to access memory: just "load" and "store." This eliminates the need to move large blocks of data around and is much more efficient.
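The difference between block transfers and load/store semantics can be sketched in a few lines of C. This is only an illustration on a conventional operating system, not the actual fabric protocol; the file dataset.bin and the offset are made up:

/* Illustration (hypothetical file and offset): with block storage,
   fetching one 8-byte value means transferring a whole 4 KB block;
   with memory semantics, the same value is reached by a single load. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BLOCK_SIZE   4096
#define VALUE_OFFSET 128    /* byte offset of one 8-byte value (8-byte aligned) */

int main(void) {
    int fd = open("dataset.bin", O_RDONLY);   /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    /* Block-style access: transfer an entire 4 KB block just to get 8 bytes. */
    unsigned char block[BLOCK_SIZE];
    uint64_t via_block = 0;
    if (pread(fd, block, BLOCK_SIZE, 0) == BLOCK_SIZE)
        memcpy(&via_block, block + VALUE_OFFSET, sizeof via_block);

    /* Memory semantics: map the data and fetch the value with one load. */
    uint8_t *pool = mmap(NULL, BLOCK_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    uint64_t via_load = 0;
    if (pool != MAP_FAILED) {
        via_load = *(const uint64_t *)(pool + VALUE_OFFSET);  /* a single "load" */
        munmap(pool, BLOCK_SIZE);
    }

    printf("via block I/O: %llu, via load: %llu\n",
           (unsigned long long)via_block, (unsigned long long)via_load);
    close(fd);
    return 0;
}

On a memory-semantic fabric, every component in the system can be addressed this second way, which is why the block-shuffling overhead disappears.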

The fabric is what ties physical packages of memory together to form the vast pool of memory at the heart of Memory-Driven Computing. The results of HPE's fabric research are working in the prototype announced in November. HPE is not the only company in the industry that sees the need for a fast fabric: it is contributing these findings to an industry-led consortium called Gen-Z, which is tasked with developing an industry standard for this kind of technology. Now on to photonics.

Photonics: Current computer systems use electrons traveling in copper wire to transmit data between compute components. The problem is that there is only so much data you can force down a copper wire: as much as 99 percent of the energy is lost as heat, and the cables are bulky.

The fix is to use light instead, a technology called photonics. Using microscopic lasers, hundreds of times more data can be funneled down an optical fiber ribbon using much less energy. The fibers are also tiny, making physical installation easier. In the latest prototype, HPE has already replaced the communications between computing boards (called nodes) with photonic interconnects. This approach communicates larger amounts of data faster, using less power and space.