Storing information in DRAM instead of on hard disks could vastly speed up computing, according to Stanford researchers. John Ousterhout’s proposed RAMCloud is based on dynamic random access memory (DRAM). In personal computers, data fetched from a disk or flash drive is temporarily staged in DRAM, which gives a running program very fast access; the data is held as an electrical charge on a capacitor. In a data center, fetching bits from DRAM and sending them over the center’s internal network should be 100 to 1,000 times faster than retrieving them from disk.
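The scale of that gap is easy to see with back-of-envelope arithmetic. The sketch below uses illustrative latency figures (roughly 10 ms per random disk access, roughly 10 microseconds for DRAM reached over a datacenter network); these are assumed round numbers for the sake of the calculation, not measurements from the article.

```python
# Illustrative, assumed latencies (not measurements):
DISK_ACCESS_S = 10e-3      # ~10 ms per random disk access
DRAM_NET_ACCESS_S = 10e-6  # ~10 us to fetch a small object from DRAM over the network

# Suppose an application needs one million small random lookups.
accesses = 1_000_000

disk_time = accesses * DISK_ACCESS_S
dram_time = accesses * DRAM_NET_ACCESS_S

print(f"disk:    {disk_time:.0f} s")                  # → 10000 s (almost 3 hours)
print(f"dram:    {dram_time:.1f} s")                  # → 10.0 s
print(f"speedup: {disk_time / dram_time:.0f}x")       # → 1000x
```

With these assumed numbers, a workload that would take hours against disk finishes in seconds against networked DRAM, which is the thousandfold change Ousterhout refers to below.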
“You’ll be able to build new kinds of applications that just weren’t possible before,” says Ousterhout. “Can you ever think of a time in the history of technology that improving speed by a thousandfold happened and nothing changed?”
“HP is planning to put a replacement chip on the market to go up against flash within a year and a half,” said Williams, “and we also intend to have an SSD replacement available in a year and a half. In 2014 possibly, or certainly by 2015, we will have a competitor for DRAM, and then we’ll replace SRAM.” HP believes it can achieve a two-order-of-magnitude improvement in switching energy per bit.
This project asks the question “Are we nearing the end of the line for disk-based storage systems and, if so, what is next?”
The problem: large-scale Web applications are finding it increasingly difficult to scale disk-based systems to get the performance they need. In addition, disk technology evolution has favored capacity over access speed, which means that disks today can store enormous amounts of information but cannot provide frequent access to it.
The solution: one possible solution is to shift the home of online data from disk to DRAM. RAMCloud is a new kind of datacenter storage system, where all information lives at all times in fast DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. RAMCloud is interesting because it combines large scale (100-1000 TBytes) with 100-1000x lower latency than current disk-based systems (5-10 microseconds to access small amounts of RAMCloud data from application servers in the same datacenter). In addition to simplifying the creation of large-scale Web applications, we believe that RAMCloud will enable a new breed of data-intensive applications.
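One way to picture "aggregating the main memories of thousands of commodity servers" is a client-side view in which each key is hashed to the server whose DRAM holds it. The sketch below is purely hypothetical and is not the real RAMCloud API or its data model; each "server" is stood in for by a local dictionary, and the class and method names are invented for illustration.

```python
import hashlib

class RamCloudSketch:
    """Hypothetical sketch: one logical store backed by many servers' DRAM.

    Each entry in self.servers is a plain dict standing in for one
    machine's main memory; total capacity is the sum across servers.
    """

    def __init__(self, num_servers):
        self.servers = [dict() for _ in range(num_servers)]

    def _server_for(self, key):
        # Hash the key to deterministically pick the owning server.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.servers[h % len(self.servers)]

    def write(self, key, value):
        self._server_for(key)[key] = value

    def read(self, key):
        return self._server_for(key)[key]

# A cluster of 1000 servers with ~100 GB of DRAM each would give roughly
# 100 TB of aggregate storage, matching the 100-1000 TByte scale above.
cloud = RamCloudSketch(num_servers=1000)
cloud.write("user:42", b"profile data")
assert cloud.read("user:42") == b"profile data"
```

The design point the sketch illustrates is that capacity and throughput grow by adding servers, while every object remains a single fast memory lookup away; the real system must additionally solve the durability and recovery problems discussed below.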
The project: we are creating a production-quality implementation of RAMCloud that we can release in open source form. Along the way we will have to solve numerous research issues, such as how to ensure the durability of RAMCloud data, how to achieve the very low latency we believe is possible, and how to manage a system of this size and complexity.
* Fast Recovery in RAMCloud: a draft of a paper on RAMCloud’s mechanism for recovering crashed servers in 1-2 seconds. A revised version of this paper will appear in the ACM Symposium on Operating Systems Principles in October 2011.
* The Case for RAMCloud: a position paper that discusses the motivation for RAMCloud, the new kinds of applications it may enable, and some of the research issues that will have to be addressed to create a working system. This paper appeared in Communications of the ACM in July 2011.
An earlier, slightly longer version appeared in Operating Systems Review in December 2009.
* The RAMCloud Wiki: used by project members to share design documents, miscellaneous notes, and links to related materials.