Beyond Big Data is Big Memory Computing for 100X Speed

MemVerge™, the inventor of Memory Machine™ software, today introduced what’s next for in-memory computing: Big Memory Computing. This new category is sparking a revolution in data center architecture where all applications will run in memory. Until now, in-memory computing has been restricted to a select range of workloads due to the limited capacity and volatility of DRAM and the lack of software for high availability. Big Memory Computing is the combination of DRAM, persistent memory and Memory Machine software technologies, where the memory is abundant, persistent and highly available.

Transparent Memory Service

* Scale out to Big Memory configurations.
* Up to 100x more capacity than today's DRAM-only memory.
* No application changes required (a sketch of the launch model follows this list).
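The article does not describe the mechanism, but memory-virtualization software of this kind is typically slipped underneath an unmodified process, for example by preloading a library that intercepts allocations so they land in a pooled DRAM-plus-PMEM tier. The wrapper below is only a minimal sketch of that idea under those assumptions; the shim path and the BIGMEM_* environment variables are hypothetical placeholders, not MemVerge's actual interface.

```python
# Minimal sketch of launching an unmodified application under a transparent
# memory service. The shim library path and BIGMEM_* variables are hypothetical
# placeholders, not MemVerge's real interface.
import os
import subprocess

def run_with_big_memory(cmd, dram_quota_gb=64):
    """Run `cmd` unchanged, with memory allocations intercepted by a preloaded shim."""
    env = dict(os.environ)
    env["LD_PRELOAD"] = "/usr/lib/libbigmem_shim.so"   # hypothetical interception library
    env["BIGMEM_DRAM_QUOTA_GB"] = str(dram_quota_gb)   # hot (DRAM) tier budget; the rest spills to PMEM
    return subprocess.run(cmd, env=env, check=True)

if __name__ == "__main__":
    # The application itself is untouched: same binary, same flags.
    run_with_big_memory(["redis-server", "--port", "6380"])
```

The point of the sketch is only that the application binary and its flags stay exactly as they were; all tiering policy lives outside the process.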

Big Memory Machine Learning and AI

* Model and feature libraries today are often split between DRAM and SSD because DRAM capacity is insufficient, which slows performance.
* MemVerge Memory Machine pools the DRAM and persistent memory (PMEM) capacity of the cluster, allowing the model and feature libraries to reside entirely in memory.
* Transactions per second (TPS) can increase 4x, while inference latency can improve by up to 100x (an illustrative sketch follows this list).
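As a rough illustration of why placement matters, the toy below contrasts the two access paths the bullets describe: feature vectors gathered from an SSD-backed memory map versus the same table held entirely in RAM. It is not a MemVerge benchmark; on a small file the OS page cache hides most of the gap, and the 4x/100x figures above are the article's claims, not something this sketch reproduces.

```python
# Toy comparison of an SSD-backed, memory-mapped feature table vs. the same
# table held entirely in memory. Illustrative only; not a MemVerge benchmark.
import time
import numpy as np

N_ROWS, DIM = 200_000, 128          # ~100 MB of float32 features

# "Feature library" persisted on disk and memory-mapped: cold pages fault in from the SSD.
features_disk = np.memmap("features.bin", dtype=np.float32, mode="w+", shape=(N_ROWS, DIM))
features_disk[:] = np.random.rand(N_ROWS, DIM).astype(np.float32)
features_disk.flush()

# Same library held entirely in memory (DRAM, or a pooled DRAM+PMEM tier).
features_mem = np.array(features_disk)

def lookup(table, ids):
    """Gather feature vectors for a batch of ids, as an inference server would per request."""
    return table[ids]

ids = np.random.randint(0, N_ROWS, size=4096)

for name, table in [("memory-mapped (SSD-backed)", features_disk),
                    ("all in memory", features_mem)]:
    t0 = time.perf_counter()
    lookup(table, ids)
    print(f"{name:28s} {(time.perf_counter() - t0) * 1e3:.2f} ms per batch")
```

On real workloads the penalty comes from page faults against the SSD when the working set does not fit in DRAM; pooling DRAM with PMEM removes that I/O path from the lookup.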

MemVerge was founded on the vision that every application should run in memory. The advent of Intel® Optane™ persistent memory makes it possible for applications of any size to forgo traditional storage in favor of petabyte-size pools of shared persistent memory. Designed for these big memory lakes, MemVerge Memory Machine software provides powerful data services such as ZeroIO™ Snapshot and memory replication, addressing application data persistence directly in memory. Compatible with existing and future applications, MemVerge technology will revolutionize data center architecture and make data-centric workloads such as artificial intelligence (AI), machine learning (ML), trading and financial market data analytics and high-performance computing (HPC) easier to develop and deploy. Enterprises can now train and infer from AI/ML models faster, work with larger data sets in memory, complete more queries in less time and consistently replicate memory between servers.
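To make "data persistence directly in memory" concrete, here is a minimal sketch of the app-direct pattern Optane enables: a file on a DAX-mounted persistent-memory filesystem is mapped into the address space and updated with ordinary loads and stores, with no read()/write() storage path. The mount point below is an assumption, and production code would typically go through PMDK, or through Memory Machine's own data services such as ZeroIO Snapshot, rather than raw mmap.

```python
# Minimal sketch of app-direct persistent memory: map a file on a DAX-mounted
# pmem filesystem and persist a counter with plain memory stores.
# The /mnt/pmem0 path is an assumption; real code would typically use PMDK.
import mmap
import os
import struct

PMEM_PATH = "/mnt/pmem0/counter.dat"    # hypothetical DAX mount of a pmem namespace
SIZE = 4096

fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE, mmap.MAP_SHARED) as pm:
    (count,) = struct.unpack_from("<Q", pm, 0)      # load the persisted value
    struct.pack_into("<Q", pm, 0, count + 1)        # update it in place, in memory
    pm.flush()                                      # push the dirty range to persistent media
    print(f"counter value is now {count + 1}")

os.close(fd)
```

Because the data lives in the memory namespace itself, the next run reads the counter back without touching block storage; the article's point is that Memory Machine layers services such as snapshots and replication on top of this kind of persistence.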

Big Memory is Backed by Industry Leaders

MemVerge is announcing $19 million in funding from new investors including lead investor Intel Capital as well as Cisco Investments, NetApp and SK hynix, with additional participation from existing investors Gaorong Capital, Glory Ventures, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners and Northern Light Venture Capital. The investment will be used to advance the development of MemVerge’s Memory Machine software and build out the company’s sales and marketing functions.

SOURCES – MemVerge, Intel
Written by Brian Wang, Nextbigfuture.com

Comments

  1. In the real world, a couple of hundred gigabytes of memory is more than enough for most business applications. Few businesses have the computing needs of Google or Facebook.

  2. Having more RAM is necessary for improving drugs and time to market. Supercomputers eliminate trial and error; simulating complex molecules costs little compared to devising them and then finding out whether they are feasible. For COVID-19 alone, the supercomputer Summit was able to find about 80 proteins that can bind to the virus in a couple of days, work that would have taken months on less powerful supercomputers. Manufacturing isn't the issue with these molecules: scientists build feasibility constraints into their calculations, so they aren't going to search for a drug molecule that requires a million atmospheres of pressure.

  3. Here bud, I think I went on a bit of a tangent. Just read this; it shows that DRAM is faster than Optane. The real point I was trying to get across is how Optane can either complement RAM and storage by acting as an intermediary, or replace SSDs altogether. It would be worth replacing SSDs with Optane; the only downside is that Optane costs more, although that is repaid over time because it doesn't degrade.
    https://phoenixnap.com/kb/optane-memory-vs-ssd-vs-ram

  4. Optane is limited to Intel, although they usually sell patents to other companies for a pretty penny. I do agree it is expensive, but it has the potential to replace both storage and RAM, so it is worth the price, IMO.

  5. Virtual RAM has been around forever. And while Optane was supposed to be 1000x faster than flash, it isn't nearly that; it is more durable. Optane isn't new either; it was released in April 2017.
    It certainly is not better than actual RAM. Even the slowest server RAM will be far superior, and you can put a lot of RAM on some of the new server boards; the new EPYC servers can handle up to 4.1 TB of RAM. What AI problem is going to require that? You might be able to come up with drugs with very complex molecules, but good luck actually manufacturing them.

  6. I’m not so sure. It looks like a very expensive proposition. Optane memory (if that is what this is about) is expensive, limited to Intel, and not yet scalable to the levels mentioned here.
    If the killer application is AI inference and training, distributed massively parallel computing is more cost-effective, at least for training. Inference seems to be dominated by FPGAs rather than general-purpose computers.
    There has been good progress with distributed algorithms over the last two years.
