Ultra Low Energy Computer Memory for AI Learning in Medical and Other Applications

A CEA-Leti-led team has used machine learning on RRAM to cut energy usage by a factor of 100,000. Instead of trying to suppress the devices' inherent randomness, they exploit it, which allows in-situ learning to be realized highly efficiently by applying nanosecond voltage pulses to nanoscale memory devices. Compared with a CMOS implementation of the same algorithm, the approach requires five orders of magnitude less energy, roughly the ratio of the height of the world's tallest building to that of a coin lying on the ground.
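To make the idea concrete, here is a minimal sketch, in Python, of the randomness being exploited: each programming pulse leaves a memristor at a slightly different conductance. The log-normal noise model and its sigma are illustrative assumptions, not measured device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rram_write_read(target_g, sigma=0.3):
    """One simulated SET pulse: the device never lands exactly on the
    programmed conductance. Cycle-to-cycle variability is modelled here
    as log-normal noise around the target; sigma is an assumed
    illustrative value, not a measured device parameter."""
    return target_g * rng.lognormal(mean=0.0, sigma=sigma)

# Writing the same level twice yields two different conductances:
# the "free" randomness the scheme exploits as a sampling source.
print(rram_write_read(1.0), rram_write_read(1.0))
```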

RRAM has previously been applied to in-memory implementations of backpropagation to achieve in-situ learning on edge systems. However, because backpropagation requires high-precision memory elements, that work has largely focused on mitigating RRAM's randomness, often with energy-intensive techniques.
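The alternative the team pursues is sampling rather than gradient descent. The sketch below shows, under assumed toy settings, how a Metropolis-Hastings step can use noisy weight re-writes as its proposal distribution, so that device variability becomes the source of randomness instead of something to suppress. The logistic-regression model and noise level are illustrative stand-ins, not the paper's exact network.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(w, x, y):
    """Log-posterior of a tiny logistic-regression 'neuron':
    Bernoulli log-likelihood plus a standard-normal prior."""
    logits = x @ w
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    return log_lik - 0.5 * np.sum(w ** 2)

def metropolis_step(w, x, y, write_noise=0.1):
    """One Metropolis-Hastings step in which the proposal is just a
    re-write of the weights: the (simulated) write noise plays the
    role of the proposal distribution, so a real RRAM array would
    need no separate random-number generator."""
    w_prop = w + write_noise * rng.standard_normal(w.shape)  # noisy write
    log_alpha = log_posterior(w_prop, x, y) - log_posterior(w, x, y)
    return w_prop if np.log(rng.random()) < log_alpha else w

# Toy data: 50 points, 3 features, labels from a random 'true' weight.
x = rng.standard_normal((50, 3))
y = (x @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.standard_normal(50) > 0).astype(float)
w = np.zeros(3)
for _ in range(2000):
    w = metropolis_step(w, x, y)
print("final posterior sample:", w)
```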

This ultra-energy-efficient non-volatile memory could bring AI learning to edge-computing systems, something that is impossible with existing commercial approaches. For example, a learning memory could be embedded in an implanted medical device that locally updates its operation as a patient's condition evolves. As a representative test of learning at the edge in such an environment, the team experimentally applied RRAM-based Markov chain Monte Carlo (MCMC) sampling to train a multilayer Bayesian neural network to detect heart arrhythmias from electrocardiogram (ECG) recordings, reporting a better detection rate than a standard neural network running on a von Neumann computing system.
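Here is a hedged sketch of the prediction side of such a Bayesian model: once MCMC has collected weight samples, classification averages over all of them rather than trusting a single trained network. The random feature vector and pre-made samples below are hypothetical stand-ins for real ECG features and a real chain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins: in the paper the inputs are ECG recordings
# and the model is a multilayer Bayesian network; here a random
# 8-dimensional feature vector and a bag of sampled weight vectors
# (as an MCMC chain would retain after burn-in) keep the sketch short.
x_new = rng.standard_normal(8)
weight_samples = [rng.standard_normal(8) for _ in range(200)]

def posterior_predictive(x, samples):
    """Bayesian prediction: average the sigmoid output over every
    sampled weight vector instead of trusting one point estimate.
    The spread of the per-sample outputs also quantifies uncertainty,
    which a conventionally trained network does not provide."""
    probs = [1.0 / (1.0 + np.exp(-(x @ w))) for w in samples]
    return float(np.mean(probs)), float(np.std(probs))

p, spread = posterior_predictive(x_new, weight_samples)
print(f"p(arrhythmia) ~ {p:.2f} +/- {spread:.2f}")
```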

The system could serve as the foundation for the design and fabrication of a standalone, fully integrated RRAM-based MCMC sampling chip for applications outside the laboratory. That achievement would finally open the door to edge learning and an entirely new set of applications.

Last Week's AI Breakthrough

Last week, in a separate AI breakthrough, a brain model of 4 million neurons and 24 billion synapses was simulated on a desktop PC with a single GPU. Smart memory and more efficient brain modeling will both accelerate progress in AI.

Dr James Knight and Prof Thomas Nowotny of the University of Sussex's School of Engineering and Informatics used the latest graphics processing units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size. The work makes large-scale brain simulations accessible to researchers with modest budgets.
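Their reported trick is to regenerate connectivity procedurally rather than storing it. Below is a minimal sketch of that idea, with assumed parameter names and a fixed-probability wiring rule for illustration (this is not the actual GeNN API).

```python
import numpy as np

def outgoing_targets(pre_id, n_post=1000, p_connect=0.1, base_seed=42):
    """Procedural-connectivity sketch: rather than storing billions of
    synapses, re-derive a neuron's outgoing connections on demand from
    a deterministic per-neuron seed. The fixed-probability rule and
    parameter names are illustrative assumptions."""
    rng = np.random.default_rng(base_seed + pre_id)
    return np.flatnonzero(rng.random(n_post) < p_connect)

# The same neuron always regenerates the same synapses, so no
# connectivity matrix ever has to fit in GPU memory.
assert np.array_equal(outgoing_targets(7), outgoing_targets(7))
print(len(outgoing_targets(7)), "synapses regenerated for neuron 7")
```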

The Ultra-Energy Efficient Memory Research Paper

Nature Electronics – In situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling

Abstract
Resistive memory technologies could be used to create intelligent systems that learn locally at the edge. However, current approaches typically use learning algorithms that cannot be reconciled with the intrinsic non-idealities of resistive memory, particularly cycle-to-cycle variability. Here, we report a machine learning scheme that exploits memristor variability to implement Markov chain Monte Carlo sampling in a fabricated array of 16,384 devices configured as a Bayesian machine learning model. We apply the approach experimentally to carry out malignant tissue recognition and heart arrhythmia detection tasks, and, using a calibrated simulator, address the cartpole reinforcement learning task. Our approach demonstrates robustness to device degradation at ten million endurance cycles, and, based on circuit and system-level simulations, the total energy required to train the models is estimated to be on the order of microjoules, which is notably lower than in complementary metal–oxide–semiconductor (CMOS)-based approaches.
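As a back-of-envelope consistency check, taking the abstract's "order of microjoules" as roughly 1 microjoule (an assumption) and applying the article's five-orders-of-magnitude factor implies a CMOS training cost near a tenth of a joule:

```python
# Back-of-envelope check of the claimed gap: the abstract puts RRAM
# training energy "on the order of microjoules"; taking 1 microjoule
# (an assumption) and the article's factor of 10**5 gives the implied
# CMOS cost.
rram_energy_j = 1e-6                  # ~1 microjoule, from the abstract
cmos_energy_j = rram_energy_j * 1e5   # five orders of magnitude more
print(f"implied CMOS training energy: ~{cmos_energy_j:.1f} J")
```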