Breakthroughs in Neuromorphic Computing Could Speed Up Computers and AI by Ten Times

Researchers at Sandia National Laboratories, working with Stanford University and the University of Massachusetts Amherst, have made recent breakthroughs in neuromorphic computing and the broader fields of organic electronics and solid-state electrochemistry.

The new system performs parallel programming of an ionic floating-gate memory array, which allows large amounts of information to be processed simultaneously in a single operation. The design is inspired by the human brain, where neurons and synapses are connected in a dense matrix and information is processed and stored at the same location.
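To make the "process and store in the same location" idea concrete, here is a minimal NumPy sketch, not taken from the paper, of how a crossbar memory array computes a full matrix-vector product in one analog step: voltages applied to the rows flow through each cell's stored conductance (the weight), and the currents summed on each column are the outputs. The array sizes and values are hypothetical.

```python
import numpy as np

# Illustrative crossbar model: conductances G (the stored synaptic
# weights) live in the array itself; applying row voltages v yields
# column currents i = G.T @ v via Ohm's and Kirchhoff's laws.
# All names and sizes here are made up for illustration.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 rows x 3 columns of cells
v = np.array([0.2, 0.5, 0.1, 0.9])       # input voltages on the rows

# In hardware this sum happens physically, in one parallel operation;
# in software it is the loop hidden inside the matrix product.
i_out = G.T @ v
print(i_out)
```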

Sandia researchers demonstrated the ability to adjust the strength of the synaptic connections in the array in parallel. This will allow computers to learn and process information where it is sensed, rather than transferring it to the cloud for computing, greatly improving speed and efficiency and reducing power consumption.
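One way to picture adjusting all the synaptic strengths in parallel is an outer-product (rank-1) update, where every weight in the array is nudged at once from an input signal and an error signal. The sketch below layers a simple delta-rule update on the crossbar model above; the learning rate, signals, and function name are hypothetical, not the paper's scheme.

```python
import numpy as np

def parallel_weight_update(G, v_in, error, lr=0.01):
    """Nudge every conductance in the array at once.

    In an ionic floating-gate array a rank-1 (outer-product) update
    like this could be applied in a single programming operation;
    here it is only simulated with NumPy. Values are illustrative.
    """
    G += lr * np.outer(v_in, error)
    np.clip(G, 0.0, 1.0, out=G)   # conductances stay in a physical range
    return G

# Hypothetical usage: learn at the sensor, no round trip to the cloud.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))
v_in = rng.uniform(0.0, 1.0, size=4)     # sensed input
target = np.array([0.0, 1.0, 0.0])       # desired column outputs
error = target - G.T @ v_in              # readout error
G = parallel_weight_update(G, v_in, error)
```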

Through machine learning, mainstream digital applications can now recognize and understand complex patterns in data. For example, popular virtual assistants, such as Amazon.com Inc.’s Alexa or Apple Inc.’s Siri, sort through large streams of data to understand voice commands and improve over time.

With the dramatic expansion of machine learning algorithms in recent years, applications now demand ever larger amounts of data storage and power to complete these difficult tasks. Traditional digital computing architectures were not designed or optimized for the artificial neural networks at the heart of machine learning.

Conventional semiconductor fabrication technology has reached its physical limits: chips simply cannot be shrunk further to meet the demand for higher energy efficiency.

With conventional computer chips, information is stored in memory with high precision but has to be shuttled through a bus to a processor to execute tasks, causing delays and excess energy consumption.
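A rough way to see why this shuttling matters is to count data movement: in the conventional scheme every weight crosses the memory bus for each operation, while in-memory computing moves only the inputs and outputs. The back-of-envelope sketch below parameterizes that comparison; the byte counts are free parameters for illustration, not measurements from this work.

```python
def data_moved_bytes(rows, cols, bytes_per_weight=4, bytes_per_io=4):
    """Back-of-envelope bytes crossing the bus per matrix-vector product.

    Conventional: every weight is fetched from memory each time.
    In-memory:    weights stay put; only inputs and outputs move.
    The per-item byte counts are illustrative placeholders.
    """
    conventional = rows * cols * bytes_per_weight + (rows + cols) * bytes_per_io
    in_memory = (rows + cols) * bytes_per_io
    return conventional, in_memory

conv, imem = data_moved_bytes(rows=1024, cols=1024)
print(f"conventional: {conv} B, in-memory: {imem} B, ratio: {conv / imem:.0f}x")
```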

“With the ability to update all of the data in a task simultaneously in a single operation, our work offers unmistakable performance and power advantages,” said Sandia researcher Elliot Fuller. “This is projected to improve machine learning while using a fraction of the power of a standard processor and 10 times higher speed than the best digital computers.”

This work demonstrates the fast speed, high endurance, and low voltage critical for low-energy computing.

This becomes especially important in current and future applications such as driverless cars, wearable devices and automated assistant technology. As society increasingly relies on these applications for health and safety functions, improved accuracy and speed in computing without relying on cloud computing becomes critical.

This technology introduces a novel redox-transistor approach into conventional silicon processing. The redox transistor, a device that functions like a tiny rechargeable battery, relies on polymers that store information using ions, not just the electrons used in conventional silicon-based computers.
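Given the "tiny rechargeable battery" description, a simple mental model is: each programming pulse injects or removes a packet of ions, shifting the stored charge, and the channel conductance tracks that charge roughly linearly. The class below is a toy model under that assumption, with made-up parameters, not the device physics from the paper.

```python
class RedoxTransistorSynapse:
    """Toy model: stored ionic charge sets the channel conductance.

    Assumes linear, symmetric updates (each equal pulse of ions moves
    the conductance by an equal step), which is the behavior analog
    learning wants from a synapse. All parameters are illustrative.
    """

    def __init__(self, g_min=0.0, g_max=1.0, steps=500):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / steps   # conductance change per pulse
        self.g = g_min

    def pulse(self, n=1):
        """Apply n programming pulses; negative n removes charge."""
        self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))

    def read(self, v_read=0.1):
        """Non-destructive read: small voltage, current = g * v."""
        return self.g * v_read

syn = RedoxTransistorSynapse()
syn.pulse(50)      # "charge the battery" a little
print(syn.read())  # the weight is read without disturbing the state
```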

Future Sandia research will focus on understanding the fundamental mechanisms that govern how redox transistor devices operate, with the goal of making them more reliable, faster, and easier to combine with digital electronics. Researchers are also interested in demonstrating larger, more complex circuits based on this technology.

Science – Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing

Abstract
Neuromorphic computers could overcome efficiency bottlenecks inherent to conventional computing through parallel programming and readout of artificial neural network weights in a crossbar memory array. However, selective and linear weight updates and read currents of less than 10 nanoamperes are required for learning that surpasses conventional computing efficiency. We introduce an ionic floating-gate memory (IFG) array based upon a polymer redox transistor connected to a conductive-bridge memory (CBM). Selective and linear programming of a transistor array is executed in parallel by overcoming the bridging voltage threshold of the CBMs. Synaptic weight readout with currents of less than 10 nanoamperes is achieved by diluting the conductive polymer in an insulating channel to decrease the conductance. The redox transistors endure over a billion ‘read-write’ operations and support ‘read-write’ frequencies above 1 megahertz.
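The abstract's key trick, selective parallel programming by "overcoming the bridging voltage threshold of the CBMs", resembles a classic half-select scheme: a cell programs only when the voltage across it (row minus column) exceeds the CBM threshold, so a whole pattern of cells can be written in one step while half-selected cells stay untouched. Below is a hedged NumPy sketch of that selection logic; the threshold, voltages, and function name are invented for illustration, not the paper's circuit values.

```python
import numpy as np

V_THRESH = 1.0   # illustrative CBM bridging threshold (volts)

def selective_program(G, v_rows, v_cols, delta=0.05):
    """Update only cells whose cell voltage exceeds the CBM threshold.

    v_rows, v_cols: programming voltages applied to rows and columns.
    Cells seeing v_row - v_col > V_THRESH have their CBM "bridge" and
    receive a conductance update; half-selected cells are untouched.
    A toy illustration of the selection idea, not the actual circuit.
    """
    cell_v = v_rows[:, None] - v_cols[None, :]
    selected = cell_v > V_THRESH
    G[selected] += delta
    return G, selected

# Hypothetical usage: program only the (row 0, col 1) cell, in parallel.
G = np.zeros((2, 2))
v_rows = np.array([0.75, 0.0])    # selected row gets a +V/2-style bias
v_cols = np.array([0.0, -0.75])   # selected column gets a -V/2-style bias
G, mask = selective_program(G, v_rows, v_cols)
print(mask)   # only the fully selected cell crosses the threshold
```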

SOURCES- Sandia Labs, Journal Science
Written By Brian Wang, Nextbigfuture.com
