Researchers at the Stanford School of Engineering have delivered a nanoelectronic synapse that might drive a new class of microchips that can learn, adapt and make probability-based decisions in complex environments. Their work might one day lead to real-time brain simulators that enhance our understanding of neuroscience.
It takes today’s state-of-the-art supercomputer eight-and-a-half minutes to simulate just five seconds of normal human brain activity. Meanwhile, that supercomputer consumes 140,000 times as much electricity as the brain – 1.4 million watts to 10 watts, to be exact – to do the work. For sheer processing power and efficiency, nothing quite compares to the human brain.
This is a follow-up to June 2011 coverage of the Nano Letters paper. The new information is the Stanford press release.
The researchers are not the first to venture down this path, but they are the first to build synaptic devices that are small enough, consume little enough energy, and rely on a mature enough fabrication technology to anticipate commercial viability down the road.
“This development could lead to electronic devices that are so small and so energy efficient that we might be able to make nanoelectronic versions of certain parts of the brain to study how they work,” said H.-S. Philip Wong, a professor of electrical engineering. “While you can’t alter a biological brain, a synthetic device such as this would allow researchers to change the device parameters to reveal how real brains function.”
The Stanford team’s device emulates synaptic plasticity using a technology known as “phase-change material,” the same technology that allows DVDs and CDs to store information. When juiced with electricity, these materials change their physical characteristics and therefore their electrical conductivity in tiny increments – more electricity, more change.
Rather than the two states of a transistor, however, the Stanford team has demonstrated an ability to control the synaptic device in 1 percent increments – like a lightbulb on a dimmer – meaning each phase-change synapse can convey at least 100 values.
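To make the dimmer analogy concrete, here is a minimal sketch of a synapse whose weight is stored as one of 100 discrete conductance levels, nudged up or down by electrical pulses. The class name, update rule, and normalization are illustrative assumptions, not the Stanford team’s actual device model.

```python
# Illustrative model: a synaptic weight held in ~1% conductance steps.
class PhaseChangeSynapse:
    LEVELS = 100  # roughly 1 percent increments, per the article

    def __init__(self, level=50):
        self.level = level  # current discrete state, 0..99

    def pulse(self, n=1):
        """Apply n pulses; each shifts conductance one step, clamped."""
        self.level = max(0, min(self.LEVELS - 1, self.level + n))

    @property
    def conductance(self):
        """Normalized conductance in [0, 1]."""
        return self.level / (self.LEVELS - 1)

syn = PhaseChangeSynapse()
syn.pulse(10)   # strengthen (potentiate)
syn.pulse(-5)   # weaken (depress)
print(round(syn.conductance, 2))
```

The key contrast with a transistor is that the state variable is multi-valued and cumulative: each pulse moves the device a small, persistent step, which is what makes it behave like a tunable synaptic weight rather than an on/off switch.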
The device can be manufactured using existing commercial equipment with readily available materials.
“Using well-understood manufacturing processes, we can construct a cross-point architecture allowing three-dimensional stacking of layers that could one day approach the density, compactness and massive parallelism of the human brain,” said Kuzum.
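A cross-point array pays off because computation happens in the wiring itself: each intersection stores a conductance, and driving the rows with input voltages produces column currents that are weighted sums, all computed at once. The sketch below is a plain software analogue of that physics (names and values are hypothetical), not a model of the actual fabricated device.

```python
# Software analogue of a crossbar: column current j is the sum over
# rows i of conductance[i][j] * voltage[i] (Kirchhoff's current law),
# i.e. a dot product computed in parallel across all columns.
def crossbar_output(conductances, inputs):
    cols = len(conductances[0])
    return [sum(conductances[i][j] * inputs[i]
                for i in range(len(inputs)))
            for j in range(cols)]

G = [[0.2, 0.8],   # row 0: conductance at each column's cross-point
     [0.5, 0.1],   # row 1
     [0.9, 0.4]]   # row 2
V = [1.0, 0.0, 1.0]  # input voltages driven onto the rows

print(crossbar_output(G, V))  # one weighted sum per column
```

Stacking such layers in three dimensions, as Kuzum describes, multiplies the number of cross-points (and thus synapses) without growing the chip’s footprint, which is where the comparison to the brain’s density and parallelism comes from.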
The researchers do not, however, foresee their new chips replacing existing ones. Instead, they say, they will lead in promising and exciting new directions that are currently out of reach.
“Our long-term goal is not to replace existing chips, but to define a fundamentally distinct form of computational devices and architectures. These new devices and architectures will excel at distributed, data-intensive algorithms that a complex, real-world environment requires, the sort of algorithms that struggle through today’s processing bottlenecks,” said Kuzum.
Among the most intriguing possibilities of these synaptic devices is greater parallelism. The brain is very good at juggling many types of sensory information simultaneously, something computers do very poorly. A supercomputer, by comparison, does not owe its great power to the speed of its processors so much as to splitting up big problems among many processors, each working on a small part of the problem. A more brain-like architecture might allow much smaller chips to think in parallel on many things at the same time.
And where might this lead us? It could lead to real-time brain simulations for use in neuroscience, augmenting the understanding we gain from biological measurements.