DARPA has funded the SyNAPSE program, which seeks to break the programmable-machine paradigm and define a new path forward for creating useful, intelligent machines. Compared with biological systems, today's programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program is trying to build an electronic element that works like a brain synapse and then scale it to brain levels. The target is the 220 trillion synapses of a human cerebral cortex, which is 400 times larger than a recently completed rat-scale cortex simulation. A successful synapse device would do the work of the 20 or more transistors it currently takes to implement a single synapse in silicon.
In the next 9 months (Nov 2008-Aug 2009), the research teams will focus on demonstrating nano-scale, low power synapse-like devices and on beginning to uncover the functional microcircuits of the brain.
Synapses are the junctions between neurons. In mouse and rat brains, there are roughly 10,000 times more synapses than neurons. The strength (efficacy) of a synapse changes as the animal interacts with its environment (plasticity), and these synaptic junctions are hypothesized to encode our individual experience. The computation, communication, memory, power, and space requirements for representing a brain in software or hardware scale with the number of synapses. Thus, the brain is much less a neural network and, more correctly, a synaptic network.
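The claim that synapses dominate the cost can be checked with quick arithmetic. The neuron and synapse counts below are the article's figures for the human cortex; the bytes-of-state-per-element values are illustrative assumptions, not numbers from the article:

```python
# Why synapse count, not neuron count, dominates the cost of
# representing a brain. Counts are the article's human-cortex figures;
# bytes-per-element values are illustrative assumptions.

NEURONS = 22e9              # ~22 billion cortical neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron

synapses = NEURONS * SYNAPSES_PER_NEURON   # 2.2e14 = 220 trillion

# Assumed state sizes, for illustration only.
BYTES_PER_NEURON = 32
BYTES_PER_SYNAPSE = 16

neuron_state = NEURONS * BYTES_PER_NEURON
synapse_state = synapses * BYTES_PER_SYNAPSE

print(f"synapses: {synapses:.1e}")
print(f"synapse state / neuron state: {synapse_state / neuron_state:.0f}x")
```

Even though each synapse is assumed to carry less state than a neuron, the 10,000:1 ratio means synaptic storage swamps neuronal storage by a factor of thousands, which is the sense in which the brain is a "synaptic network."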
IBM’s Dharmendra S. Modha, who led the team that emulated a rat-scale cortex, explains:
There are three reasons why the time is now ripe to begin to draw inspiration from the structure, dynamics, function, and behavior of the brain for developing novel computing architectures and cognitive systems.
* First, neuroscience now seems to have matured, and enough quantitative data is available for formulating hypotheses of brain function and dynamics.
* Second, supercomputing is now ready to undertake extremely large-scale simulations.
* Third, nanotechnology is evolving to the point that we may be able to represent the essential computational function of synapses and neurons in hardware that rivals the brain’s power and space efficiency.
If we succeed, then we will be able to give birth to novel cognitive systems, computing architectures, programming paradigms, numerous practical applications, and perhaps entirely new industries. An IBM press release captures it nicely: “The end goal: ubiquitously deployed computers imbued with a new intelligence that can integrate information from a variety of sensors and sources, deal with ambiguity, respond in a context-dependent way, learn over time and carry out pattern recognition to solve difficult problems based on perception, action and cognition in complex, real-world environments.”
Project Details from Wired Magazine Site
The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.
In October, the group bagged a $5 million grant from Darpa — just enough to get the first phase of the project going. If successful, they say, we could have the basics of a new computing system within the next decade.
“The idea is to do software simulations and build hardware chips that would be based on what we know about how the brain and how neural circuits work,” says Christopher Kello, an associate professor at the University of California-Merced who’s involved in the project.
The human cortex has about 22 billion neurons and 220 trillion synapses, making it roughly 400 times larger than the rat-scale model. A supercomputer capable of running a software simulation of the human brain doesn’t exist yet. Researchers would require a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes, a scale that supercomputer technology isn’t expected to hit for at least three years.
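Dividing those requirements by the synapse count shows what the simulation budget works out to per synapse (a back-of-envelope reading of the article's figures, not numbers stated by the researchers):

```python
# Per-synapse resources implied by the article's estimate:
# 36.8 petaflops and 3.2 petabytes for 220 trillion synapses.

FLOPS = 36.8e15          # 36.8 petaflops
MEMORY_BYTES = 3.2e15    # 3.2 petabytes
SYNAPSES = 220e12        # 220 trillion

bytes_per_synapse = MEMORY_BYTES / SYNAPSES
flops_per_synapse = FLOPS / SYNAPSES

print(f"{bytes_per_synapse:.1f} bytes of state per synapse")
print(f"{flops_per_synapse:.0f} flops per second per synapse")
```

That is roughly 14.5 bytes and about 167 floating-point operations per second for each synapse, which shows how lean the per-synapse model must be even on a machine of that scale.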
One of the main challenges to building this system in hardware, explains Kwabena Boahen, a neuromorphic engineer at Stanford, is that each neuron connects to others through 8,000 synapses. It takes about 20 transistors to implement a synapse, so building the silicon equivalent of 220 trillion synapses is a tall order, indeed.
“You end up with a technology where the cost is very unfavorable,” says Boahen. “That’s why we have to use nanotech to implement synapses in a way that will make them much smaller and more cost-effective.”
Boahen and his team are trying to create a device smaller than a single transistor that can do the job of 20 transistors. “We are essentially inventing a new device,” he says.
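Boahen's cost argument can be made concrete with quick arithmetic. The synapse count and transistors-per-synapse figure come from the article; the transistors-per-chip number is an assumed order of magnitude for a large chip of that era, used only for comparison:

```python
# Scale of a transistor-only implementation of 220 trillion synapses
# at ~20 transistors per synapse (article figures). The per-chip
# transistor count is an assumed order-of-magnitude comparison point.

SYNAPSES = 220e12
TRANSISTORS_PER_SYNAPSE = 20
TRANSISTORS_PER_CHIP = 2e9   # assumption: a large late-2000s chip

total_transistors = SYNAPSES * TRANSISTORS_PER_SYNAPSE
chips_needed = total_transistors / TRANSISTORS_PER_CHIP

print(f"{total_transistors:.1e} transistors, ~{chips_needed:.1e} chips")
```

On those assumptions the build needs about 4.4 quadrillion transistors, on the order of millions of chips, which is why the team wants a single nano-scale device to replace the 20-transistor synapse circuit.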
Meanwhile, at the University of California-Merced, Kello and his team are creating a virtual environment that could train the simulated brain to experience and learn. They are using the Unreal Tournament videogame engine to help train the system. When it’s ready, it will be used to teach the neural networks how to make decisions and learn along the way.
Besides the IBM team that created the rat-brain emulation on an IBM supercomputer, there are seven other members from four universities. Here are four of them:
1. Prof. Kwabena Boahen, Neuromorphic Engineer, Stanford
Ph.D. Computation and Neural Systems, Caltech (1997). Nationally recognized pioneer in neuromorphic engineering; innovations include chips that emulate the retina, thalamus, hippocampus, visual cortex, and retinotectal map formation; 60 publications.
2. Prof. Stefano Fusi, Physicist and Theoretical Neuroscientist, Columbia University
He discovered and solved the fundamental problem of memory forgetting in electronic synapses. He is the author of 38 journal papers, some of them published in Nature Neuroscience and Neuron.
3. Prof. Rajit Manohar, Computer Scientist, Cornell
Ph.D. Computer Science, Caltech (1998); Leader in asynchronous VLSI design; inventor of GHz-speed FPGA technology and ultra low power processors; ~10 issued patents and 50 published papers.
4. Prof. Christopher Kello, Cognitive Scientist, Univ of California at Merced
Ph.D. Psychology, University of California, Santa Cruz (1996); Associate Professor of Cognitive Science, University of California, Merced; internationally recognized leader in neural network modeling of high-level cognition (e.g., human language).
Paul Allen’s Human-brain Atlas Project
A spinal-cord atlas is close to completion. Paul Allen’s institute will launch the first phase of a human-brain atlas, a four-year project, in 2010. In 2006, the institute completed an atlas of gene expression in the mouse brain.
The scientists used state-of-the-art technology to dissect a mouse brain, photographed it sliver by sliver, then reassembled it in a computer database that allows easy access. But it was the speed at which the project was accomplished, and what they did with it afterwards, that changed the game.
They released it to the public. Over the internet. Free.
Proposed 5 year, $20 million Mesoscopic Scale Mouse Connectivity Brain Map
The mesoscopic scale refers to the length scale at which one can reasonably discuss the properties of a material or phenomenon without having to track the behavior of individual atoms, and at which averaged quantities such as density and temperature become useful. For solids and liquids this is typically a few to ten nanometers, and involves averaging over a few thousand atoms or molecules.
Here we advocate for a concerted effort to fill this gap, through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brain-wide coverage, using injections of tracers or viral vectors.
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.