Hardware for artificial intelligence

The information that I am providing here suggests that within 5 years (by 2012) a researcher with about $20,000 should be able to buy either custom hardware to simulate 64 to 250 million neurons in real time, or general purpose hardware (the latest Nvidia/Intel GPGPUs) for a 10 million neuron simulation (scaling the teraflops of the GPGPUs against the current Blue Gene/L, which runs 8,000,000 neurons at 1/6th real time). Scaling up: about $400K for 1 billion neurons, $4 million for 10 billion, and $40 million for 100 billion. (A real time human brain simulation could be achieved with a 2011-2012 supercomputer; a $100 to $200 million grand challenge type project would be highly likely to succeed.) The price should fall in half each year after 2012, and it could happen faster and cost less if we are even more clever. Petaflop level performance at about $20K should arrive around 2015-2018. There are also possibilities for faster development using Ovonic quantum control devices or simulations like those provided by CCortex. More efficient programming could let the simulations exceed these estimates, and the quality and precision of the simulated neurons is still being improved.
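
As a rough illustration, here is a small Python sketch of the cost scaling implied by these estimates, assuming linear cost in neuron count from the $400K-per-billion figure and the post-2012 halving of price each year. The numbers are this post's rough estimates, not vendor pricing.

# Back-of-envelope projection of brain-simulation hardware cost, using the
# rough estimates from this post: ~$400K per 1 billion neurons simulated in
# real time in 2012, with the price halving every year afterward.
# These are assumptions for illustration, not measured data.

BASE_YEAR = 2012
COST_PER_BILLION_NEURONS = 400000.0   # dollars, rough 2012 estimate
HALVING_PERIOD_YEARS = 1.0            # price halves each year after 2012

def estimated_cost(neurons, year):
    """Rough hardware cost (USD) to simulate `neurons` in real time in `year`."""
    cost_2012 = COST_PER_BILLION_NEURONS * (neurons / 1e9)   # linear scaling in neuron count
    return cost_2012 * 0.5 ** ((year - BASE_YEAR) / HALVING_PERIOD_YEARS)

if __name__ == "__main__":
    for year in range(2012, 2019):
        print(year, "100-billion-neuron simulation: ~${:,.0f}".format(estimated_cost(100e9, year)))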

Rough estimates:
2012 for a full real time human brain simulation (100 billion neurons).
2018 for that simulation to cost less than an average annual salary in the developed countries ($60,000/year at that time).

Wikipedia discusses the estimates on the hardware needed for a human brain simulation


Ray Kurzweil’s estimate for a simulated human brain model: 10 petaflops.
The newest IBM Blue Gene/P will have 3 petaflops at the top end (costing about $200 million).
There are nine current computing projects (such as Blue Gene/P) to build more general purpose petaflop computers, all of which should be completed by 2008.

Most other attempted estimates of the brain’s computational power equivalent have been rather higher, ranging from 100 million MIPS to 100 billion MIPS. Furthermore, the overhead introduced by the modelling of the biological details of neural behaviour might require a simulator to have access to computational power much greater than that of the brain itself.

Software. Software to simulate the function of a brain would be required.

Understanding. Finally, it requires sufficient understanding thereof to be able to model it mathematically. This could be done either by understanding the central nervous system, or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power. However, the simulation would also have to capture the detailed cellular behaviour of neurons and glial cells, presently only understood in the broadest of outlines.
Once such a model is built, it will be easily altered and thus open to trial and error experimentation. This is likely to lead to huge advances in understanding, allowing the model’s intelligence to be improved or its motivations altered.
Recent article about controlling neurons with light that will help speed up the science of understanding the workings of the brain

The Blue Brain project has used a supercomputer, IBM’s Blue Gene platform, to simulate a neocortex consisting of approximately 8,000,000 neurons and 50 billion interconnecting synapses. The eventual goal of the project is to use supercomputers to simulate an entire brain.
IBM simulated 8 million neurons on a Blue Gene/L earlier this year

The latest result is a digital mouse brain that needs about 6 seconds to simulate 1 second of real thinking time. That’s still a long way from a true mouse-size simulation, and it runs on a Blue Gene/L supercomputer with 8,192 processors, four terabytes of memory, and 1 Gbps of bandwidth running to and from each chip. This is one eighth of the roughly 65,000 processors in the 280 teraflop version.
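
For a rough sense of what those figures imply, here is a small Python sketch using the approximate numbers quoted above. The final line assumes naive linear scaling across processors, which a real machine would not achieve; it is an illustration, not a benchmark.

# Rough sanity check on the mouse-scale simulation figures quoted above.
NEURONS = 8e6            # neurons simulated
MEMORY_BYTES = 4e12      # four terabytes of memory
PROCESSORS = 8192        # Blue Gene/L partition used
SLOWDOWN = 6.0           # ~6 seconds of wall clock per 1 second of simulated time

print("memory per simulated neuron: ~%.0f KB" % (MEMORY_BYTES / NEURONS / 1e3))   # ~500 KB

# Naive assumption: with perfect parallel scaling, real time would take roughly
# SLOWDOWN times more processors -- close to the full ~65,000-CPU machine.
print("processors for real time (naive): ~%d" % (PROCESSORS * SLOWDOWN))          # ~49,000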

The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses. By comparison, a modern computer microprocessor uses only 1.7 billion transistors. Although estimates of the brain’s processing power put it at around 10**14 neuron updates per second, it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10**18 FLOPS.
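
A quick Python sketch of the per-neuron arithmetic implied by those figures (the inputs are the rough estimates above, for illustration only):

# Implied per-neuron numbers behind the estimates quoted above.
NEURONS = 1e11                 # ~100 billion neurons
SYNAPSES = 1e14                # ~100 trillion synapses
BRAIN_UPDATES_PER_SEC = 1e14   # ~10**14 neuron updates per second
SIM_FLOPS = 1e18               # expected need for a first, unoptimized simulation

print("synapses per neuron: ~%g" % (SYNAPSES / NEURONS))                         # ~1,000
print("updates per neuron per second: ~%g" % (BRAIN_UPDATES_PER_SEC / NEURONS))  # ~1,000
print("FLOPS per neuron update: ~%g" % (SIM_FLOPS / BRAIN_UPDATES_PER_SEC))      # ~10,000 of overhead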


The problem with brain simulation is that there are seven levels of investigation and 10 orders of magnitude in scale (up to the brain’s 10 to 100 billion neurons)

Nvidia has released some cheap teraflops, which will be improved next year with double precision support. Intel will also be introducing Larrabee, another general purpose graphics processing unit. These specialized machines speed up neuron simulation 100 times and molecular modeling by 240 times. Intel will also be introducing 80-core chips.

At the start of 2008, Nvidia will provide 12 teraflops for about $60K and 2 teraflops for $10K. The 12 teraflops would be pretty close to the power of the 22.8 teraflop Blue Gene supercomputer that simulated 10,000-60,000 neurons. Another simulation ran 8 million neurons on a Blue Gene/L earlier in 2007.
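
A small Python sketch of the cost-per-teraflop arithmetic implied by those prices (the figures are the rough numbers from this post, not official vendor pricing):

# Cost per teraflop implied by the GPU configurations quoted above.
options = {
    "Nvidia 12 teraflop setup": (60000.0, 12.0),   # (price in $, teraflops)
    "Nvidia 2 teraflop setup": (10000.0, 2.0),
}

for name, (dollars, tflops) in options.items():
    print("%s: ~$%.0f per teraflop" % (name, dollars / tflops))   # ~$5,000/teraflop either way

# Raw-flops comparison with the 22.8 teraflop Blue Gene partition mentioned above.
print("12 TF setup vs 22.8 TF Blue Gene: %.0f%% of the raw teraflops" % (12.0 / 22.8 * 100))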

The IBM work by Dharmendra Modha simulated 8 million neurons, running about ten times slower than real time.

The new highly parallelized approaches seem to be doubling in performance every 12 months. Flash memory is improving faster than Moore’s law as well, which will help speed up what were disk-heavy searches and reduce the power used.

Custom analog hardware seems like the cheaper route to brute forcing AGI.


The link above discusses a Stanford effort to simulate 64 million neurons by about 2011 (5 years after the 2006 presentation): real-time, cortex-scale simulations.


Hardware from 2005

Here is an update from Feb 2007 on the Boahen work in MIT Technology Review

A mouse brain houses over 16 million neurons, with more than 128 billion synapses running between them.

There is an effort for a very large (billions of neurons) simulation by a company, CCortex


CCortex accurately models the billions of neurons and trillions of connections in the human brain with a layered distribution of spiking neural nets running on a high-performance supercomputer. This Linux cluster is one of the 20 fastest computers in the world with 500 nodes, 1,000 processors, 1 terabyte of RAM, 200 terabytes of storage, and a theoretical peak performance of 4,800 Gflops.

I think the CCortex simulation is not as accurate as the simulation that is being performed on the custom hardware.
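
For scale, here is a back-of-the-envelope Python sketch of the CCortex cluster’s memory budget, assuming the human-scale neuron and synapse counts quoted earlier in this post (an assumption on my part about how their claim maps onto the hardware):

# Memory budget per simulated element for the CCortex cluster described above,
# assuming human-scale counts from earlier in this post (illustrative only).
RAM_BYTES = 1e12         # 1 terabyte of RAM
STORAGE_BYTES = 200e12   # 200 terabytes of storage
NEURONS = 100e9          # ~100 billion neurons
SYNAPSES = 1e14          # ~100 trillion connections

print("RAM per neuron: ~%.0f bytes" % (RAM_BYTES / NEURONS))            # ~10 bytes
print("storage per synapse: ~%.0f bytes" % (STORAGE_BYTES / SYNAPSES))  # ~2 bytes
# Tens of bytes per neuron points to very coarse neuron models compared with
# the detailed models being run on the custom hardware.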

Ovonic quantum control devices are possible transistor replacements from the inventor and billion dollar company behind PRAM and the nickel metal hydride battery. The quantum control device is more neuron-like. The goal is to print these out reel to reel. If that works (and he has big companies working with him), then multi-billion and multi-trillion neuron simulations could happen very quickly.

FURTHER READING
Accelerating Future discusses the odds of Artificial General Intelligence (AGI)

There is also a suggested reading list for those interested in AGI

The state of cognitive enhancement

There is also other AI work, like Numenta

Somewhat related: today there was news that checkers has been weakly solved. All positions with 10 or fewer pieces on the board were calculated, so once the game gets down from its starting 24 pieces to 10 or fewer, the computer plays perfectly and either draws or wins. Checkers at Wikipedia. A brute force solution to checkers.

IEEE Spectrum discusses the checkers solution. Checkers, with 12 pieces per side on an 8x8 board, has 5 x 10**20 possible situations.

First, he constructed databases of endgames, building backward from all the possible wins, losses, or draws that checkers could conclude with. A so-called backward-searching algorithm built the paths of situations that would have led to these endgames, all the way back to the point where there were 10 game pieces on the board. The result is a database of 39 trillion positions compressed, using a homebrew algorithm, into 237 gigabytes, an average of 154 positions per byte of data.
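
To make the backward-searching idea concrete, here is a minimal Python sketch that builds a table of game values outward from the terminal position. It uses a toy subtraction game rather than checkers, so it only illustrates the retrograde/endgame-database idea; it is not Schaeffer’s code or data format.

# Toy retrograde-style tabulation: position n means n counters remain, a move
# removes 1 or 2 counters, and the player left with no counters to take loses.
# Every position's value is derived from the terminal position upward, the way
# an endgame database is built backward from finished games.
WIN, LOSS = "win", "loss"   # value for the player to move

def build_endgame_table(max_counters):
    table = {0: LOSS}       # terminal position: no move available, you lose
    for n in range(1, max_counters + 1):
        moves = [n - take for take in (1, 2) if n - take >= 0]
        # A position is a win if some move hands the opponent a lost position.
        table[n] = WIN if any(table[m] == LOSS for m in moves) else LOSS
    return table

if __name__ == "__main__":
    print(build_endgame_table(12))   # multiples of 3 are losses for the player to move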

The next step was to use a forward-search technique, such as the ones chess software typically relies on, to figure out how to get to those 10-piece situations from the beginning of the game, when all 24 pieces are on the board. Schaeffer and his colleagues used a technique called “best first” to prioritize searching various positions and lines of play. At a given position in the game there are several possible moves that can be made. Instead of exploring all of these moves to their final outcomes using deep search, Schaeffer’s team used Chinook to provide a measure of what the strongest line of play would be: what would most likely result in a win in the fewest moves. This line of play was evaluated first. If it did result in a win, then there was no need to search any other parallel lines of play, because the entire line was already known to result in a strong win. Since a win was achieved so quickly, it means the losing side made a mistake and did not play perfectly. Entire lines of play branching from various positions were eliminated this way, vastly reducing the number of lines that had to be deeply explored. By applying such a technique, Schaeffer’s team was able to solve checkers using the least amount of effort. Of the 5 x 10**20 possible positions, Schaeffer needed to evaluate only 10**14 to prove that checkers, played perfectly, results in a draw.
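
Here is a minimal Python sketch of that proving-and-pruning idea on a tiny hand-made game tree (a hypothetical example, not Schaeffer’s algorithm or Chinook’s code). The leaves stand in for endgame-database hits; at our own nodes a single proven line suffices, so a good move ordering lets the search skip the sibling lines entirely.

# Hypothetical game tree. At "start" it is our move; at "a", "b", "c" it is the
# opponent's move. Leaf values are for the side to move at that leaf and stand
# in for endgame-database lookups.
TREE = {
    "start": ["a", "b", "c"],   # our move: one child that holds the opponent to <= draw is enough
    "a": ["a1", "a2"],          # opponent's move: every reply must still give us >= draw
    "b": ["b1"],
    "c": ["c1", "c2"],
}
LEAVES = {"a1": "draw", "a2": "win", "b1": "loss", "c1": "win", "c2": "draw"}

def heuristic_order(children):
    """Stand-in for Chinook's evaluation: try the most promising line first."""
    return sorted(children)     # placeholder ordering for this toy tree

def at_least_draw(node):
    """True if the side to move at `node` can secure at least a draw."""
    if node in LEAVES:
        return LEAVES[node] in ("draw", "win")
    # any() short-circuits: once one line is proven, its siblings are never searched.
    return any(opponent_at_most_draw(child) for child in heuristic_order(TREE[node]))

def opponent_at_most_draw(node):
    """True if the side to move at `node` (the opponent) cannot do better than a draw."""
    if node in LEAVES:
        return LEAVES[node] in ("draw", "loss")
    return all(at_least_draw(child) for child in TREE[node])

if __name__ == "__main__":
    print(at_least_draw("start"))   # True: line "a" proves it, so "b" and "c" are never searched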

Chess has somewhere on the order of 10**40 to 10**50 positions.
