Much will be learned from the $1.3 billion artificial brain project, whether it succeeds or fails

Henry Markram believes technology has finally caught up with the dream of AI: computers are now growing sophisticated enough to tackle the massive data problem that is the human brain. But not everyone is so optimistic. “There are too many things we don’t yet know,” says Caltech professor Christof Koch, chief scientific officer at one of neuroscience’s biggest data producers, the Allen Institute for Brain Science in Seattle. “The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works.” Yet over the past couple of decades, Markram’s sheer persistence has garnered the respect of people like Nobel Prize–winning neuroscientist Torsten Wiesel and Sun Microsystems cofounder Andy Bechtolsheim. He has impressed leading figures in biology, neuroscience, and computing, who believe his initiative is important even if they consider some of his ultimate goals unrealistic.

Markram has earned that support on the strength of his work at the Swiss Federal Institute of Technology in Lausanne, where he and a group of 15 postdocs have been taking a first stab at realizing his grand vision—simulating the behavior of a million-neuron portion of the rat neocortex. They’ve broken new ground on everything from the expression of individual rat genes to the organizing principles of the animal’s brain. And the team has not only published some of that data in peer-reviewed journals but also integrated it into a cohesive model so it can be simulated on an IBM Blue Gene supercomputer.

The big question is whether these methods can scale. There’s no guarantee that Markram will be able to build out the rest of the rat brain, let alone the vastly more complex human brain. And if he can, nobody knows whether even the most faithful model will behave like a real brain—that if you build it, it will think. For all his bravado, Markram can’t answer that question. “But the only way you can find out is by building it,” he says, “and just building a brain is an incredible biological discovery process.” This is too big a job for just one lab, so Markram envisions an estimated 6,000 researchers around the world funneling data into his model.

The EU has bet $1.3 billion on Markram's ten-year project.

NBF - You learn things by attempting the full-scale work, much as the Wright brothers learned by building and testing many versions of gliders and planes.

To add to the brain-mapping mix, President Obama in April announced the launch of an initiative called Brain (commonly referred to as the Brain Activity Map), which he hopes Congress will make possible with a $3 billion NIH budget. (To start, Obama is pledging $100 million of his 2014 budget.) Unlike the static Human Connectome Project, the proposed Brain Activity Map would show circuits firing in real time. At present this is feasible, writes Brain Activity Map participant Ralph Greenspan, “in the little fruit fly Drosophila.”

Even scaled up to human dimensions, such a map would chart only a web of activity, leaving out much of what is known of brain function at a molecular and functional level. For Markram, the American plan is just grist for his billion-euro mill. “The Brain Activity Map and other projects are focused on generating more data,” he writes. “The Human Brain Project is about data integration.” In other words, from his exalted perspective, the NIH and President Obama are just a bunch of postdocs ready to work for him.

Markram understood that it would take trillions of dollars, not billions, to experimentally model every part of the human brain. “Other people in the field were saying that we didn’t know enough to start,” he says. (The Allen Brain Atlas’ Christof Koch, for one. Markram’s first mentor, Rodney Douglas, for another.) “What I realized was that you can get to the unknowns indirectly. It’s like putting together a puzzle with lots of missing pieces. If you can see the pattern, you can fill in the gaps.” Markram calls the process predictive reverse-engineering, and he claims that it has already allowed him to anticipate crucial data that would have taken years to generate in a wet lab. For example, only about 20 of the 2,970 synaptic pathways in one small part of the rat neocortex have been experimentally measured. Detecting a pattern, he was able to fill in parameters for the remaining 2,950 pathways and to observe them working together in a simulation. Then he measured several in the wet lab to validate his reverse-engineered data. The simulation proved correct.
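The idea behind predictive reverse-engineering can be sketched in a few lines: measure a handful of pathways, fit the pattern they follow, and infer the rest. The sketch below is a toy illustration, not Blue Brain's actual method, and all numbers in it are hypothetical; it assumes, as is often observed, that connection probability between neuron groups decays roughly exponentially with distance, so a straight line can be fit in log space.

```python
import math

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hypothetical measured pathways: (intersomatic distance in um, connection probability).
# In the real project, only ~20 of 2,970 pathways were measured in the wet lab.
measured = [(50, 0.20), (100, 0.10), (150, 0.05), (200, 0.025)]

# Fit the decay pattern in log space: ln(p) = a*d + b.
a, b = fit_linear([d for d, _ in measured],
                  [math.log(p) for _, p in measured])

def predict(distance):
    """Fill in the connection probability for an unmeasured pathway."""
    return math.exp(a * distance + b)

# An unmeasured pathway's parameter, inferred from the pattern;
# a new wet-lab measurement can then validate the prediction.
print(predict(250))  # ~0.0125
```

The validation step Markram describes corresponds to measuring a few held-out pathways in the lab and checking them against `predict`.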

Markram thinks that the greatest potential achievement of his sim would be to determine the causes of the approximately 600 known brain disorders. “It’s not about understanding one disease,” he says. “It’s about understanding a complex system that can go wrong in 600 different ways. It’s about finding the weak points.” Rather than uncovering treatments for individual symptoms, he wants to induce diseases in silico by building explicitly damaged models, then find workarounds for the damage. Researchers have done the same with lab animals for decades, observing their behavior after giving them lesions. The power of Markram’s approach is that the lesioning could be carried out endlessly in a supercomputer model and studied at any scale, from molecules to the brain as a whole.
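In-silico lesioning amounts to deleting pieces of a model and testing which deletions break function. The toy sketch below illustrates the logic on a hypothetical four-node circuit (the node names and the "works if a signal reaches output" criterion are stand-ins invented for illustration, not anything from the Human Brain Project's model).

```python
from collections import deque

# Hypothetical circuit: directed connections between neuron groups.
circuit = {
    "input":   ["relay_a", "relay_b"],
    "relay_a": ["output"],
    "relay_b": ["output"],
    "output":  [],
}

def functions(circuit, lesioned=()):
    """The circuit 'works' if a signal can still travel from input to output."""
    lesioned = set(lesioned)
    if "input" in lesioned or "output" in lesioned:
        return False
    seen, queue = {"input"}, deque(["input"])
    while queue:
        node = queue.popleft()
        if node == "output":
            return True
        for nxt in circuit[node]:
            if nxt not in seen and nxt not in lesioned:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Lesion each node in turn to find the weak points -- the damage
# the circuit cannot route around.
weak_points = [n for n in circuit if not functions(circuit, [n])]
print(weak_points)  # ['input', 'output']
```

Because the redundant relays back each other up, lesioning either one alone leaves the circuit working; in a supercomputer model this exhaustive knock-out loop could run over every component at every scale, which is impossible with lab animals.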

He plans to give the EU an early working prototype of this system within just 18 months—and vows to “open up this new telescope to the scientific community” within two and a half years—though he estimates that he’ll need a supercomputer 100,000 times faster than the one he’s got to build the premium version. Ever the optimist, he believes that Moore’s law (and the European Union) will deliver him that raw power in about a decade. However, he’ll also need far more data than even his industrial-strength Blue Brain lab can collect. Shortly after arriving at Lausanne, Markram developed workflows that extracted experimental results from journals, strip-mining thousands of neuroscience papers only to find that the data was too inconsistent to use in a model. For a while, that looked like one of his biggest hurdles. But he has since been building standardized protocols for many of the labs participating in the Human Brain Project. His timing may be just right, with the data glut expected from the Allen Brain Atlas, the Human Connectome Project, and the Brain Activity Map. According to Brown University neuroscientist John Donoghue, one of the key figures in the Obama-sanctioned initiative, “the two projects are perfect complements. The Human Brain Project provides a means to test ideas that would emerge from Brain Activity Map data, and Brain Activity Map data would inform the models simulated in the Human Brain Project.”
