State of cognitive enhancement

A PDF by Nick Bostrom and Anders Sandberg that surveys the state of cognitive enhancement as of 2006

Nick Bostrom’s website

The paper reviews ways to train ourselves to be more intelligent or expert, enhancement/nootropic drugs, genetic modification, enhancing devices such as computers, and brain-computer interfaces.

I think collaboration and collective productivity, as seen in corporations, have been somewhat discounted, but advances in communication and tools could also produce interesting breakthroughs in that area.

An older paper by Nick Bostrom discusses the basic computational power needed for human-level intelligence. Our most powerful supercomputers are in the middle of that estimated range. AI software lags. Access to supercomputers for this purpose was also lagging, but there is the Brain Institute project described below.
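As a rough back-of-the-envelope check (the figures below are my assumptions, not taken from Bostrom's paper): brain-equivalent compute is often estimated at roughly 10^14 to 10^17 operations per second, and the fastest 2006-era supercomputers peaked around 3x10^14 flops. A minimal Python sketch of how far ordinary doubling would need to run to reach the high end of that range:

    # Rough check; the range and the 2006 figure are assumptions, not from the paper.
    import math

    BRAIN_LOW, BRAIN_HIGH = 1e14, 1e17   # assumed brain-equivalent ops/sec range
    TOP_2006 = 3e14                      # assumed approx. peak flops of a 2006 leader

    # Doublings (and years, at one doubling per 18 months) to reach the high end
    doublings = math.log2(BRAIN_HIGH / TOP_2006)
    print(f"{doublings:.1f} doublings, ~{doublings * 1.5:.0f} years to the high end")

Under those assumptions it comes out to roughly 13 years of 18-month doublings, which is in the same ballpark as the hardware timelines quoted below.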

Fairly large-scale brain simulation projects have begun; 10,000 neurons have been simulated. However, the project's goal is not artificial intelligence but the study of brain structure.

At the Brain Institute at the École Polytechnique Fédérale de Lausanne (EPFL), researchers have built neocortical columns using supercomputing systems from SGI and IBM. They have an IBM Blue Gene/L supercomputer with a peak speed of at least 22.8 teraflops using 8,000 processors. They think it will take 10-15 years for the hardware to advance enough for a full brain simulation using their approach. If Ovonic cognitive control devices are successfully developed, this could happen sooner, as they are more neuron-like. The use of GPUs and other hardware enhancements could also accelerate hardware advancement.
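Their 10-15-year figure is roughly consistent with a simple doubling projection from the quoted 22.8 teraflops (the 18-month doubling time is my assumption, as is treating ~10^16 flops as the whole-brain requirement):

    # Project the quoted 22.8 teraflops forward, assuming 18-month doublings.
    def projected_flops(start_flops: float, years: float, doubling_years: float = 1.5) -> float:
        return start_flops * 2 ** (years / doubling_years)

    for years in (10, 15):
        print(f"after {years} years: {projected_flops(22.8e12, years):.1e} flops")

That projection lands around 2x10^15 flops in 10 years and 2x10^16 in 15 years.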

Red Herring discusses other cognitive computing projects that have started or are being discussed. The biggest under discussion is the Decade of the Mind project. James Albus, senior fellow at the U.S. National Institute of Standards and Technology, says NIST plans a project dubbed Decade of the Mind, which calls for handing out up to $4 billion in funding to companies or universities doing research in mind-based computing.

Artificial Development is building CCortex, a simulation of the human cortex and peripheral systems, running on a computer cluster. They do not seem well funded enough to meet their ambitious goals.

There is also Steve Chen's Third Brain project to create a bio-supercomputer.

If we project forward 10 years, a strong possibility is that we could have a far better understanding of the human brain, systems that are 10,000 times more powerful, and various means to enhance human intelligence by 2 to 100 times, all without triggering a true "strong superintelligence." I foresee "weak superintelligence," which is human intelligence at high speed, providing an evolving pathway to strong superintelligence. It could be a safer path. Many could have access to "weak superintelligence" in the form of tighter coupling to advanced computers along with nootropic and genetic enhancement. Some in the singularity AI world have indicated that Darwinian dynamics would not apply. I think the software end is lagging and we will get "weak superintelligence" first and for an extended period. During this extended period, Darwinian dynamics would be applicable.
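For what that 10,000-times figure implies (the historical comparison rate is my assumption):

    # Doubling time implied by 10,000x growth over 10 years.
    import math

    implied = 10 / math.log2(10_000)
    print(f"implied doubling time: ~{implied * 12:.0f} months")  # ~9 months

    # For comparison (assumed): top supercomputer performance has historically
    # grown roughly 1000x per decade, i.e. about 12-month doublings.
    historical = 10 / math.log2(1_000)
    print(f"historical doubling time: ~{historical * 12:.0f} months")

So 10,000x in a decade implies roughly 9-month doublings, only modestly faster than the assumed historical supercomputing trend.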

If the optimization of intelligence means speeding up and automating normal intelligence, with the occasional insight into superior processes, then we would have a broadly advancing wave toward strong superintelligence. This would not have many of the dangers that others foresee.

Scroll down slightly from this link and you will see a diagram of AGI plotted as an exponential line against a flat line for human intelligence. Widespread augmentation would make the human intelligence line an increasing one as well.
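A toy re-creation of that picture (the growth rates below are invented for illustration, not taken from the linked diagram):

    # Toy plot: exponential AGI vs. flat and augmented human intelligence lines.
    import numpy as np
    import matplotlib.pyplot as plt

    years = np.linspace(0, 30, 300)
    agi = 1.3 ** years                 # assumed exponential AGI capability
    human_flat = np.ones_like(years)   # unaugmented human baseline
    human_aug = 1.08 ** years          # assumed slower growth with augmentation

    plt.semilogy(years, agi, label="AGI (exponential)")
    plt.semilogy(years, human_flat, label="Human, unaugmented (flat)")
    plt.semilogy(years, human_aug, label="Human, widely augmented (rising)")
    plt.xlabel("Years from now")
    plt.ylabel("Relative intelligence (log scale)")
    plt.legend()
    plt.show()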

The danger has been expressed as a scenario where one superintelligence so outclasses all others that it reaches breakthrough after breakthrough, its lead rapidly increasing and becoming untouchable before any others can detect and respond while a response would still be effective, leaving everyone at the mercy of the one super-AI.

Comparing this to money, the super-AI danger is like an intelligence motherlode existing (a buried mountain of gold, equivalent to a general theory of intelligence that enables far more rapid iterative intelligence improvement). Get to it and you are super-rich while others are peasants. Alternatively, if everyone (or large numbers of people) is able to get richer at a fast pace, it would be more difficult for any one party to achieve dominance.

If we have a world of augmented intelligence, then an important element (then as now) is securing vital resources. Having your intelligence augment contaminated, pirated, or turned against you would be bad.

tags:
cognitive enhancement
artificial intelligence
transhumanist
supercomputer
superintelligence

More reading:
Michael Anissimov on friendly AI

Michael Anissimov tracking AGI projects and work

List of AGI projects in 2006