AI in the 1950s and now, or: did you make that nematode smarter than humans yet?

Lee Gomes of the Wall Street Journal asks:

But don’t singularity people know that AI researchers have been trying to make such machines since the 1950s, without much success?

Lee is referring to the goal of making machines smarter than people.
The singularity he refers to is the technological singularity.

The Technological Singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in said progress.

In 1960, the newest computer was the PDP-1, and throughout the fifties IBM had the IBM 700/7000 series.


An IBM 704

Today's computers are over a billion times more powerful than those machines.
The most powerful computers of today have about a petaflop of performance; that is one billion megaflops. The first megaflop computer did not appear until 1964, with the CDC 6600.
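A quick sanity check of the unit arithmetic behind that claim (a petaflop is 10^15 floating-point operations per second, a megaflop is 10^6):

```python
# Back-of-envelope check: how many megaflops are in a petaflop?
PETAFLOP = 1e15  # floating-point operations per second
MEGAFLOP = 1e6

ratio = PETAFLOP / MEGAFLOP
print(ratio)  # 1e9 -> one billion megaflops per petaflop, as stated above
```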

Today, Nvidia is bringing multiple teraflops of processing power to individuals with its Tesla GPGPU computers.

So the early AI pioneers were tasked with producing artificial intelligence using less brainpower than a nematode.

A nematode has about one MIPS (million instructions per second) of processing power.

Scale of equivalent brainpower, from "When Will Computer Hardware Match the Human Brain?" by Hans Moravec (1997).

Currently, AI is widely used for programmed financial trading, a highly lucrative and influential activity that also attracts a lot of money for development, research and improvement.

We are now in the range of mouse brain level of hardware.
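A rough sketch of where these points sit on the brainpower scale. The nematode figure comes from the text above; the mouse and human MIPS values are assumptions, rounded from Moravec's 1997 estimates (he put the human brain at roughly 10^8 MIPS):

```python
# Approximate brainpower scale in MIPS (millions of instructions per second).
# Nematode figure is from the text above; mouse and human values are
# assumed round numbers based on Moravec's 1997 estimates.
brainpower_mips = {
    "nematode": 1,          # ~1 MIPS (from the text above)
    "mouse": 100_000,       # assumed ~10^5 MIPS
    "human": 100_000_000,   # Moravec's ~10^8 MIPS estimate
}

# A human brain is on the order of a hundred million nematodes:
print(brainpower_mips["human"] / brainpower_mips["nematode"])  # 100000000.0
```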

My prediction on artificial general intelligence (AGI) is that hardware matters as an enabling capability.

Some have indicated that AGI could be achieved with less raw processing capability, but this only scales down so far: clearly there are lower limits to how efficiently the hardware or the neurons can be made to perform. Early AI workers had no chance to succeed at the goal of AGI. It is getting progressively easier as the hardware improves, but we still do not know when it will become easy enough. I think we will need far more processing power than the human equivalent. My reason is that Microsoft creates better versions of Excel by wasting a lot of computer resources: it is easier to program a given capability with a wasteful use of resources than with maximum efficiency, because wasting resources makes the programming less complex. Once a certain level of AGI capability is achieved, the system can start helping to make its own implementation more efficient.

UPDATE: The cleverness of the AI programmers matters
I forgot to mention that the efficiency of algorithms has also improved since the 1950s. For certain problems, like factoring large numbers, there has been dramatic improvement in algorithms. In 2007, the quadratic sieve is the best public algorithm for factoring integers under 100 digits. In 1977, Pollard's rho algorithm was among the best factoring algorithms.
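For illustration, Pollard's rho, the 1977-era method mentioned above, fits in a few lines. This is a minimal sketch of the classic algorithm, not an optimized factoring tool:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n via Pollard's rho.

    Minimal sketch: iterates f(x) = x^2 + c mod n with Floyd
    cycle detection, retrying with a new c if the walk degenerates.
    """
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n        # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n        # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                     # d == n means a failed walk; retry
            return d

print(pollard_rho(8051))  # 8051 = 83 * 97, so prints 83 or 97
```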
The Blue Gene supercomputer running the old 1977 algorithm would take 12 years to factor a 90-digit number, while an Apple IIc from 1977 running the modern algorithm would take 3.3 years.

Algorithmic efficiency in searching the space of 5 × 10^20 possible checkers positions meant that only 1 out of every 5 million possible positions had to be examined in order to solve checkers.
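That kind of saving comes from pruned search. As a toy illustration (this is generic minimax with alpha-beta cutoffs on an invented game tree, not the actual checkers-solving method), note how whole branches go unexamined once they cannot change the answer:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf"),
              visited=None):
    """Minimax with alpha-beta pruning over a nested-tuple game tree.

    Leaves are numbers; internal nodes are tuples of children.
    `visited` records every node actually examined, to show the
    pruning effect.
    """
    if visited is not None:
        visited.append(node)
    if not isinstance(node, tuple):
        return node
    value = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            value = max(value, score)
            alpha = max(alpha, value)
        else:
            value = min(value, score)
            beta = min(beta, value)
        if beta <= alpha:  # cutoff: remaining siblings cannot matter
            break
    return value

# Invented toy tree with 10 nodes (1 root, 3 min-nodes, 6 leaves).
tree = ((3, 5), (6, 9), (1, 2))
visited = []
print(alphabeta(tree, True, visited=visited))  # 6
print(len(visited))  # 9 -- the leaf 2 was pruned, never examined
```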

Algorithms for quantum computers, and quantum computers themselves, could greatly increase the efficiency of searching large combinatorial spaces. If large-scale quantum computers become available, that could be another path to solving aspects of artificial intelligence. This goes to the concept that artificial intelligence does not have to solve the problem of intelligence in the same way biological systems do. If the problems are solved faster and in an improved fashion, you can get superior results and performance.
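The best-known quantum speedup for unstructured search is Grover's algorithm, which needs on the order of √N queries instead of N. A back-of-envelope for a search space the size of the checkers position count cited above:

```python
import math

# Classical unstructured search over N items needs ~N queries;
# Grover's algorithm needs on the order of sqrt(N). N here is the
# checkers position count cited above (5 x 10^20).
N = 5e20
classical_queries = N
grover_queries = math.sqrt(N)

print(f"{grover_queries:.2e}")             # ~2.24e+10 queries
print(classical_queries / grover_queries)  # ~2.2e10-fold reduction
```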

Am I concerned that a 1995-vintage VCR could be programmed into an AI? No.

How about a supercomputer in ten years with a thousand petaflops, a million-qubit quantum computer coprocessor and a billion-neuron coprocessor rendered in hardware? Hmm. Perhaps.

How about in twenty years, with a molecular nanotechnology (MNT)-enabled machine with a trillion petaflops, a trillion-qubit quantum coprocessor and a thousand trillion neurons in an integrated device, one coupled with nanosensors and mobile agents? I for one welcome our AGI (artificial general intelligence) overlords, etc. Partially kidding. I expect us to use these more powerful systems just as today we use mundane AI and Google to enhance our productivity. We will have a tighter coupling and higher-bandwidth communication with multiple intelligent systems. I do not compete with Deep Blue at playing chess (or even some PC-based chess programs), so in the future we will need to adjust to compatible roles with the new technology.

Artificial intelligence in the 1990s

Artificial intelligence in the 2000s