Deep learning on Computronium versus the human brain

Cognitive scientist Joscha Bach was asked which ingredients of human-level artificial intelligence we seem to be missing, and how AI forecasts might be improved more generally.

  • Before we can implement human-level artificial intelligence (HLAI), we need to understand both mental representations and the overall architecture of a mind
  • There are perhaps 12–200 regularities like backpropagation that we still need to understand, judging by the known unknowns and by the complexity of the genome
  • We are more than reinforcement learning on computronium: our primate heritage provides many of the most interesting facets of mind and motivation
  • AI funding is now permanently colossal, which should update our predictions
  • AI practitioners learn which elements of science fiction are plausible within real constraints, but constant day-to-day practice can erode long-term perspective
  • Experience in real AI development can lead to both over- and underestimates of the difficulty of new AI projects in non-obvious ways

According to different sources, the brain's performance appears to be equivalent to somewhere between about 3 × 10^13 FLOPS and 10^25 FLOPS. The median estimate is 10^18 FLOPS.
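To give a sense of how wide that range is, here is a quick sketch using only the figures quoted above; the petaFLOPS-class reference machine (10^15 FLOPS) is an assumption added for comparison, not a number from the source.

```python
import math

# Brain-performance estimates quoted above, in FLOPS.
low, median, high = 3e13, 1e18, 1e25

# The estimates span roughly this many orders of magnitude.
print(f"spread: ~{math.log10(high) - math.log10(low):.1f} orders of magnitude")

# Assumed reference point for illustration only: a petaFLOPS-class
# machine (1e15 FLOPS), roughly supercomputer territory circa 2015.
reference = 1e15
for name, est in [("low", low), ("median", median), ("high", high)]:
    print(f"{name:>6}: {est / reference:.2g}x the reference machine")
```

The spread works out to roughly 11.5 orders of magnitude, which is why the median figure should be treated as a very rough midpoint rather than a consensus.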

Some researchers use traversed edges per second (TEPS) instead of floating point operations. TEPS measures a computer's ability to communicate information internally; the human brain's communication performance can also be estimated in TEPS, allowing a meaningful comparison between brains and computers. These researchers estimate that the human brain performs around 0.18 – 6.4 × 10^14 TEPS, which is within roughly an order of magnitude of the best existing supercomputers in 2015.
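To see how that comparison falls out of the numbers, the sketch below uses roughly 2 × 10^13 TEPS as a stand-in for a leading Graph 500 machine circa 2015; that supercomputer figure is an assumption for illustration, not a number from the source.

```python
# Brain communication estimate quoted above, in TEPS.
brain_low, brain_high = 0.18e14, 6.4e14

# Assumption for illustration: a leading Graph 500 supercomputer
# around 2015 delivered on the order of 2e13 TEPS.
supercomputer = 2e13

print(f"brain / supercomputer: {brain_low / supercomputer:.1f}x "
      f"to {brain_high / supercomputer:.1f}x")
# -> roughly 0.9x to 32x, i.e. comparable to, or within about an
#    order of magnitude of, 2015-era machines.
```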

At current prices for TEPS, they estimate that it costs around $4,700 – $170,000 per hour to perform at the level of the brain. Their best guess is that 'human-level' TEPS performance will cost less than $100 per hour in seven to fourteen years, and that it would take about eight more years beyond that to reach $1 per hour.
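Those projections imply a fairly specific rate of price decline. Assuming a smooth exponential fall in price per TEPS (an assumption, not something stated in the source), the sketch below backs out the implied halving time from the figures quoted above.

```python
import math

def implied_halving_time(cost_now, cost_later, years):
    """Years for cost to halve, if it falls exponentially from
    cost_now to cost_later over the given number of years."""
    annual_factor = (cost_now / cost_later) ** (1 / years)
    return math.log(2) / math.log(annual_factor)

# Low end of today's range ($4,700/hour) falling to $100/hour:
print(implied_halving_time(4_700, 100, 7))   # ~1.3 years per halving
print(implied_halving_time(4_700, 100, 14))  # ~2.5 years per halving

# $100/hour falling to $1/hour over the following ~8 years:
print(implied_halving_time(100, 1, 8))       # ~1.2 years per halving
```

A halving time of roughly one to two and a half years is broadly in line with historical price-performance trends in computing, so the projection amounts to assuming those trends continue.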

Molecular mechanical nanocomputer designs are theoretically 100 billion to 100 trillion times more energy efficient than today's supercomputers.

Ralph Merkle, Robert Freitas and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Removing the need for gears, clutches, switches, and springs makes the design easier to build.

Existing designs for mechanical computing can be vastly improved upon in terms of the number of parts required to implement a complete computational system. Only two types of parts are required: links and rotary joints. Links are simply stiff, beam-like structures; rotary joints are joints that allow rotational movement in a single plane.

Figure: a molecular model of a diamond-based lock, ¾ view
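Purely to illustrate the "only two part types" idea above, here is a toy structural model in Python. The class names and the example four-bar-style linkage are invented for illustration and do not reproduce Merkle and Freitas's actual design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RotaryJoint:
    """A joint permitting rotation in a single plane."""
    angle_degrees: float = 0.0

@dataclass
class Link:
    """A stiff, beam-like structure connecting two rotary joints."""
    a: RotaryJoint
    b: RotaryJoint
    length_nm: float = 1.0

@dataclass
class Mechanism:
    """A mechanism assembled from only the two allowed part types."""
    joints: List[RotaryJoint] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

# A hypothetical four-bar-style linkage built from the two parts.
joints = [RotaryJoint() for _ in range(4)]
mechanism = Mechanism(
    joints=joints,
    links=[Link(joints[i], joints[(i + 1) % 4]) for i in range(4)],
)
print(len(mechanism.joints), "joints,", len(mechanism.links), "links")
```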

Brain-equivalent computing on such nanocomputer hardware would cost roughly 1 million to 1 billion times less than the $1/hour level.
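Assuming energy is the dominant cost of running such hardware (an assumption; the cost model is not spelled out above), the quoted million-to-billion-fold reduction turns the $1/hour figure into an almost negligible yearly cost:

```python
# Baseline from the projection above: $1/hour for human-level TEPS.
baseline_per_hour = 1.0

# Claimed cost reduction from molecular mechanical computing.
reductions = (1e6, 1e9)

hours_per_year = 24 * 365
for r in reductions:
    per_hour = baseline_per_hour / r
    per_year = per_hour * hours_per_year
    print(f"{r:.0e}x cheaper: ${per_hour:.1e}/hour, ${per_year:.2e}/year")
# -> about $0.009/year at 1e6x, and roughly $0.000009/year at 1e9x,
#    for brain-equivalent communication performance.
```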

So a brute-force technological acceleration looks likely in the 15-to-35-year timeframe. We will at least have some improvements on deep learning and reinforcement learning, and substantial trillion-plus-qubit general-purpose quantum computers and all-optical computers will also be available.

What would we get in terms of artificial intelligence if insight into algorithms lags the hardware? With really good hardware, how much can performance and capabilities lag?