Kurzweil Responds to Paul Allen: Don’t Underestimate the Singularity

Last week, Paul Allen and a colleague challenged the prediction that computers will soon exceed human intelligence. Now Ray Kurzweil, the leading proponent of the “Singularity,” offers a rebuttal.

Ray Kurzweil believes that Paul Allen did not read his book, The Singularity Is Near, because that book contains the details and evidence of Ray’s case for the Singularity, which go unaddressed in Paul Allen’s article.

I discussed the Paul Allen article and my general view of the Singularity last week.

I think artificial intelligence far greater than human intelligence is possible. I think the early versions will require a vast excess of computing hardware relative to the computing power of the human brain. These artificial intelligences will outperform humans on tasks and capabilities by having billions or trillions of times more hardware computing capability. This will also make them easier to program, because developers can afford to waste compute cycles and memory to achieve the desired results. I am very confident that the hardware improvements will continue.

AGI hardware does not have to be the same size as the human brain.
AGI can use more power than the 20 watts the human brain uses.

Hot air balloons, blimps, airplanes, rockets did not have to mimic birds. All that mattered was the performance metrics and capabilities. We have only recently developed some flying machines (small UAVs) that mimic the flapping of bird or insect wings.

Also, an artificial general intelligence can still have specializations and optimizations, and we can have more than one AGI working together, just as we have more than one human working together.

There are no limits to the creativity or workarounds that can be used to allow AGI to outperform human brains.

Allen writes that “the Law of Accelerating Returns (LOAR). . . is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classic example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk, so by definition we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price-performance and capacity, nonetheless follows remarkably predictable paths.
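To make the analogy concrete, here is a minimal Python sketch (my illustration, not from Kurzweil’s text): each simulated particle follows an unpredictable random walk, yet the statistics of the whole ensemble come out close to theory.

```python
import random

def final_position(steps):
    """Final displacement of a 1-D random walk: each step is +1 or -1."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

STEPS, PARTICLES = 1_000, 2_000  # illustrative sizes, kept small to run fast
positions = [final_position(STEPS) for _ in range(PARTICLES)]

mean_disp = sum(positions) / PARTICLES
mean_sq_disp = sum(p * p for p in positions) / PARTICLES

# No single walk is predictable, but the ensemble obeys tight laws:
# mean displacement -> 0, mean squared displacement -> number of steps.
print(f"mean displacement:         {mean_disp:+.2f} (theory: 0)")
print(f"mean squared displacement: {mean_sq_disp:,.0f} (theory: {STEPS:,})")
```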

If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.

Ray explains the context of Moore’s law as one of many paradigms

Allen writes that “these ‘laws’ work until they don’t.”

Ray responds that as the end of each particular paradigm became clear, research pressure grew for the next paradigm.

The technology of transistors kept the underlying exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law): the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, computing in three dimensions, to continue the exponential improvement in price-performance. Intel projects that three-dimensional chips will be mainstream by the teen years; three-dimensional transistors and three-dimensional memory chips have already been introduced.

This sixth paradigm [3 dimensional structures for computer hardware] will keep the LOAR going with regard to computer price performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.
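As a back-of-the-envelope check (my arithmetic, not a figure from Kurzweil’s reply): “trillions of times” more price-performance requires about 40 doublings, and at an assumed doubling time of 18 months that takes roughly 60 years, which is indeed “later in this century.”

```python
import math

target_factor = 1e12       # "trillions of times more powerful"
doubling_time_years = 1.5  # assumed Moore's-law-like pace, for illustration

doublings = math.log2(target_factor)     # ~39.9 doublings
years = doublings * doubling_time_years  # ~60 years

print(f"doublings needed: {doublings:.1f}")
print(f"years at {doubling_time_years}-year doublings: {years:.0f}")
```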

Ray defends software and algorithm speedup

Ray then defends software and algorithm improvement, citing the “Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology” by the President’s Council of Advisors on Science and Technology:

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade … Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
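The quoted figures are internally consistent, as a quick check shows (my verification, not part of the report): 82 years is about 43 million minutes, and the hardware and algorithm factors multiply to the same total.

```python
# 82 years expressed in minutes, versus the report's multiplicative split.
minutes_in_82_years = 82 * 365.25 * 24 * 60
print(f"82 years in minutes: {minutes_in_82_years:,.0f}")  # ~43.1 million

hardware_factor = 1_000    # faster processors
algorithm_factor = 43_000  # better linear programming algorithms
print(f"combined speedup: {hardware_factor * algorithm_factor:,}")  # 43 million
```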

Ray explains we do not need to understand and copy every part of the brain

Allen writes: “Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain, every individual structure and neural circuit has been individually refined by evolution and environmental factors.”

Allen’s statement that every structure and neural circuit is unique is simply impossible: it would mean that the design of the brain would require hundreds of trillions of bytes of information, yet the brain’s design is specified by the genome, which contains on the order of only tens of millions of bytes of design information after lossless compression.
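The rough arithmetic behind that claim, using commonly cited estimates rather than figures from the article, looks like this:

```python
# If every one of the brain's connections were individually designed, the
# design spec would dwarf the genome that actually encodes the brain.
connections = 100e12       # ~10^14 synaptic connections (common estimate)
bytes_per_connection = 1   # even one byte each, as a generous lower bound
design_bytes = connections * bytes_per_connection

genome_bytes = 800e6       # human genome, roughly 800 MB uncompressed

print(f"unique-design spec: ~{design_bytes / 1e12:.0f} TB")
print(f"entire genome:      ~{genome_bytes / 1e6:.0f} MB")
# The design must therefore rely on massive repetition and self-organization.
```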

Ray says we only need to understand some of it, leverage the brain’s redundancy, and find other ways to do things. We use biology as inspiration to help get around the issues we face in developing artificial systems.

Allen’s “complexity brake” confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don’t need to re-create or simulate every organelle in every pancreatic islet cell. You would want, instead, to fully understand one islet cell, abstract its basic functionality, and then extend that to a large group of such cells. This algorithm is well understood with regard to islet cells, and artificial pancreases that utilize this functional model are now being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions.
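A toy sketch of that strategy in Python (entirely my illustration; the dose-response numbers are made up): model one cell’s functional behavior, then scale it across a population without simulating any organelles.

```python
class IsletCellModel:
    """Functional abstraction of one beta cell: glucose in, insulin out."""

    THRESHOLD = 5.0  # hypothetical glucose threshold (mmol/L)
    GAIN = 0.8       # hypothetical response slope

    def insulin_release(self, glucose):
        # Placeholder linear-above-threshold response; real models differ.
        return max(0.0, glucose - self.THRESHOLD) * self.GAIN

def pancreas_response(glucose, n_cells=1_000_000):
    """Massive repetition of one well-understood functional unit."""
    return IsletCellModel().insulin_release(glucose) * n_cells

print(pancreas_response(7.2))  # aggregate output for elevated glucose
```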

Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain “bottom up” without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.

The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience. The Google self-driving cars (which have driven over 140,000 miles through California cities and towns) learn from their own driving experience as well as from Google cars driven by human drivers. As I mentioned, Watson learned most of its knowledge by reading on its own.

It is true that Watson is not quite at human levels in its ability to understand human language (if it were, we would be at the Turing test level now), yet it was able to defeat the best humans. This is because of the inherent speed and reliability of memory that computers have. So when a computer does reach human levels, which I believe will happen by the end of the 2020s, it will be able to go out on the Web and read billions of pages as well as have experiences in online virtual worlds. Combining human-level pattern recognition with the inherent speed and accuracy of computers will be very powerful. But this is not an alien invasion of intelligent machines—we create these tools to make ourselves smarter. I think Allen will agree with me that this is what is unique about the human species: we build these tools to extend our own reach.


1 thought on “Kurzweil Responds to Paul Allen: Don’t Underestimate the Singularity”

  1. Hi Brian. Nice article.

    I myself am wishful for something like the Singularity, for life extension purposes. I am fascinated by the back-and-forth discussion here. When I read P. Allen’s criticisms, I found myself agreeing with him. I figured much of his objection – that the human brain is extremely complex and that the modelling of neuronal connections was merely scratching the surface, combined with the stop-and-start nature of scientific understanding – was rather insurmountable. I felt Ray had no way to respond.

    However, I am quite buoyed by RK’s response. I feel [as far as his response to PA is concerned] he is correct that it is unimportant to understand every aspect of the brain’s biological operation to gain insight about it that can be leveraged when designing machines to replicate its effects. And these “effects” on machines, thanks to their near limitless speed, accuracy, information trawling and indefatigability, will produce astonishing results at impressive speed and scale.

    I know this is a 10-year-old post, but I wonder what you [and others] think of Tim Dettmers’ post below, and whether you have read it:

    https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

    He explains the complexities involved quite thoroughly.

    I would be interested in your responses.
