A pro-Intelligence Explosion case from Richard Loosemore and Ben Goertzel

One of the earliest incarnations of the contemporary Singularity concept was I.J. Good’s notion of the “intelligence explosion,” articulated in 1965: the idea that once a machine can design machines more intelligent than itself, a process of recursively self-improving intelligences is set in motion.

Anders Sandberg offered a partial list of potential bottlenecks:

1. Economic growth rate
2. Investment availability
3. Gathering of empirical information (experimentation, interacting with an environment)
4. Software complexity
5. Hardware demands vs. available hardware
6. Bandwidth
7. Lightspeed lags

Clearly many more can be suggested. But which bottlenecks are the most limiting, and how can this be ascertained?

Richard Loosemore and Ben Goertzel try to address the seven listed points.

What Constitutes an “Explosion”?

We, like Good, are primarily interested in the explosion from human-level AGI to an AGI with, very loosely speaking, a level of general intelligence 2-3 orders of magnitude greater than the human level (say, 100H or 1,000H, using 1H to denote human-level general intelligence). This is not because we are necessarily skeptical of the explosion continuing beyond such a point, but rather because pursuing the notion beyond that seems a stretch of humanity’s current intellectual framework.

In a 1,000H world, AGI scientists could go from high-school knowledge of physics to the invention of relativity in a single day (assuming, for the moment, that the factor of 1,000 was all in the speed of thought—an assumption we will examine in more detail later).
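As a rough illustration of what a purely speed-based gain would mean, here is a minimal back-of-the-envelope sketch; the 1,000x figure and the assumption that it all shows up as faster thought are taken from the thought experiment above, not from any measured result.

```python
# Back-of-the-envelope: how much subjective "thinking time" a speed-only
# 1,000H AGI would experience per calendar day, assuming the whole factor
# shows up as faster thought (an illustrative assumption, not a claim).

SPEEDUP = 1_000          # hypothetical speed-of-thought multiplier (1,000H)
HOURS_PER_DAY = 24

subjective_hours_per_day = SPEEDUP * HOURS_PER_DAY
subjective_years_per_day = subjective_hours_per_day / (24 * 365)

print(f"Subjective hours per calendar day: {subjective_hours_per_day:,}")
print(f"Subjective years per calendar day: {subjective_years_per_day:.2f}")
# ~24,000 subjective hours, i.e. roughly 2.7 subjective years per calendar
# day -- enough, in principle, for the high-school-to-relativity leap above.
```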

Defining Intelligence (Or Not)

Because “intelligence explosion” is a qualitative concept, we believe the commonsense qualitative understanding of intelligence suffices. We can address Sandberg’s potential bottlenecks in some detail without needing a precise measure, and we believe that little is lost by avoiding the issue.

1. Economic growth rate and investment availability

Since the majority of financial trading on the US markets is now driven by program trading systems, it is likely that such AGI technology would rapidly become indispensable to the finance industry (typically an early adopter of any software or AI innovations). Military and espionage establishments would very likely also find a host of practical applications for such technology.

Even in a country with no economic growth, or one in recession, there would be advantages to investing in AGI once a reasonably powerful system, even a pre-human-level one, had been developed.

2. Inherent Slowness of Experiments and Environmental Interaction

We do not have concrete reasons to believe that this will be a fundamental limit that stops the intelligence explosion from taking an AGI from H (human-level general intelligence) to (say) 1,000 H. Increases in speed within that range (for computer hardware, for example) are already expected, even without large numbers of AGI systems helping out, so it would seem that physical limits, by themselves, would be very unlikely to stop an explosion from 1H to 1,000 H.

3. Software Complexity

This seems implausible as a limiting factor, because the AGI could always leave the software alone and develop faster hardware. So long as the AGI can find a substrate that gives it a thousand-fold increase in clock speed, we have the possibility for a significant intelligence explosion.

Arguing that software complexity will stop the first self-understanding, human-level AGI from being built is a different matter. It may stop an intelligence explosion from happening by stopping the precursor events, but we take that to be a different type of question. As we explained earlier, one premise of the present analysis is that an AGI can actually be built. It would take more space than is available here to properly address that question.

Even if software complexity remains a severe difficulty for a self-understanding, human-level AGI system, we can always fall back on arguments based on clock speed [hardware].

4. Hardware Requirements

If the first AGI had to be implemented on a supercomputer, that would make it hard to replicate the AGI on a huge scale, and the intelligence explosion would be slowed down because the replication rate would play a strong role in determining the intelligence-production rate. However, as time went on, the rate of replication would grow as hardware costs declined.
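To make the replication-rate argument concrete, here is a hedged toy model; the initial cost, budget, and cost-halving time are invented for illustration and do not come from the paper.

```python
# Toy model: how many AGI-capable machines a fixed annual hardware budget buys
# as costs fall. All numbers below are illustrative assumptions, not data.

INITIAL_COST = 100e6      # assume the first AGI needs ~$100M of supercomputer
ANNUAL_BUDGET = 1e9       # assume $1B/year is spent on AGI hardware
HALVING_TIME_YEARS = 2.0  # assume cost per unit of compute halves every 2 years

for year in range(0, 11, 2):
    cost = INITIAL_COST * 0.5 ** (year / HALVING_TIME_YEARS)
    copies_per_year = ANNUAL_BUDGET / cost
    print(f"Year {year:2d}: cost/copy ${cost/1e6:7.1f}M -> ~{copies_per_year:8.0f} copies/yr")
# The replication rate grows geometrically even though the budget is flat,
# which is the sense in which falling hardware costs stop being a bottleneck.
```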

5. Bandwidth

AGIs could communicate with one another using high-bandwidth channels. This is inter-AGI bandwidth, and it is one of the two types of bandwidth factors that could affect the intelligence explosion.

Quite apart from the communication speed between AGI systems, there might also be bandwidth limits inside a single AGI, which could make it difficult to augment the intelligence of a single system. This is intra-AGI bandwidth.

The first one—inter-AGI bandwidth—is unlikely to have a strong impact on an intelligence explosion because there are so many research issues that can be split into separately addressable components. Bandwidth limits between the AGIs would only become apparent if we started to notice AGIs sitting around with no work to do on the intelligence amplification project, because they had reached an unavoidable stopping point and were waiting for other AGIs to get a free channel to talk to them. Given the number of different aspects of intelligence and computation that could be improved, this idea seems profoundly unlikely.
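One way to see why separable research tasks make inter-AGI bandwidth a weak constraint is Amdahl's-law-style reasoning. The sketch below uses assumed parallel fractions, not figures from the paper: communication and coordination only dominate when very little of the work is independently addressable.

```python
# Amdahl's-law-style sketch: effective speedup from N cooperating AGIs when a
# fraction `p` of the intelligence-amplification work can be split into
# separately addressable pieces. The fractions are illustrative assumptions.

def effective_speedup(n_agis: int, parallel_fraction: float) -> float:
    """Classic Amdahl's law: the serial part is the coordination bottleneck."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_agis)

for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f}: 1,000 AGIs give ~{effective_speedup(1000, p):6.1f}x")
# Only when most of the work is inherently serial (AGIs idle, waiting on one
# another) does communication dominate -- the situation the authors call
# "profoundly unlikely" given how many aspects of intelligence can be
# improved independently.
```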

Intra-AGI bandwidth is another matter. One example of a situation in which internal bandwidth could be a limiting factor would be if the AGI’s working memory capacity were dependent on the need for total connectivity—everything connected to everything else—in a critical component of the system. In that case, we might find that we could not boost working memory very much in an AGI because the bandwidth requirements would increase explosively. This kind of restriction on the design of working memory might have a significant effect on the system’s depth of thought.
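The total-connectivity worry can be made concrete with a little counting: if every working-memory element must talk to every other, the number of channels grows quadratically. The element counts below are arbitrary illustrations and assume nothing about any actual AGI design.

```python
# If a working memory of n elements requires all-to-all connectivity, the
# number of pairwise channels grows as n*(n-1)/2 -- quadratically in n.

def channels(n_elements: int) -> int:
    return n_elements * (n_elements - 1) // 2

for n in (7, 70, 700, 7000):
    print(f"{n:5d} working-memory elements -> {channels(n):>12,} channels")
# A 100x larger working memory needs ~10,000x more internal bandwidth under
# this assumption, which is why intra-AGI bandwidth could cap "depth of
# thought" even if raw clock speed keeps rising.
```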

6. Lightspeed Lags

Nature was forced to use the pipes-and-ion-channels approach, which leaves us with plenty of scope for speeding things up using silicon and copper (and this is quite apart from all the other, more exotic computing substrates that are now on the horizon). If we were simply to make a transition from membrane depolarization waves to silicon and copper, and if this produced a 1,000x speedup (a conservative estimate, given the intrinsic difference between the two forms of signalling), this would be an explosion worthy of the name.
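As a sanity check on the claim that 1,000x is conservative, here is a rough comparison using ballpark textbook figures; axonal conduction velocities and neural firing rates vary widely, so the numbers are order-of-magnitude assumptions rather than anything drawn from the paper.

```python
# Order-of-magnitude comparison of biological vs. electronic signalling.
# All figures are rough ballparks used purely for illustration.

AXON_SPEED_M_S = 100.0        # fast myelinated axons: roughly 1-100 m/s
COPPER_SIGNAL_M_S = 2.0e8     # electrical signal in copper: ~2/3 of lightspeed
NEURON_FIRE_RATE_HZ = 1_000.0 # neurons rarely exceed ~1 kHz firing rates
CHIP_CLOCK_HZ = 3.0e9         # commodity processors run at a few GHz

print(f"Signal-propagation ratio: ~{COPPER_SIGNAL_M_S / AXON_SPEED_M_S:,.0f}x")
print(f"Switching-rate ratio:     ~{CHIP_CLOCK_HZ / NEURON_FIRE_RATE_HZ:,.0f}x")
# Both ratios land in the millions, so a mere 1,000x speedup from moving to
# silicon and copper is, as the authors say, a conservative estimate.
```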

7. Human-Level Intelligence May Require Quantum (or More Exotic) Computing

There is currently no evidence that the human brain operates as a quantum computer. Of course the brain has quantum mechanics at its underpinnings, but there is no evidence that it displays quantum coherence at the levels directly relevant to human intelligent behavior. In fact, our current understanding of physics implies that this is unlikely, since quantum coherence has not yet been observed in any similarly large and “wet” system. Furthermore, even if the human brain were shown to rely to some extent on quantum computing, this would not imply that quantum computing is necessary for human-level intelligence.

Their Conclusion

There is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue.

The operative definition of “intelligence explosion” that we have assumed here involves an increase in the speed of thought (and perhaps also the “depth of thought”) of about two or three orders of magnitude. If someone were to insist that a real intelligence explosion had to involve million-fold or trillion-fold increases in intelligence, we think that no amount of analysis, at this stage, could yield sensible conclusions.

[Widespread] AGI with intelligence = 1,000H might well cause the next thousand years of new science and technology to arrive in one year.
