Quantifying and defining hard versus soft takeoff of AGI

There was a survey of experts who work or have worked in the field of artificial general intelligence, along with people of some prominence as futurists. They were asked to predict whether artificial general intelligence (AGI) would have a hard takeoff or a soft takeoff. However, there is no quantification or solid definition of hard versus soft.

There are some definitions at LessWrong, though they remain imprecise.

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be because the learning algorithm is too demanding for the hardware, or because the AI relies on feedback from the real world that would have to play out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are whole brain emulation, biological cognitive enhancement, and software-based strong AGI.

A hard takeoff refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e., Unfriendly AI). It is one of the main ideas supporting the intelligence explosion hypothesis.

It would appear that a takeoff of less than one year is hard and a takeoff of over one year is soft, although some would call a one-to-ten-year takeoff semihard, reserving "soft" for over ten years and "hard" for under one year.

So we can quantify the timeframes with levels that each span an order of magnitude, as sketched in the code after the list.

Hard takeoff level 4: 0.0001 to 0.001 years
Hard takeoff level 3: 0.001 to 0.01 years
Hard takeoff level 2: 0.01 to 0.1 years
Hard takeoff level 1: 0.1 to 1 year
Semihard: 1 to 10 years
Soft: 10 to 100 years
Supersoft: 100 to 1,000 years
Glacial: 1,000 to 10,000 years
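As a minimal sketch in Python (the function name and the use of logarithms are my own; the thresholds come straight from the list above), these levels could be computed from a takeoff duration:

```python
import math

def takeoff_level(duration_years: float) -> str:
    """Classify a takeoff duration (in years) into the
    order-of-magnitude levels listed above."""
    if duration_years <= 0:
        raise ValueError("duration must be positive")
    if duration_years < 1.0:
        # Hard level 1 covers 0.1-1.0 years, level 2 covers
        # 0.01-0.1 years, and so on; the scheme caps at level 4.
        level = min(4, math.ceil(-math.log10(duration_years)))
        return f"Hard takeoff level {level}"
    if duration_years < 10:
        return "Semihard"
    if duration_years < 100:
        return "Soft"
    if duration_years < 1000:
        return "Supersoft"
    return "Glacial"

print(takeoff_level(0.02))  # Hard takeoff level 2 (roughly a week)
print(takeoff_level(5))     # Semihard
print(takeoff_level(500))   # Supersoft
```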

Then we need to define the start and end points and ways to measure the magnitude of the change.

LessWrong goes into some aspects of this in its discussion of recursive self-improvement and cascades.

Resource overhangs: rather than resources growing incrementally by reinvestment, there is a big bucket of resources behind a locked door, and once you unlock the door you can walk in and take them all.

Cascades are when one development leads the way to another – for example, once you discover gravity, you might find it easier to understand a coiled spring.

Cycles are feedback loops where a process's output becomes its input on the next round. As the classic example of a fission chain reaction illustrates, a cycle whose underlying processes are continuous may show qualitative changes of surface behavior – a threshold of criticality – the difference between each neutron leading to the emission of 0.9994 additional neutrons versus each neutron leading to the emission of 1.0006 additional neutrons. k is the effective neutron multiplication factor, and I will use it metaphorically.
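A quick numeric illustration of that threshold (the multiplication factors come from the example above; the generation count is arbitrary):

```python
# Growth of a self-amplifying process at two multiplication factors
# that straddle criticality (k = 1), echoing the fission analogy.
for k in (0.9994, 1.0006):
    population = 1.0
    for _ in range(10_000):
        population *= k
    print(f"k = {k}: population after 10,000 generations = {population:.4f}")
# k = 0.9994 decays to roughly 0.0025, while k = 1.0006 grows to
# roughly 400: a ~0.12% difference in k yields a qualitative
# difference in outcome.
```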

Recursion is the sort of thing that happens when you hand the AI the object-level problem of “redesign your own cognitive algorithms”.

“Optimization slope” is the goodness and number of opportunities in the volume of solution space you’re currently exploring, on whatever your problem is;

“Optimization resources” is how much computing power, sensory bandwidth, trials, etc. you have available to explore opportunities;

“Optimization efficiency” is how well you use your resources. This will be determined by the goodness of your current mind design – the point in mind design space that is your current self – along with its knowledge and metaknowledge.
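Taken together, these suggest a toy model: optimization power per step is roughly slope × resources × efficiency, and under recursion some of that power is reinvested into efficiency itself. A minimal sketch, assuming a simple multiplicative feedback (the function, its parameters, and the feedback constant are illustrative, not from the source):

```python
def recursive_improvement(slope: float, resources: float, efficiency: float,
                          feedback: float, steps: int) -> list[float]:
    """Toy model: each step's optimization power is slope * resources *
    efficiency; a fraction (feedback) of that power is reinvested into
    efficiency, crudely modeling recursive self-improvement."""
    capability = 0.0
    trajectory = []
    for _ in range(steps):
        power = slope * resources * efficiency
        capability += power
        efficiency += feedback * power  # the recursive step
        trajectory.append(capability)
    return trajectory

# With feedback = 0 the gains are linear; with feedback > 0 they
# compound, which is the qualitative difference the cycle/criticality
# analogy points at.
print(recursive_improvement(1.0, 1.0, 1.0, feedback=0.0, steps=5))
print(recursive_improvement(1.0, 1.0, 1.0, feedback=0.5, steps=5))
```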

Starting point problems

There is a problem with defining when to start the clock on the takeoff. There are forms of artificial intelligence now. Google is a company driven by search and advertising algorithms. There are people involved in the recursive improvement process, and the algorithm does not usually directly drive the creation of an improved successor, although big data and search have been used to enable some of the mapping and language translation applications. Moore's law for computer hardware improvement has some level of recursive improvement.

If the definition is that the starting point must be greater than or equal to human-level general intelligence, then this is difficult to quantify, and the breadth of the comparison – equivalence or superiority – needs defining.

The impact of the takeoff also depends on how far the intelligence explosion runs: how many orders of magnitude beyond the seed level does it go? To get beyond the resources of any corporation, the AGI needs to be part of controlling something with a valuation of over $1 trillion; to match a large nation state, it needs to reach $10 trillion; the global level is $100 trillion; and going beyond that is $1,000 trillion and more.
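Each tier of impact is another factor of ten in the resources under the AGI's control. A small sketch of that mapping (the tier labels paraphrase the paragraph above):

```python
# Orders of magnitude of economic control, per the thresholds above.
TIERS = [
    (1e12, "beyond any single corporation"),
    (1e13, "large nation state"),
    (1e14, "global level"),
    (1e15, "beyond the current global economy"),
]

def impact_tier(valuation_usd: float) -> str:
    """Return the highest tier whose threshold the valuation meets."""
    label = "sub-corporate"
    for threshold, name in TIERS:
        if valuation_usd >= threshold:
            label = name
    return label

print(impact_tier(3e12))  # beyond any single corporation
print(impact_tier(2e14))  # global level
```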

Hardest takeoff scenarios

The hardest takeoff scenarios presume that molecular nanotechnology or super-advanced printable electronic technology is available. The AGI needs to be able to quickly revamp its substrate.

Discussion of hard and soft takeoff is gathered at Wikipedia.

J. Storrs Hall believes that “many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process” in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.

Ben Goertzel agrees with Hall’s suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI’s talents might inspire companies and governments to disperse its software throughout society. The AI might buy out a country like Azerbaijan and use that as its base to build power and improve its algorithms. Goertzel is skeptical of a very hard, 5-minute takeoff but thinks a takeoff from human to superhuman level on the order of 5 years is reasonable. He calls this a “semihard takeoff”.

Nature of intelligence domains and capability

Many domains are limited. There is perfect play in checkers and chess if those games are completely solved. The human checkers champion Marion Tinsley made only about a dozen errors over twenty years of games, which was verified after checkers was solved. Many other knowledge domains are limited.

Some have discussed the need for superhuman intelligence to solve problems like viable commercial nuclear fusion, climate change, large-scale space access, and molecular nanotechnology. These are all solvable with human-level intelligence.

Potential near-term boosts in computer capability

Optalysys optical computing could provide a boost of hundreds to a million times in compute power over what would be expected from advanced CMOS computing.

Advanced quantum computers could provide a large boost in computing capability, and D-Wave systems are already being leveraged to enhance machine learning.

Deep learning and deep reinforcement learning are the current hot topics and are producing interesting results.

The Algorithmic Intelligence Quotient (AIQ) appears to be a suite of tests for rapidly measuring whether changes are improving or degrading a system.
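If such a suite behaves like an ordinary benchmark, comparing a system before and after a change might look like the following sketch (the environment class and agents here are hypothetical stand-ins; as I understand it, the real AIQ of Legg and Veness samples its environments from a reference machine):

```python
import random

class GuessEnv:
    """Hypothetical stand-in environment that rewards a correct guess."""
    def __init__(self, target: int, seed: int):
        self.target = target
        self.rng = random.Random(seed)

    def run(self, agent) -> float:
        return 1.0 if agent(self.rng.random()) == self.target else 0.0

def aiq_estimate(agent, environments, trials: int = 10) -> float:
    """Crude AIQ-style score: average reward across a fixed sample of
    environments, so two versions of a system can be compared directly."""
    total = sum(env.run(agent) for env in environments for _ in range(trials))
    return total / (len(environments) * trials)

envs = [GuessEnv(target=1, seed=s) for s in range(20)]
before = lambda obs: 0  # the system before a change
after = lambda obs: 1   # the system after a change
print(aiq_estimate(before, envs))  # 0.0
print(aiq_estimate(after, envs))   # 1.0 -- the change improved the system
```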

Molecular nanotechnology and quantum dot computing (quantum dot cellular automata) could enable new computing paradigms.

HP is investing heavily in memristors and optical on-chip communication, although this is a more modest improvement of one to two orders of magnitude.

SOURCES – Reducing Suffering, LessWrong, Wikipedia