Interview of Artificial General Intelligence Researcher Itamar Arel by Sander Olson

Here is an interview with Dr. Itamar Arel, conducted by Sander Olson. Dr. Arel runs the Machine Intelligence Lab at the University of Tennessee. He believes that “baby” AIs are possible within 3 years and that computers with human-level intelligence are feasible within ten. He is further convinced that the individual components necessary for AI have largely been developed, and that building an Artificial General Intelligence (AGI) should cost only $10-15 million.

Question 1: What were your thoughts on the Singularity Summit?

Answer: I consider it a complete success. There were twice as many attendees as in previous years, and the program was more practically oriented. This year’s conference was more focused, and that may explain the increased attendance.

Video: a clip of Dr. Arel speaking at the 2009 Singularity Summit.

Question 2: You made some rather provocative arguments at the Singularity Summit.

Answer: At the Singularity Summit, I argued that the technologies needed to drive Artificial General Intelligence (AGI) are readily available. Given sufficient funding, we could create “baby” AGIs within 3-5 years and human-level AGIs within a decade. At this point, what is needed is a focused engineering effort rather than a dramatic breakthrough.

Question 3: On what grounds do you base these bold predictions?

Answer: I run the Machine Intelligence Lab at the University of Tennessee. We focus on building intelligent machines, but we take a unique approach: we believe that general sentience rests on two critically important subsystems which, when combined, could give rise to general intelligence.

Question 4: Tell us more about these subsystems.

Answer: We call the first the situation inference subsystem: the agent infers the state of the world with which it interacts, using a technique known as deep machine learning. The inferred state information is then passed to a second subsystem, which maps situations to actions using reinforcement learning. Deep machine learning, which the situation inference subsystem requires, has only recently been developed and should be mature enough for this purpose within 3-5 years. So the pieces of the puzzle are already in place for us to make a quantum leap in thinking machines.
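To make the two-subsystem picture concrete, here is a minimal Python sketch of the perceive-infer-act-learn loop. This is not Arel’s actual system: the random-projection encoder, the dimensions, and the one-step temporal-difference update are illustrative stand-ins for a real deep learning front end and reinforcement learning engine.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Subsystem 1: situation inference ---
# A single random-projection layer with a nonlinearity stands in for the
# deep learning network that would compress raw observations into a
# compact "situation" code.
W_enc = rng.normal(size=(8, 32))          # 32-dim observation -> 8-dim state code

def infer_situation(observation):
    return np.tanh(W_enc @ observation)   # inferred state of the world

# --- Subsystem 2: decision making via reinforcement learning ---
# A linear Q-function trained with a one-step temporal-difference update
# stands in for the full RL engine that maps situations to actions.
n_actions, alpha, gamma, eps = 4, 0.1, 0.9, 0.1
W_q = np.zeros((n_actions, 8))            # action-value weights

def choose_action(state):
    if rng.random() < eps:                # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(W_q @ state))

def td_update(state, action, reward, next_state):
    target = reward + gamma * np.max(W_q @ next_state)
    W_q[action] += alpha * (target - (W_q @ state)[action]) * state

# One step of the perceive -> infer -> act -> learn loop:
obs, next_obs = rng.normal(size=32), rng.normal(size=32)
s = infer_situation(obs)
a = choose_action(s)
td_update(s, a, reward=1.0, next_state=infer_situation(next_obs))
```

The point of the split is that the reinforcement learner never sees raw observations; it operates only on the compact situation code produced by the inference subsystem.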

Question 5: So are the advances more software or hardware related?

Answer: It is actually both. The two subsystems – the inference engine and the decision-making engine – can be realized in software. But the great advances in VLSI technology now allow us to pack billions of transistors on a die and thus implement these systems in custom hardware. Each transistor can correspond to a synaptic gap, so we are within a few orders of magnitude of a mammalian brain.
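As a rough back-of-envelope check on the “few orders of magnitude” claim, the short script below compares transistor counts to synapse counts. The synapse figures are common textbook estimates, not numbers from the interview.

```python
import math

# Back-of-envelope comparison of transistors on one die vs. synapse counts.
# Synapse figures are rough textbook estimates, not numbers from the interview.
transistors_per_die = 2e9        # order of a circa-2009 billion-transistor chip

synapse_estimates = {
    "mouse brain": 1e11,
    "human brain": 1e14,
}

for brain, synapses in synapse_estimates.items():
    gap = math.log10(synapses / transistors_per_die)
    print(f"{brain}: ~{gap:.1f} orders of magnitude more synapses than one die")
```

Under these assumptions a single die is roughly two orders of magnitude short of a mouse brain and about five short of a human brain, which is consistent with the “few orders of magnitude from a mammalian brain” framing.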

Question 6: You have argued that the Turing Test is an insufficient guide to assessing true intelligence. What would a computer need to do in order to convince you that it was both sentient and intelligent?

Answer: I am co-chairing a workshop that explicitly deals with establishing an AGI roadmap. This is a first-of-its-kind effort designed to generate a roadmap that wouldn’t simply strive to create a machine that passes the Turing test. Rather, we want to devise a better way of evaluating machine intelligence: presenting an increasingly challenging series of tasks to a computer. A computer that passed all of these tests would be deemed to have human-level intelligence.

Question 7: What role are you playing in creating this AGI roadmap?

Answer: I am the co-organizer of the conference, along with Ben Goertzel. The term “AI” now pertains to narrow AI – AI that performs specific tasks well. By contrast, AGI systems would need to exhibit broad intelligence, quickly learn new tasks, and readily adapt to an unstructured environment. The goal of the AGI conferences is to specify and codify a series of metrics by which we could measure AGI performance, along with the challenges that must be surmounted to create a truly intelligent machine.

Question 8: Is reverse-engineering the human brain a necessary prerequisite for AGI?

Answer: There are two schools of thought on this subject. One advocates reverse-engineering the brain as a necessary precursor to creating a sentient machine; this is often referred to as “whole brain emulation”. The other argues that replicating the human brain is an unnecessary task that would take decades. I agree with the latter – there are quicker and easier ways to impart intelligence to a machine.

Question 9: Much AI involves giving “weights” to values. But how useful is such a system in the real world?

Answer: An AGI system will need to interact with its environment in much the same manner as humans do. There will probably need to be some sort of positive feedback mechanism, so we will need to discover a way to give a computer a “rush” from doing a task correctly. This reward system will need to be both internal and external.
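One common way to realize such a dual reward signal is to add an internally generated “curiosity” bonus, such as prediction error, to the external task reward. The sketch below is a generic illustration of that idea, not a description of Arel’s design; the linear predictor, learning rate, and dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simple learned predictor of the next observation. Its prediction error
# serves as an internal reward (the "rush" the agent gets from novelty),
# which is added to the external reward supplied by the environment.
dim = 16
W_pred = np.zeros((dim, dim))   # linear next-observation predictor
lr, beta = 0.02, 0.5            # predictor learning rate, intrinsic-reward weight

def combined_reward(obs, next_obs, external_reward):
    global W_pred
    error = next_obs - W_pred @ obs
    intrinsic = float(np.mean(error ** 2))      # internal reward: surprise
    W_pred += lr * np.outer(error, obs)         # predictor improves with experience
    return external_reward + beta * intrinsic   # internal + external signal

# A novel transition yields a large combined reward at first; the internal
# component fades as the predictor learns the transition.
o1, o2 = rng.normal(size=dim), rng.normal(size=dim)
for _ in range(3):
    print(round(combined_reward(o1, o2, external_reward=1.0), 3))
```

Because the predictor improves with experience, the internal component of the reward fades for familiar situations, pushing the agent toward novelty while the external reward keeps it on task.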

Question 10: What funding levels would be required to bring about AGI within a decade?

Answer: Although I cannot provide a precise number, the funding requirements would be relatively modest. Assembling a team and equipping it with the necessary compute power could probably be done for $10 million.

Question 11: Are you troubled by critics of AI arguing that sentience is a quantum phenomenon?

Answer: I am not. I had a stimulating conversation with Stuart Hameroff, who was arguing for quantum effects associated with consciousness. He argues that there are synchronized regions of neural activity that lead to consciousness. But he and I agreed that it should be possible to emulate any such quantum effects with digital logic. So that was very encouraging for me.

Question 12: If you were given a petaflop supercomputer, could you create an AGI now?

Answer: The computational resources are actually readily available. We could probably achieve rudimentary AGI with a fairly modest cluster of servers. That is one of the main advantages of not trying to emulate the human brain – accurately simulating neurons and synapses requires prodigious quantities of compute power.

Question 13: Do you believe that AGI will quickly and necessarily lead to superintelligence?

Answer: At this point it isn’t clear how long the transition from human-level AI to greater-than-human intelligence will take. For many tasks, superintelligence simply isn’t needed. Still, it is logical to assume that once we achieve human-level AI, superintelligence will follow relatively quickly; the transition might simply be a matter of upgrading hardware.

Question 14: Assuming sufficient funding, how much progress do you anticipate by 2019?

Answer: With sufficient funding, I am confident that a breakthrough in AI could be demonstrated within 3 years. This breakthrough would result in the creation of a “baby” AI exhibiting rudimentary sentience and the reasoning capabilities of a three-year-old child. Once a “baby” AI is created, funding issues should essentially disappear, since it will be obvious at that point that AGI is finally within reach. So by 2019 we could see AGI equivalent to an adult human, and at that point it would only be a matter of time before superintelligent machines emerge.

FURTHER READING
AGI Roadmap wiki