Here is an interview with Ben Goertzel, conducted by Sander Olson, who has also interviewed Dr. Richard Nebel. This link is to all Sander Olson interviews on this site. Dr. Goertzel has a PhD in mathematics and is currently one of the world’s top Artificial General Intelligence (AGI) researchers. In this interview, Dr. Goertzel makes some noteworthy points:
– There is an 80% chance of creating a sentient AI within 10 years with proper funding. Adequate funding would be only $3 million per year. Even without this modest funding, Goertzel is still confident (80% probability) that AGI will arrive within 20 years.
– The pace of AI research is clearly accelerating: forums and conferences are occurring at an increasing pace, and corporations are increasingly interested in AI.
– Several industries, including robotics and search engines, could become major drivers of AI research in the next few years.
– Goertzel is working with J. Storrs Hall and others to create an artificial intelligence roadmap similar to the Foresight/Battelle roadmap unveiled last year. The creation of this roadmap will be challenging but should further spur AI development.
Note: An interview with Fusion researcher Eric Lerner will be completed shortly.
Question: Your company, Novamente, is doing research on Artificial General Intelligence (AGI). How is that research progressing?
Answer: AGI is a long-term pursuit aimed at achieving human-level and eventually superhuman-level intelligence. But given the enormousness of the challenge, you shouldn’t expect to be interviewing a superhuman AI anytime in the next several years. However, we do believe that we are developing a system that has a reasonable chance of achieving human-level intelligence and sentience within the next decade, especially if our project gets properly funded.
Question: Tell us about the Novamente Cognition Engine (NCE). Does it really possess the ability to improve itself?
Answer: There is a fine line between learning and self-improvement – to an extent any life form or system that learns is capable of improving itself. The cognition engine differentiates itself from most current AI systems because it changes its knowledge and its strategies as it progresses. But it doesn’t yet have the power of full self-improvement, in the sense of being able to rewrite all its own code according to its own ideas. That will come a little later!
Question: The NCE already controls a dog in Second Life. How long before it controls a convincing human?
Answer: It is already possible to create an AI bot that can pass for human in many casual online interactions, but that is not our objective. Rather, we are striving to create a program that is genuinely intelligent and which would pass rigorous testing. We are years from that goal. How many years it takes will depend on funding and the effectiveness of our algorithms. But this project should be doable within a decade, given adequate funding.
Question: What funding levels do you consider adequate?
Answer: We are capable of operating on a shoestring budget, so an annual budget of $3 million should be sufficient, including staff and hardware. Maybe even less; depending on the hardware requirements, which aren’t yet fully clear, we might be able to make do with half that. The only input costs are skilled labor, perhaps a dozen researchers, and sufficiently large and capable server farms. Finding a source of funding for high-risk, long-term research is a continuing challenge. But AGI really could be developed on a shoestring budget, relative to a lot of other technologies.
Question: You don’t believe that reverse engineering the brain is the quickest way to achieving AI. Why do you believe that other approaches are superior?
Answer: Ray Kurzweil’s argument that brain-scanning technologies will improve exponentially in the next few decades is plausible, but some of his exponential growth curves are more reliable than others. There simply aren’t enough data points to be able to extrapolate the future accuracy of brain scanning with a high degree of confidence. By contrast, Moore’s law and digital computing trends are more clearly established and will directly benefit all AI approaches. I think that AGI via brain scanning and emulation will work; I just think there’s a possibility of creating powerful AGI faster using other methods, like the ones we’re working on.
Question: J. Storrs Hall has argued that the hardware necessary for general intelligence already exists. Do you agree?
Answer: Yes. I would be surprised if one could run an AGI program on my MacBook, but I wouldn’t be surprised if Google’s or Amazon’s server farms have sufficient computational capacity to achieve human-level intelligence. Current AI researchers may be constrained by their lack of access to sufficient computer power, but Moore’s law will eventually eliminate that problem. Although better hardware always helps, the primary problem at this point is software and algorithms, not hardware.
Question: Is the pace of AGI research accelerating?
Answer: The pace is clearly accelerating. Ten years ago talks on general artificial intelligence at conferences were virtually nonexistent. Now conferences and symposia are springing up on AGI at an ever increasing pace. The biggest problem with AGI research at this point is funding. But with the increasingly broad interest that we’re seeing, increasing funding may come. Another problematic issue is the lack of metrics for measuring progress. How do you know when you are a quarter of the way to your goal? This is something I’m putting some effort into lately.
Question: What is the likelihood of the development of AGI in the next 10, 20, 30 years?
Answer: The answer to that depends largely on the resources that society dedicates to the problem. Assuming current funding levels, I would guess a 70% chance within the next twenty years, and a 98% chance of general AI occurring within the next 30 years. But with generous funding there is an 80% chance of creating AGI within the next ten years. And superhuman AIs will very likely emerge within a few years of human-level AIs.
Question: What is the single biggest impediment to AGI research?
Answer: Funding is currently the biggest impediment. The ideas already exist, but building complex software is a nontrivial task. Microsoft employs hundreds of coders to develop an operating system. By contrast we have a handful of engineers working on AGI. We would also benefit from having our own server farms, which are expensive to build and maintain.
Question: During the next decade, what will be the main driver of AI research?
Answer: At a certain point, service robotics is going to take off. If the robotics industry manages to solve the issues regarding low-level processing quickly enough – walking without falling, manual dexterity, object recognition, and so forth – then service robotics could become a major driver of AI research. Another major driver could be online search and question-answering applications, which would benefit enormously from natural language search. Another possible driver of AGI could be the finance industry, since that industry already makes extensive use of narrow AI systems – before long, some of the visionaries of the financial world may realize the overwhelming advantages of being able to utilize a general AI system.
Question: Will it be possible to achieve AGI by combining numerous narrow AI programs?
Answer: Although general intelligence systems might make use of narrow AI programs, something besides a combination of narrow AI programs needs to be involved in order to have true general intelligence. Even if you integrated a lot of great narrow AI programs, without the transfer of knowledge from one narrow-AI program to another, the system wouldn’t be able to derive insights and reason in a general way. And this kind of “transfer learning” is what general intelligence is all about. A grab-bag of narrow-AI algorithms would also lack a sense of self, which is a prerequisite of any true AGI.
Question: The Battelle/Foresight nanotechnology roadmap was recently unveiled. Is there a similar roadmap for AGI?
Answer: There isn’t yet, and we are trying to remedy that. A colleague and I are organizing a workshop for fall 2009 with the aim of formulating an AGI roadmap. This is an important step, since there are currently more approaches to AGI than AGI researchers, and the commonalities are sometimes obscured by the different terminologies different researchers use. J. Storrs Hall, one of the key formulators of the nanotechnology roadmap, is going to be involved with the AGI roadmap as well. The creation of an AGI roadmap should be a boon to the field of artificial intelligence.