Ben Goertzel is on a mission to build an “AI Toddler”

Ben Goertzel has been pondering questions of artificial intelligence since he was a teenager. During the past two decades, Goertzel has been developing and perfecting the OpenCog AI software system. Dr. Goertzel believes that, with sufficient resources and time, OpenCog could be used to create true machine intelligence. In an effort to expedite the development of OpenCog, Goertzel has launched a crowdfunding project on Indiegogo. This project has the ambitious aim of creating a “Sputnik moment” for the AI field by developing a robot with the approximate intelligence of a child. In an interview with Sander Olson for Next Big Future, Goertzel discusses how the field of AI has changed since 2009, how Artificial General Intelligence could be developed with relatively meager funding, and why the development of machine intelligence may be closer than most people believe.

Ben Goertzel interview (2013)

Question: I first interviewed you in 2009. How have things changed since then, in the world of AGI?

During the past four years, attitudes within the AGI community have changed dramatically. Many individuals and even some corporations now openly claim to be working on AGI-related projects. Narrow AI still commands the lion’s share of money, and it is still difficult for an AGI project to garner funding, in either academia or industry. But one can clearly see the burgeoning acceptance of the concept of AGI – and of the notion that general intelligence can be created, potentially in the fairly near term, without resorting to reverse-engineering the brain.

Various supporting technologies have also advanced a lot. Cloud computing is a lot cheaper and easier to use now, which helps AGI projects a lot. And robots get better each year – which has spurred me to start a collaboration with roboticists David Hanson and Mark Tilden, aimed at using OpenCog to control humanoid robots. My thinking is that if we can create an OpenCog-powered robot with the rough general intelligence of a human toddler, this will massively energize the AGI field and help turn a lot of the world’s resources toward AGI. Actually we’re running an Indiegogo campaign to help get more funds to accelerate that “robot toddler” project – so I’d like to call your readers’ attention to that.

Question: But have there been any important technical developments in AGI since 2009?

Although there haven’t been any dramatic breakthroughs in AGI during the past four years, there has been steady progress along many different fronts. To name just one, the concept of deep learning – hierarchical pattern recognition – has become much more popular: a number of deep learning algorithms have gotten more streamlined and have been scaled up, and there have been some high-profile demonstrations of the concept (Andrew Ng and Google’s demo of a simple deep learning neural net recognizing patterns in Google’s video trove, for instance). More and more deep learning applications have been deployed. In my own work, Itamar Arel’s DeSTIN deep learning system has been integrated with OpenCog, initially to help with computer vision, in the context of our new project using OpenCog with David Hanson’s humanoid robots.

Question: What about Watson? What role has IBM’s Watson computer played in advancing AGI?

Watson does not represent an advance for the AGI field. The techniques used to create Watson are familiar and well-established. Watson is basically a huge expert system with a knowledge base, fueled by information extraction.

However, it is a wonderful, large-scale, integrated software system making use of high-end hardware. And it did, I think, change the public’s perception of the AGI field.

For the AGI research community, a computer that could become a grandmaster at Go, or prove complex math theorems without close human guidance, would be more of an accomplishment than winning at Jeopardy. But for the general public, the image of Watson decisively beating Jennings and Rutter made for awesome publicity for IBM and for AGI in general.

Question: So OpenCog is comparable to Watson?

OpenCog, once properly developed, will be a completely different sort of animal than Watson. Watson is ultimately a one-trick pony – and an architecture that can be used to create a series of other one-trick ponies, with a lot of human energy and innovation required for each. For instance, to make a Watson-type system for the medical field, as IBM is now striving to do, is not mainly a matter of giving Watson medical data and having it learn. It involves a massive amount of domain-specific human effort. This is because Watson is not a general intelligence – it lacks the ability to generalize. But the whole point of an AGI architecture like OpenCog is that it will have the ability to achieve its goals in complex environments (like the real world) by GENERALIZING from its prior experience, rather than having to be specifically reprogrammed for each new kind of situation it has to deal with.

And we could develop OpenCog into a full-fledged AGI for a fraction of the resources that were deployed to create Watson. I’m happy that the OpenCog project does have some funding now, thanks to the Hong Kong government and my consulting company Novamente LLC, and donations from a few generous folks such as the Epstein Foundation. But compared to many other fields of research – like neuroscience or narrow AI – AGI projects as a whole now have very meager funding. It’s better than nothing, but the field badly needs a Sputnik moment.

Right now, we’re using OpenCog to control video game type characters in 3D virtual worlds. This may lead to commercial gaming applications eventually, but our main goal is just to have a relatively simple context in which to experiment with integrating all the different components of OpenCog. Our game world has tens of thousands of blocks, with trees and buildings and furniture and staircases and so forth all built out of blocks. This is an environment where we can prototype the interaction of the system’s core cognitive capabilities without as many nitty-gritty difficulties as one finds when dealing with robotics.
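To give a concrete sense of what such a block-world testbed looks like, here is a minimal sketch of a perception-action loop over a grid of blocks. It is purely illustrative Python under assumed, simplified interfaces – the BlockWorld and RandomAgent classes are hypothetical stand-ins, not OpenCog’s actual virtual-world code – but it shows the kind of simple environment in which an integrated cognitive system can be exercised end to end.

# Illustrative sketch only: a toy block-world perception-action loop of the kind
# described above, NOT OpenCog's actual virtual-world API. All class and function
# names here (BlockWorld, RandomAgent, etc.) are hypothetical stand-ins.

import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    x: int
    y: int
    z: int
    kind: str  # e.g. "tree", "wall", "stair"

class BlockWorld:
    """A minimal 3D grid of blocks that an agent can perceive and modify."""
    def __init__(self, blocks):
        self.blocks = {(b.x, b.y, b.z): b for b in blocks}

    def percept(self, pos, radius=2):
        """Return the blocks within `radius` of the agent's position."""
        x0, y0, z0 = pos
        return [b for (x, y, z), b in self.blocks.items()
                if abs(x - x0) <= radius and abs(y - y0) <= radius and abs(z - z0) <= radius]

    def remove(self, pos):
        """Dig out a block at the given coordinates, if one is present."""
        return self.blocks.pop(pos, None)

class RandomAgent:
    """Placeholder for the cognitive core: picks an action given current percepts."""
    def act(self, percepts):
        if percepts:
            target = random.choice(percepts)
            return ("remove", (target.x, target.y, target.z))
        return ("wait", None)

# One perception-action cycle per step: the kind of loop in which integrated
# cognitive components (perception, memory, action selection) can be exercised
# without any of the hardware difficulties of real robotics.
world = BlockWorld([Block(x, 0, z, "wall") for x in range(5) for z in range(3)])
agent = RandomAgent()
pos = (0, 0, 0)
for step in range(10):
    action, arg = agent.act(world.percept(pos))
    if action == "remove":
        world.remove(arg)

A real setup would of course have a far richer action repertoire and a full cognitive architecture in place of the random agent, but the structure of the loop is the same.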

Various OpenCog components are also being used for practical applications in various domains – but those are uses of the components on their own, not integrated into the core architecture. Biomind LLC is using OpenCog tools to analyze genetics data; Poulin Holdings is using them in projects for US government agencies; and so on.

But we haven’t yet built any application using all the ideas in the OpenCog design – the current codebase has maybe 40% of the stuff in the overall design. So the current achievements are still quite limited compared to what we believe will be possible. But step by step, we’re building out the system according to the overall design – we’re getting there.

Question: So how does the AGI community create a “Sputnik event”?

I have been working with David Hanson specifically to create such an event. It may take us a while yet, but it will happen.

In the last few years, commercially available robots have gotten much, much better. Hanson’s Robokind robots, which we’re working with, are one example, though not the only one. (The Aldebaran Nao robots have also gotten much more robust in the last few years, for example.) If we combine the robotic perception and movement advances made over the past five years with the lifelike faces that Hanson provides and the broad intelligence of OpenCog, we could create an event that fundamentally changes public perception regarding the near-term potential of AGI.

Question: What details can you provide regarding this robotic collaboration?

The robot we are using for our current OpenCog/robotics project is a Hanson Robokind. It uses standard servo motors for movement, and has eyes that saccade (move around) like human eyes. It has a humanlike face, with “frubbery” skin that is capable of extremely evocative emotional expressions. The main objective will be to give the OpenCog mind an actual embodiment. This robot, once we’ve done enough work with it, will have the approximate intelligence of a human toddler. And then once we’ve gotten to the toddler level, we will move on from that base of everyday commonsense intelligence and seek to make the robot smarter and smarter.

We have some funding for this project already, and we’ve been running an Indiegogo crowdfunding campaign aimed at rustling up a little more.

Question: So this Indiegogo campaign will be used to hire senior researchers to work on improving OpenCog for this robotic toddler?

We believe that we can fully develop OpenCog with a fraction of the developers that created Watson. Watson had 25 full-time employees for four years. We have five employees, and we need to hire some more people for the team. This should enable us to create what we’re thinking of as an AGI Sputnik event.

For now we’re looking at taking a single Robokind robot and giving it roughly the intelligence of a human toddler using OpenCog. Eventually we would like to collaborate with Hanson Robokind, Mark Tilden (of RoboSapien fame), and others to make better and better robot bodies along with increasingly intelligent robot minds.

Question: How quickly could this toddler AI develop into a human-adult level AGI?

Well, it’s hard to say with certainty. It’s always easier to say what’s possible than how long it will take. But I can tell you my educated guess. Once you have a robotic toddler, you are probably years rather than decades away from having an adult-level AI. Creating a 3-year-old involves solving many of the problems inherent in artificial intelligence. I’m fairly confident that, with reasonable funding and assuming no really strange hitches arise, the whole process of scaling from a toddler to an adult would take no more than five years. This does not mean, however, that the adult-level AI would have all the exact capabilities of a human adult. It might well lack the social and emotional intelligence of a human adult. But it would likely be more intelligent than a human adult in terms of various learning, reasoning and memory capabilities. It would be its own kind of human-level mind.

Question: Will this robot take advantage of cloud computing?

Yes, we are already using the Amazon compute cloud for some of our OpenCog work. That’s the most practical way to get a lot of compute power, at present. Of course, we could use a building full of IBM mainframes, or if we had a dedicated hardware R&D group, we could optimize the hardware for the task. But the cloud is much cheaper and more practical. We are indirectly constrained by hardware limitations, and we could probably at least double our rate of progress with optimized hardware. But the main limiter to our progress at this point is software, which is why we are concentrating our efforts on that.

Question: Your book, Building Better Minds, will be published soon. Does this book provide a blueprint on how to build an intelligent machine?

The book I’ve been calling Building Better Minds is a 1,000-page book on how to build an AGI, according to the OpenCog approach. It is largely complete but is going through revisions. Actually I’m debating changing the title, but anyway the book is basically complete and is sitting on my hard drive. I’ve chosen a publisher and am discussing the details of the publication contract with them at the moment. The OpenCog project involves large amounts of both engineering and basic science. Building Better Minds is a scientific explanation of how OpenCog can be used to effectively create advanced general intelligence.

Question: Have you pondered the concept of having an X Prize for AGI?

I have briefly discussed the idea with Peter Diamandis, who heads the X Prize Foundation, and with a number of others. I am not yet convinced it’s a great idea. The problem I worry about is that no matter what task you choose to center the prize around, it would probably be too easy to hack an AI or robot to perform the specific task required to win the prize. I suspect the X Prize approach is better for more mature technologies, such as making a mobile medical device. Once an AGI toddler is achieved, a number of X Prizes would then make sense. For instance, an X Prize to pass a child IQ test would be appropriate shortly after an AGI toddler is demonstrated. Or a home service robot X Prize, a robot biologist X Prize, a robot conversationalist X Prize somewhat similar to the Loebner Prize, etc.

Question: You mention the Loebner Prize – that’s about computer conversation, right? About passing the Turing test. So why not have an x-prize with a specific goal of having a computer pass the Turing test?

Passing the Turing test at this point is simply too hard for any current AI system. The Loebner Prize has offered $100,000 since 1990 for a computer that can pass the Turing test. Trying to work directly toward a computer that can hold a human adult level conversation is not, in my view, a productive research direction at present. It’s better to work toward that indirectly – for instance by trying to create a robot toddler that has a more limited scope of conversation, but actually understands what it is talking about (because it’s talking about the things it’s seeing and doing). Once you have a system that can talk about very simple things from its everyday life with reasonable understanding, then you’re ready to start working on a system that can have full adult-level understanding. But trying to push straight for human adult level conversation, Turing test style, in a naïve way, just leads you into making chatbots or expert systems, which in my view are pretty much dead ends if your goal is AGI.

Question: Assuming that you achieve your AGI toddler goals, what then?

The AGI toddler will be a whole OpenCog system, with something workable in place for each of the many components of the OpenCog design. We will at that point need to make advances in the functionality of each OpenCog component, as well as upgrades to the robotic body in order to accommodate a wider variety of environments. Getting to a robot toddler is a research project, since the robotic toddler is not going to be practically useful. But once the toddler gets up and running, developing it further should be relatively straightforward (though still involving a host of interconnected research and development problems, to be sure).

Question: In 2009, you argued that with sufficient funding, there was an 80% chance of developing true artificial intelligence within a decade. Are you still making this claim?

Yes, I am more confident than ever that true artificial intelligence could be developed in fairly short order with sufficient resources. Unless OpenCog and all other comparable projects fail to achieve and maintain enough funding to complete their work, I am confident that we will have human-level AI by 2025. Maybe sooner.

The main thing slowing AGI progress at this point is not any deep fundamental lack of understanding on the part of the research community. There are certainly plenty of technical problems to be solved to create AGI, but these can be solved via steady progress in science and engineering. There are many paths to AGI that could work – I think OpenCog is a very promising one, but it’s not the only possibility. To name just one other project I find promising, Demis Hassabis’s company DeepMind, in the UK, seems to be on an interesting trajectory – though it is more brain-emulation-oriented than OpenCog, and more so than I personally would like to be, given our current rather poor knowledge of the neural foundations of general intelligence.

The main thing slowing down progress toward AGI right now is that society is treating AGI as a marginal pursuit rather than as an important, critical research area like neuroscience, cancer research, chip design or search engine improvement. AGI is a hard science problem, but it’s also a large-scale engineering problem, and these take serious resources. And this gets back to my thinking about an AGI Sputnik moment. Once there is some sufficiently thrilling, scientifically and emotionally compelling demonstration of AGI, then the world will start to take AGI seriously, and we’ll start to see way faster progress. I think it’s going to happen soon.

The work David Hanson and I are doing with the OpenCog team has clear potential to lead to an AGI Sputnik, in the form of a robot toddler. There may also be other pathways to an AGI Sputnik, but that’s the one that’s clearest to me, since it’s the one I’m working on.
