Ben Goertzel interviews Josh Hall and Pei Wang for H+ magazine

Artificial General Intelligence researcher Ben Goertzel has interviewed nanotech/AI researcher Josh Hall and AGI researcher Pei Wang for Humanity+ magazine. In the Wang interview, Dr. Wang claims that sentient machines could be only a decade away. In the Hall interview, Hall argues that advanced machine intelligence and desktop nanofactories will emerge sometime in the 2020s. Hall has published both Nanofuture and Beyond AI, and is currently writing a book on machine ethics.

From the Pei Wang Interview
Ben: One approach that has been proposed for surmounting the issues facing AGI research, is to draw on our (current or future) knowledge of the brain. I’m curious to probe into your views on this a bit.

Regarding the relationship between neuroscience and AGI, a number of possibilities exist. For instance, one could propose:

A) to initially approach AGI via making detailed brain simulations, and then study these simulated human brains to learn the principles of general intelligence, and create less humanlike AGIs after that based on these principles; or

B) to thoroughly understand the principles of human intelligence via studying the brain, and then use these principles to craft AGI systems with a general but not necessarily detailed similarity to the brain; or

C) to create AGI systems with only partial resemblance to the human brain/mind, based on integrating our current partial knowledge from neuroscience with knowledge from other areas like psychology, computer science and philosophy; or

D) to create AGI systems based on other disciplines without paying significant mind to neuroscience data.

I wonder, which of these four approaches do you find the most promising, and why?

Pei: My approach is roughly between the above D and C. Though I have drawn inspiration from neuroscience on many topics, I do not think building a detailed model of the neural system is the best way to study intelligence.

Pei: My project NARS has been going on according to my plan, though the progress is slower than I hoped, mainly due to limited resources.

What I’m working on right now is: real-time temporal inference, emotion and feeling, self-monitoring and self-control.

If it continues at the current pace, the project, as currently planned, can be finished within 10 years, though whether the result counts as “human-level AGI” depends on what that phrase means; to me, it will.

Ben: Heh…. Your tone of certainty surprises me a little. Do you really feel like you know for sure that it will have human-level general intelligence, rather than needing to determine this via experiment? Is this certainty because you are certain your theory of general intelligence is correct and sufficient for creating AGI, so that any AGI system created according to this theory will surely have human-level AGI?

Pei: According to my theory, there is no absolute certainty on anything, including my own theory!

What I mean is: according to my definition of “intelligence”, I currently see no major remaining conceptual problem. Of course we still need experiments to resolve the relatively minor (though still quite complicated) remaining issues.
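Some background may help here. Wang’s NARS is built on his Non-Axiomatic Logic (NAL), in which every statement carries a two-part truth value: a frequency (the proportion of positive evidence so far) and a confidence (how stable that frequency is under future evidence). Below is a minimal Python sketch of two NAL truth functions, deduction and revision, following the definitions in Wang’s NAL publications; the Truth class, function names, and example numbers are illustrative, not actual NARS code.

```python
# A minimal sketch of NAL-style truth values, assuming the deduction and
# revision truth functions from Wang's NAL publications. Names are
# illustrative; this is not actual NARS code.
from dataclasses import dataclass

K = 1.0  # evidential horizon constant; NAL typically uses a small value like 1


@dataclass
class Truth:
    f: float  # frequency: proportion of positive evidence so far
    c: float  # confidence: stability of f under new evidence (always < 1)


def deduction(t1: Truth, t2: Truth) -> Truth:
    """From "S -> M <f1, c1>" and "M -> P <f2, c2>", derive "S -> P"."""
    f = t1.f * t2.f
    return Truth(f, f * t1.c * t2.c)


def revision(t1: Truth, t2: Truth) -> Truth:
    """Pool two independent judgments of one statement by adding evidence."""
    w1 = K * t1.c / (1 - t1.c)      # total evidence behind each judgment
    w2 = K * t2.c / (1 - t2.c)
    w_plus = t1.f * w1 + t2.f * w2  # combined positive evidence
    w = w1 + w2                     # combined total evidence
    return Truth(w_plus / w, w / (w + K))


# "Ravens are birds" <0.9, 0.9> chained with "birds fly" <0.8, 0.9>
print(deduction(Truth(0.9, 0.9), Truth(0.8, 0.9)))  # roughly <0.72, 0.58>
```

Note that confidence never reaches 1 in this scheme, so every conclusion remains open to revision by new evidence, which is exactly the attitude Wang applies to his own theory above.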

From the J Storrs Hall Interview

BG: OK, not much in terms of stuff you can buy – but what progress do you think has been made, in the last decade or so, toward the construction of “real nanotechnology” like molecular assemblers and utility fog? What recent technology developments seem to have moved us closer to this capability?

JSH: Well, in the research labs you have some really exciting work going on in DNA origami, manipulation / patterning of graphene, and single-atom deposition and manipulation. In the top-down direction – “Feynman’s path” – you have actuators with sub-angstrom resolution and some pretty amazing results with additive e-beam sintering.

BG: What about utility fog? Are we any closer now to being able to create utility fog, than we were 10 years ago? What recent technology developments seem to have moved us closer to this capability?

JSH: There have actually been some research projects in what’s often called “swarm robotics” at places like CMU, although one of the key challenges is to design little robots simple and cheap enough to build in piles without breaking the bank. I think we’re close to being able to build golf-ball-sized Foglets – meaning fully functional ones – if anyone wants to double the national debt. You’d have to call it “Utility Hail”, I suppose.

BG: OK, I see. So taking a Feynman-path perspective, you’d say that right now we’re close to having the capability to create utility hail – i.e. swarms of golf-ball sized flying robots that interact in a coordinated way. Nobody has built it yet, but that’s more a matter of cost and priorities than raw technological capability. And then it’s a matter of incremental engineering improvements to make the hail-lets smaller and smaller until they become true foglets.

Whereas the Drexler-path approach to utility fog would be more to build upwards from molecular-scale biological interactions, somehow making more easily programmable molecules that would serve as foglets – but along that path, while there have been a lot of interesting developments, there’s been less that is directly evocative of utility fog. So far.

JSH: Right. But things are developing fast and nobody can foresee the precise direction.
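As background to the “coordinated way” Goertzel mentions: swarm-robotics demos typically rely on decentralized control rules such as Reynolds’ classic flocking model, where each agent steers using only the positions and velocities of nearby neighbors. Here is a minimal Python sketch of that idea; the parameters, weights, and 2-D setting are illustrative assumptions, not a description of any particular CMU project or of Hall’s Foglet design.

```python
# A toy sketch of decentralized swarm coordination (Reynolds-style flocking):
# cohesion (move toward neighbors' center), alignment (match their velocity),
# and separation (avoid crowding). All constants are illustrative.
import random

N, NEIGHBOR_R, SEP_R, DT = 30, 5.0, 1.0, 0.1


class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)


def step(agents):
    for a in agents:
        near = [b for b in agents if b is not a and
                (b.x - a.x) ** 2 + (b.y - a.y) ** 2 < NEIGHBOR_R ** 2]
        if not near:
            continue
        n = len(near)
        coh_x = sum(b.x for b in near) / n - a.x    # toward local center of mass
        coh_y = sum(b.y for b in near) / n - a.y
        ali_x = sum(b.vx for b in near) / n - a.vx  # match neighbors' heading
        ali_y = sum(b.vy for b in near) / n - a.vy
        close = [b for b in near
                 if (b.x - a.x) ** 2 + (b.y - a.y) ** 2 < SEP_R ** 2]
        sep_x = sum(a.x - b.x for b in close)       # push away from crowding
        sep_y = sum(a.y - b.y for b in close)
        a.vx += DT * (0.5 * coh_x + 0.5 * ali_x + 1.0 * sep_x)
        a.vy += DT * (0.5 * coh_y + 0.5 * ali_y + 1.0 * sep_y)
    for a in agents:  # update positions only after all steering is computed
        a.x += DT * a.vx
        a.y += DT * a.vy


swarm = [Agent() for _ in range(N)]
for _ in range(100):
    step(swarm)
```

The appeal of rules like these for utility fog is that each unit’s computation depends only on its local neighborhood, so in principle the same control scheme scales from a few dozen golf-ball “hail-lets” to vastly larger numbers of true Foglets.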

BG: In 2001 you stated that you thought the first molecular assemblers would be built between 2010 and 2020. Do you still hold to that prediction?

JSH: I’d say closer to 2020, but I wouldn’t be surprised if by then there were something that could arguably be called an assembler (and is sure to be so called in the press!). On the other hand, I wouldn’t be too surprised if it took another five or ten years beyond that, pushing it closer to 2030. We lost several years in the development of molecular nanotech due to political shenanigans in the early 20-aughts and are playing catch-up to any estimates from that era.

BG: In that same 2001 interview you also stated “I expect AI somewhere in the neighborhood of 2010,” with the term AI referring to “truly cognizant, sentient machines.” It’s 2011 and it seems we’re not there yet. What’s your current estimate, and why do you think your prior prediction didn’t eventuate?

JSH: I made that particular prediction in the context of the Turing Test and expectations for AI from the 50s and 70s. Did you notice that one of the Loebner Prize chatbots actually fooled the judge into thinking it was the human in the 2010 contest? We’re really getting close to programs that, while nowhere near human-level general intelligence, are closing in on the level that Turing would have defended as “this machine can be said to think”. IMHO. Besides chatbots, we have self-driving cars, humanoid walking robots, usable if not really good machine translation, some quite amazing machine learning and data mining technology, and literally thousands of narrow-AI applications. Pretty much anyone from the 50s would have said, yes, you have artificial intelligence now. In my book Beyond AI I argue that there will be at least a decade while AIs climb through the range of human intelligence. My current best guess is that that decade will be the 20s – we’ll have competent robot chauffeurs and janitors before 2020, but no robot Einsteins or Shakespeares until after 2030.

Older presentation by J Storrs Hall
J. Storrs Hall, “Roadmaps to Nanotech and AGI,” Foresight 2010 Conference (video from the Foresight Institute on Vimeo).