Ben Goertzel 2020 Interview on Artificial General Intelligence

Sander Olson has provided a new, original 2020 interview with Artificial General Intelligence expert and entrepreneur Ben Goertzel. Ben is the founder of SingularityNET.

Question: Hanson Robotics’ Sophia robot has garnered considerable attention. Are there any other robots being developed by Hanson Robotics using OpenCog software?

Yeah — my colleagues at SingularityNET / Singularity Studio / OpenCog and I are currently collaborating on a research project with Hanson Robotics involving using transformer neural nets and neural-symbolic OpenCog AI to help with Sophia’s dialogue system.

We are also at the early stages of working out a potential collaboration with Hanson Robotics and some partners from the medical space, regarding a wheeled, medical-oriented humanoid social robot aimed at eldercare and other patient-facing applications in hospitals, nursing homes and so forth. We are in discussion with hospitals and elder-care facilities in Hong Kong, Taiwan and China as well as the US. The goal is to provide social/emotional connection to patients, as well as medical question answering and helping hospital staff to evaluate patient state, look out for danger situations, and so forth. In the context of neurodegenerative disease specifically, we are looking at using these robots to test cognitive abilities, as well as various techniques to reduce cognitive decline. If all goes as hoped, we would start gradually with some trial robots, probably in medical facilities in Asia, and then move on to scalable production from there. Once the plans get more definite we will certainly make some public announcements on this. But I think the need is clear, both in general in terms of understaffing of medical facilities, and in particular right now in terms of the special burdens COVID-19 is placing on medical facilities (and the particular need to minimize human contact in eldercare facilities).

Question: In 2011 OpenCog came out with a roadmap that predicted an “OpenCog-based artificial scientist, operating a small molecular biology laboratory on its own, designing its own experiments and operating the equipment and analyzing the results and describing them in English”, as well as “full-on human-AGI” by 2021. What happened?

To put it very simply, while my OpenCog colleagues and I have done a lot of interesting things in the period since 2011, we have not yet obtained a sizable amount of funding for our AGI R&D work. Of course, some other parties with different approaches to AGI have been very well capitalized — but while their approaches have been more suitable for meeting the psychological needs of investors and the business model needs of large corporations, they are ultimately not workable in terms of achieving AGI. Ultimately the idea that you can take methods that have worked for particular narrow AI problems and scale them up to achieve AGI by adding more processor power and more data — just isn’t gonna work. You do need a lot of processing power, and you do need a lot of data — but you also need a cognitive architecture and a collection of interlocking learning and reasoning algorithms that are well thought out, based on a theoretical and practical understanding of general intelligence. My colleagues and I have this, but we haven’t had OpenAI- or DeepMind-scale resources. Those projects with massive resourcing, while staffed by brilliant people, have been pursuing inadequately thought-out designs and approaches.

So in short, the industry has spent huge sums on flawed approaches that will never on their own achieve AGI. If OpenCog had secured budgets similar to DeepMind and OpenAI we would probably at least be close to fulfilling our 2011 predictions — or we might even have exceeded them. Corporations like Google and Microsoft and Facebook and so forth are trying to develop AGI by tweaking and modifying the narrow-AI techniques that are geared towards making them money. These techniques will encounter diminishing returns in the next decade, and that should open opportunities for alternate approaches — which will finally lead to more cognitively sound approaches like OpenCog getting the resources they need.

Within the SingularityNET Foundation’s AI initiative we have managed to make some decent progress toward OpenCog-based AGI — but SingularityNET is itself a startup project with a different focus, so there’s only so much it can do. At SingularityNET we’ve created a decentralized, blockchain-based platform for AI — which is going to be incredibly helpful for AGI, in terms of enabling it to operate in a fully distributed manner without need for any central controller. The SingularityNET platform can help an OpenCog AGI — or other sorts of AGIs — operate as open-ended intelligences rather than centrally-controlled systems, which will ultimately increase their general intelligence. But to really build an AGI based on the OpenCog design requires more effort focused in that direction than SingularityNET has been able to muster. I mean, building a global decentralized AI platform is a lot of work unto itself, overlapping with but distinct from the work of implementing and teaching specific AGI systems…

Question: The SingularityNET Foundation has recently updated its roadmap, specifically with regard to OpenCog 2.0. What details can you divulge about OpenCog 2.0?

We are currently in an intensive phase of requirements analysis and technical design for the next version of OpenCog, which we’re calling OpenCog Hyperon. We have a crack team of Russian researchers and developers starting to focus their time on this, led by Alexey Potapov and Vitaly Bogdanov in St. Petersburg, along with a distributed team including folks in SingularityNET’s Hong Kong office and our Ethiopian firm iCog Labs.

We are also hoping to enlist a global community of volunteer programmers in the effort. At my current guesstimate, it will probably take 1-2 years to get it completed — IF we can garner either significant volunteer programmer support or at least modest funding oriented specifically to the effort. Of course if SingularityNET’s decentralized marketplace is economically successful enough it could then fund OpenCog Hyperon development on its own, but at the moment SingularityNET’s business aspect is taking a while to mature, so we are looking at other ways to augment SingularityNET’s contributions and accelerate OpenCog Hyperon development.

Question: What are the main components of OpenCog 2.0?

Basically, OpenCog’s AI design allows a bunch of different AI algorithms to cooperate in dynamically updating a weighted, labeled hypergraph knowledge store called the Atomspace. There is a lot of mathematical and cognitive theory, worked out over the last decades, regarding how to make multiple algorithms — like neural nets, reasoning engines, evolutionary algorithms — work together effectively in this context to achieve goals, recognize patterns, and imagine and create new ideas.
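As a rough illustration of the idea (not the actual OpenCog API — all names below are invented for this sketch), a weighted, labeled hypergraph store can be captured in a few lines of Python: nodes and links are both "atoms", links can point at any other atoms, and each atom carries truth-value weights.

```python
# Minimal sketch of a weighted, labeled hypergraph store, loosely inspired
# by the Atomspace idea described above. Illustrative only -- this is NOT
# the real OpenCog API.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=(), strength=1.0, confidence=0.0):
        self.atom_type = atom_type       # label, e.g. "ConceptNode" or "InheritanceLink"
        self.name = name                 # only nodes carry names
        self.outgoing = tuple(outgoing)  # links reference other atoms (a hyperedge)
        self.strength = strength         # truth-value weight
        self.confidence = confidence

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def incoming(self, target):
        # all links that reference `target` -- the basis for pattern matching
        return [a for a in self.atoms if target in a.outgoing]

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
link = space.add(Atom("InheritanceLink", outgoing=(cat, animal),
                      strength=0.95, confidence=0.9))

print(len(space.incoming(cat)))  # 1: the inheritance link references "cat"
```

The key structural point is that links are themselves atoms, so links can point at other links — which is what makes the store a hypergraph rather than an ordinary graph.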

So regarding the upgrade from legacy OpenCog to OpenCog Hyperon, there are two main components — a new large scale distributed version of the Atomspace; and the “Atomese 2” programming language. Right now a number of my colleagues are focusing on the distributed Atomspace, and along with Alexey Potapov and other colleagues I am focusing more personal energy on the Atomese 2 language. The goal with Atomese 2 is to make it more efficient and simple for developers to experiment with versions of AI algorithms within the OpenCog design, and also to better support meta-programming — where the AI algorithms, implemented in Atomese 2, write new Atomese 2 code thus improving the system’s intelligence via self-modification.
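The meta-programming goal — code that writes new code to run — can be illustrated with a toy stand-in in plain Python; Atomese 2 itself is still being designed, so this is purely an analogy, and the names here are invented.

```python
# Toy illustration of the meta-programming idea behind Atomese 2: programs
# represented as data that the system itself can generate and then execute.
# Plain Python stands in for Atomese 2 here.
import ast

def make_rule(name, threshold):
    # the system writes a new program as a data structure (source text here)
    src = f"def {name}(x):\n    return x > {threshold}\n"
    ast.parse(src)            # sanity-check the generated program parses
    namespace = {}
    exec(src, namespace)      # turn the generated code into a callable
    return namespace[name]

# a learning process could search over thresholds; here we just pick one
is_large = make_rule("is_large", 10)
print(is_large(42), is_large(3))  # True False
```

In a real self-modifying system the generated programs would be represented in the Atomspace itself, so the same reasoning algorithms that operate on knowledge could also operate on — and improve — the system's own code.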

Question: Will OpenCog 2.0 leverage GPUs, or be CPU only?

Unlike the neural net approaches that are currently so popular, OpenCog does not run particularly well on GPUs. But that could change in the future as the architecture evolves. There are tools like Gunrock that allow graph algorithms to exploit GPUs reasonably well — but making OpenCog use these would require a lot of work that hasn’t been done yet, and even once that work is done, GPUs wouldn’t be the silver bullet for OpenCog-type designs that they have been for neural nets. The crux of the issue is that OpenCog is founded more on large scale graph (and hypergraph and metagraph) processing than on vector arithmetic, whereas GPUs are golden mostly for accelerating vector and matrix math. You can of course project graph operations into vector and matrix math, but you lose some efficiency (as well as elegance) in the translation.
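To make the graph-versus-matrix point concrete, here is a toy example of projecting a graph operation into matrix math: a one-hop reachability step on a graph becomes an adjacency-matrix/vector product, which is exactly the kind of dense arithmetic GPUs accelerate well (plain Python, for illustration only).

```python
# Toy illustration of translating a graph operation into matrix math.
# One step of graph traversal = one matrix-vector product.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Adjacency matrix for a 4-node graph with edges 0->1, 0->2, 2->3
A = [
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

# Start at node 0; multiplying by the transpose finds one-hop successors
start = [1, 0, 0, 0]
AT = [list(col) for col in zip(*A)]
one_hop = matvec(AT, start)
print(one_hop)  # [0, 1, 1, 0] -> nodes 1 and 2 are one hop from node 0
```

Note the inefficiency the answer alludes to: for a sparse graph, almost every multiply in the product is against a zero entry — wasted work that a direct graph traversal avoids.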

OpenCog can, however, make extensive use of massive cloud computing resources and server farms. And OpenCog Hyperon is going to do this vastly better than the current OpenCog version.

Question: Tell us about True AGI

OK, so the SingularityNET Foundation — the organization I’m currently running — is a non-profit platform designed as a marketplace to buy, sell, and develop AI services, and to allow different AIs to cooperate and collaborate to solve problems — fully decentralized and running on the blockchain. SingularityNET has released a whole bunch of open-source code aimed at creating platforms for decentralized AI… It has also developed a bunch of AI code that uses this platform to do stuff — including biomedical AI, audio processing, natural language processing, fintech and on and on.

Going forward we are looking to focus SingularityNET Foundation more strongly on the underlying decentralized protocols — as that is a huge area in itself — and spin off some of the upper-layer projects the Foundation is working on into some new entities that will be minority-owned by the Foundation but separately managed and able to grow in their own directions.

Along these lines we currently have three for-profit corporations that are spin-offs of SingularityNET — True AGI, Singularity Studio, and Rejuve.

Singularity Studio is building enterprise AI software using unique neural-symbolic methods for data analytics and process automation, back-ended on SingularityNET’s platform. Singularity Studio’s tools would also provide much of the AI back-end for the nursing assistant robotics project I mentioned a few minutes ago. Rejuve is focused on bioinformatics and crowdsourcing data for longevity analysis.

True AGI is concentrating on the core of the AGI problem, using SingularityNET as a platform and also other open-source platform technologies as appropriate. TrueAGI will help with OpenCog Hyperon, and also develop proprietary code aimed at making it easy to use OpenCog Hyperon for large-scale commercial applications.

So for instance the first version of the nursing assistant robots would use the current OpenCog version along with deep neural nets and other appropriate technologies — but once TrueAGI has finished its initial development work, these robots could be upgraded massively in intelligence via integrating TrueAGI’s AI engine with Singularity Studio’s product code. All of which of course would live in the cloud on the back end, not visible to the end-users of the robots except via their increased intelligence and capabilities.

All of these spinoffs, via using SingularityNET platform on the back end, would increase utilization of the platform and thus accelerate SingularityNET’s growth — lots of virtuous cycles going on here. What we need is beneficial AGI running on an open and decentralized platform and providing value to all people and every domain of industry.

If we can achieve that we can get the beneficial Singularity that so many of us have been foreseeing for most of our lives.

Question: How is OpenCog being used with Rejuve?

OpenCog is now being used to analyze genomics research and clinical trial data. We have been getting some pretty interesting results on cancer clinical trial data lately, and we plan on applying the same methods to COVID-19 trials regarding anti-viral cocktails once sufficient data is available. We’ve also been looking at data regarding the genomics of supercentenarians and discovering some quite intriguing things.

But right now we are constrained by the data made available by the corporate and government labs that are generating it. To really see the power of our advanced AI in the medical space, we need to feed it more and better data. Rejuve is a membership organization in which the members band together to feed their personal medical data to AI tools that then discover new things based on this data. Any money made from these discoveries feeds partly back to the members. SingularityNET and OpenCog are tools used for the AI data analytics process.

SingularityNET platform and OpenCog AI apply to every vertical market and every aspect of human pursuit, and of course I can’t personally put my hand into all of them — but human life extension is an application of particular personal and professional interest to me. Partly because I enjoy my life and have a strong interest in not dying.

And partly because it’s an incredibly fascinating intellectual puzzle. And partly just out of compassion for everyone who is suffering and dying around the world. Wouldn’t it be great if we could just eliminate death and disease and torment and suffering from human life? It sounds outrageous but in the historical scope we are now very close to achieving this. We just need to take the last few steps — and we can take them much faster if we apply cross-paradigm AI and decentralized data and value management to the problem. Proto-AGI technology is ideally suited to do learning and reasoning across multiple biological datasets pertaining to multiple levels of the organism — it’s a perfect match with holistic systems biology, which is what we need to take the last steps toward cracking aging. With a bit of luck Rejuve will be able to leverage its members’ data and passion for life to make some key contributions here.

We are also doing some immediate-term stuff with Rejuve using AI to process signals from medical smartphone peripherals, to identify infections like COVID at the presymptomatic stage. This is both something useful right now — it can save a lot of lives — and it’s part of the same basic tech Rejuve will use to analyze its members’ body states toward discovery of new longevity therapies.

Question: Is it realistic to believe that these startup corporations such as True AGI and Rejuve can generate enough profits to cover the massive development costs for AGI?

I don’t believe that the development costs for AGI necessarily need to be massive. I wrote a blog post a couple of years ago in which I speculated that AGI might be developed for as little as $25 million — in truly dedicated, focused funding oriented just toward AGI and not other related projects. I still think this seems about right. Obviously this is a lot of money for the average person but it’s not huge by the standards of the tech industry.

It might even be achievable much more cheaply than that. Perhaps if I had exclusively focused on coding AGI myself for several decades — instead of dividing my time between AGI and a lot of other related things like bio-AI, decentralized AI platforms, etc. — we would have AGI now. So far for my whole career I have been working on AGI around the edges of other projects, because it’s these other related projects that have gotten funded. Some of these other projects have been successful, some less so, but unfortunately none has yielded a big enough windfall to enable me to just fund a reasonable-scale AGI R&D project on my own.

I’m not complaining — I’ve gotten to work on and lead a bunch of really exciting projects, and I’ve gotten to spend a fair fraction of my own time on AGI theory and prototyping. Compared to many humans on the planet I’ve had an amazingly fortunate situation. But in spite of decades of related work and the current increase in excitement about AGI, I haven’t yet managed to pull together a really well resourced AGI project like, say, the OpenAI or DeepMind guys have.

To answer your question directly though — I think the role of AI in the world economy is utterly different than it was 20 or even 5 years ago. Right now, in 2020 and going forward, is it possible to fund revolutionary AGI development by providing commercial value with related advanced AI and proto-AGI tools? Absolutely. The tech has matured but the business world has also matured, so it’s just way easier to sell and integrate AI solutions. Basically the time for AGI has now come — just as 5-10 years ago the time for deep neural nets had finally come.

Hardware resources required for human-level AGI are hard to estimate accurately until more of the AGI software work is complete, but I think with the right organizational models one can obtain hardware resources via partnerships and collaborations without giving up control of the AGI.

Question: You’ve mentioned the semiconductor startup Graphcore and their AI chips. Specifically, you’ve argued that mapping the computational functions of OpenCog’s algorithms into hardware could yield significant performance improvements. How much would it cost to create such specialized chips, and what performance increases would result from such chips?

Yeah — at a rough guess, the OpenCog architecture could probably be improved by 10x-100x by incorporating bespoke Application Specific Integrated Circuits (ASICs) oriented towards the types of graph processing it does most heavily. Graphcore is awesome but is currently oriented more toward floating-point graph operations than toward the discrete logical graph operations which OpenCog uses more heavily. I am fairly confident that within 5 years or so, an ASIC geared more strongly towards such discrete graph operations will become available. The cost of making such a chip is considerably less now than it would have been a decade ago, but is still beyond the current resources of SingularityNET or OpenCog. At this point my plan is to develop the software first, and have the optimized hardware come later.

There are so many things like this that clearly would smooth and accelerate progress toward beneficial AGI — but that are getting worked on far too slowly by the tech industry. Because global tech development is currently driven by short term profit for large corporations, and secondarily by national security and hegemony considerations on the part of governments — not by the quest to maximize general intelligence nor beneficialness of AI technology, nor the quest to maximize human good.

Question: OpenAi’s GPT-3 has recently been unveiled and garnered a lot of excitement. Do you think that GPT-3 has any potential to fundamentally advance the field of AI?

The major advance in that field was BERT (Bidirectional Encoder Representations from Transformers), a natural language engine published by Google in 2018. GPT-3 is a variation on that developed by OpenAI. GPT-3 allows massive neural nets to do a wide variety of tasks, mostly poorly. But GPT-3 still lacks any fundamental semantic model, so it tends to spew out garbage. I wrote an article on GPT-3, describing its profound limitations. But even though GPT-3 does not appear to have really broad commercial applications, and is not on a path towards AGI, it is still an impressive invention. I would have been proud to have developed it.

Question: AutoML has also garnered considerable attention recently for its ability to automatically generate code. Does AutoML constitute a major AI advance?

AutoML — the use of ML to configure, adapt, and learn ML methods — does represent a major AI advance. The underlying concept isn’t new at all — we used to call it meta-learning — but with the vastly greater computational resources now available this technology is starting to take off.

Most AutoML today is just using ML to tune the parameters of other ML algorithms — which is definitely a big improvement over having humans do all the parameter tuning, but doesn’t go far enough where AGI is concerned.

But more advanced forms of self-programming are maturing fast too. E.g. evolutionary algorithms have been capable of writing fairly sophisticated algorithms, e.g. sorting programs, for quite some time. Recently researchers have extended this to ML tools that configure neural nets for you automatically, automatically synthesize ML algorithms, etc.
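The basic AutoML loop described above — an outer search procedure tuning the hyperparameters of an inner learning algorithm — can be sketched with stdlib Python and random search. The model and the numbers here are invented purely for illustration.

```python
# Bare-bones AutoML sketch: an outer random search tunes the learning rate
# of an inner gradient-descent learner. Purely illustrative.
import random

def train(lr, steps=300):
    # inner learner: fit w in y = w*x to data generated with true w = 2
    data = [(x, 2.0 * x) for x in range(1, 6)]
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)  # final loss

random.seed(0)
best_lr, best_loss = None, float("inf")
for _ in range(20):                        # outer loop: the "AutoML" part
    lr = 10 ** random.uniform(-3, -1.5)    # sample a candidate learning rate
    loss = train(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(best_lr, best_loss)
```

Real AutoML systems replace the random sampler with something smarter (Bayesian optimization, evolutionary search, learned controllers), but the nesting — a learner whose training data is the performance of other learners — is the same.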

Ironically, though — translating a vague set of goals to a precise set of requirements is more difficult for an AI than translating a precise set of requirements into usable code.

Making vague requirements precise may be an AGI-hard problem — but writing out requirements very precisely is slow and difficult for humans. So for this reason, it seems that automatic software program generation may have relatively limited impact until we get closer to AGI with its ability to make the vague precise. Coming up with an algorithm based on a precise spec is not most of what actual software developers do…

Anyway, although it doesn’t appear that standard current AutoML tech will fundamentally move the bar regarding AGI, it is plausible it will prove to be an enabling technology towards AGI — and it’s clear that more advanced forms of meta-learning and self-programming are going to be critical for AGI.

Question: Many people envision AGI evolving in a manner similar to biological evolution – from insects to rats to dogs to chimps to humans. Why don’t we have dog level AI yet?

A dog without a body would not be at all interesting from an AI perspective, since so much of the canine’s brain is geared towards responding to sensory inputs and driving motor actions. If one could create a robot with a dog’s body and connect an AI to its sensors and actuators, that could advance AI and AGI greatly — but it would also be quite a tremendous pursuit in terms of hardware engineering, requiring multiple coordinated breakthroughs in robotics…

It would undoubtedly be beneficial to build a robot baby but we don’t yet have the technology to make humanlike skin, humanlike whole-body holistic movement and response, and so forth. A sense of touch, and kinesthesia, are vital to human perception and development — among so many factors. A baby is also largely about its body, and talking about a virtual AI baby without a somewhat baby-like body verges on nonsensical.

So we are taking a different path, although if we are not successful with our current approaches we may eventually adopt a more embodiment-focused paradigm. I think embodiment is really valuable for AGI, but we’re now looking at developing the core cognitive algorithms in a non-embodiment-centered way and then embedding the resulting system in various sorts of bodies, so it can learn what each has to offer.

Question: You wrote five years ago of stages of AGI development. In the initial “V2” stage, you would show a compelling demo of AGI’s potential. In the subsequent “Sputnik” stage, you planned on demonstrating a robot exhibiting clear sentience. In the “Apollo” stage, you would have true human-level AI. In 2015, you expressed confidence that the V2 stage might be reached by 2020. Why are you now predicting that the V2 stage could be reached by 2025?

Yeah — V2 meant the analogue of the V2 rocket, the first system that made clear space flight was going to be possible using basic rocketry. So the AGI V2 would be the first proto-AGI system that makes abundantly clear AGI is on the way.

I spent a bunch of 2016 talking to high net worth individuals seeking philanthropic donations for developing beneficial AGI based on the OpenCog design. The aim was to get enough donation money to build the V2 of AGI, in the form of a humanoid robot with a toddler-like general intelligence.

The short of it is, I didn’t succeed, though I had a lot of interesting conversations. So after that failed fundraising push, I ended up putting more time into the Sophia robot, and then into founding SingularityNET and developing a decentralized AI platform — which I think can be extremely valuable but doesn’t in itself solve the core scalable-AGI-algorithm problems that I’m aiming to solve w/ OpenCog Hyperon.

Anyway the projections you’re citing from 2015 were basically predicated on success of my 2016 push for donations to fuel that stage of OpenCog development.

Being older now and hopefully slightly wiser, I have mostly given up on the philanthropic route — it would seem human altruism and generosity, while very real phenomena, are simply not rational nor visionary enough to enable funding a beneficial-AGI effort via donations.

So now the True AGI spinoff is focused on developing basically that same V2 demo I talked to you about in 2015, but in the context of a for-profit company. The core AGI code — OpenCog Hyperon — will be open source but there will also be some highly valuable proprietary tooling enabling use of Hyperon to control social robots, virtual smartphone assistants, and other intelligent devices. So we are appealing now to a mix of profit motive and deeper motives, in pulling TrueAGI together.

OpenAI also developed, as an organization, with a sort of mix of altruism, open source and commercial goals — but I don’t think they managed to synergize these in a really effective way. My colleagues and I have put a lot more thought into the organizational and community architecture as well as into the AGI architecture, and I think we are now at a point where we can advance rapidly on the commercial applications, open source development AND core R&D aspects, with all these aspects reinforcing each other cooperatively. And once the AGI has advanced far enough, you’re going to see the decentralized blockchain-based infrastructure we’ve built in SingularityNET start playing an extremely critical role here…

And we have made a lot of related progress in the last 5 years, even though we haven’t done what I was hoping to do had that 2016 donation push succeeded.

I have a far stronger AGI team working with me now than in 2015 — some fantastic old-timers who have been working with me on AGI prototypes and ideas for 1-2 decades, plus an extraordinary team led by Alexey Potapov in St. Petersburg… We have more of the reasoning and learning algorithms needed for AGI worked out and prototyped than we did in 2015. We have better software tools at our disposal now — some, like SingularityNET platform, that we’ve built ourselves, and a whole bunch from the broader community. The business ecosystem is more mature, making it more viable to create a business like TrueAGI.

Once we have a V2 demo — meaning a working system that demonstrates clearly we are on the path to AGI — then the subsequent stages should come quickly after. There is more reason than ever to be confident of AGI development in the next decade – AGI will be developed much faster than most people expect. Honestly my colleagues and I have had the needed ideas for beneficial AGI for quite some time— but now that the world is starting to catch up to some aspects of our thinking, I am optimistic that in this next phase we are going to be able to finally move our practical implementations forward at the scale and pace they require to show the dramatic success they are capable of. OK, I admit I tend to be on the optimistic side — but I think that’s part of what it takes to create the greatest revolution in the history of humanity!

4 thoughts on “Ben Goertzel 2020 Interview on Artificial General Intelligence”

  1. OpenCog needs to demonstrate more compelling examples to motivate investors and academic interest. Conversational AI or Common Sense come to mind. In each case, millions of mental representations (MR) interact with millions of rule agents (RA). MRs can be any properties, types (classifications), sequences (plans), hierarchies (objects), and associations used to simulate the state of the world and enable the mind. MRs themselves are fleeting, cached, constantly in need of re-assertion by RAs. MRs must retain the when, where, how, and why behind their existence.

    But RAs pose a much more difficult problem: how to constantly create, update and delete mental rules. RAs are everywhere, responsible for associative search (using MR query templates against MR states), prediction (of alternate MR states), probabilistic reasoning, making plans, and doing things (generating action-based MRs).

    How best to manage RAs? By passing language fragments back and forth to constantly modify them. "All cars have engines according to my Dad, although he's not the most reliable source" and "Correction, some cars have electric motors according to my Mom". Beliefs (MRs) are a provisional reconciliation of a multitude of competing RAs, each of which must consider the source and context. MRs expire quickly, and must constantly be refreshed.

    While efficiency of MR and RA storage and execution is vital (requiring a million+ node compute cluster), getting the interaction right between them is key.

  2. Ok. This is the first AGI article I read that I felt excited about – and it didn’t dwell on SkyNet-Terminator-type outcomes or defining an AGI’s right to intelligent-species’ autonomous status or such pseudo-philosophical drivel. The whole “…artificial scientist, operating a small molecular biology laboratory on its own, designing its own experiments and operating the equipment and analyzing the results and describing them in English..” is mesmerizing. It’s sad that he lingers on inadequate funding in a sour-grapes tone but such is the attitude of being hyper-academic and contemptuous of business/ money-making requirements – such as is the lament of most labs. I hope he aspires to make AGI more ubiquitous, useful to the world, and ultimately marketable/ manufacturable — which may require him to be a bit less citation-/first-glory-driven and a little more Musk-pragmatic.

I don’t think true AGI will ever happen until a company (or companies and govts) focus on true “sensor fusion” of all kinds of robotic sensors (touch, taste, smell, feeling, hearing, etc.) and incorporating all of those sensors into a true humanoid, bipedal, with two arms/hands also sensing (as humans). Lots of work to do to really make that kind of robot possible…

  4. Learn regular expressions and install Notepad++ then search & replace those errant carriage returns and line feeds… that will clean this up.
