AGI Expert Ben Goertzel Gives His Updated View on the Path to Superintelligence

Ben Goertzel is a cognitive scientist, artificial intelligence researcher, CEO and founder of SingularityNET, leader of the OpenCog Foundation and the AGI Society, and chair of Humanity+. He helped popularize the term ‘artificial general intelligence’. Goertzel was the Chief Scientist of Hanson Robotics, the company that created Sophia the Robot. He was Director of Research at the Machine Intelligence Research Institute. He is also chief scientist and chairman of AI software company Novamente LLC.

The history of the term Artificial General Intelligence is as follows. In 2002, Cassio Pennachin and Ben were editing a book on approaches to powerful AI with broad capabilities at the human level and beyond, and they were struggling for a title. The provisional title was ‘Real AI’, but Ben knew that would be too controversial. Shane Legg, an AI researcher who had previously worked for Ben, suggested ‘Artificial General Intelligence’. Ben used it for the book and adopted it for his various talks and speeches. In 2010, Ben met a Maryland researcher named Mark Gubrud and learned that Mark had used the term AGI in a 1997 article.

Another major developer of AI and the AGI concept is Peter Voss. Peter runs the AI company Aigo.ai and previously created the company Smart Action. He described his views on AGI in 2017. Peter indicated that after 2000, several AI researchers felt that hardware, software, and cognitive theory had advanced sufficiently to rekindle the original dream of human-like AI and beyond. At that time they found about a dozen people actively doing research in this area and willing to contribute to a book to share ideas and approaches. After some deliberation, three of them (Shane Legg, Ben Goertzel, and Peter Voss himself) decided that ‘Artificial General Intelligence’, or AGI, best described their shared approach.

Ben is very impressed with ChatGPT and the generative AIs. However, he feels these systems are unable to go much beyond their training data sets. The training datasets (much of the internet and databases of almost all published books) are impressive, and reasoning from those datasets is very useful and valuable.

Ben would not be surprised if the follow-ons to these systems, combined with robotic automation, could make 95% of jobs obsolete. However, there are some jobs where we will choose to maintain human interaction. His example is that people still choose to attend live music even if recorded music is more polished.

The reason is that a lot of what people are paid to do is repetition of, and variation on, what has been done before.

It takes time to deploy something that has been proven to work. He gives the example of a system that automates the McDonald’s drive-through.

Three Paths to AGI

Ben discusses three paths to AGI. He thinks that generative AI will plateau without reaching very broad superintelligence.

The first route is actually trying to simulate the brain. Today’s digital neurons are not very close to biological neurons. Ben describes how this path could be dramatically improved.

The second route is an artificial-life type approach. Ben thinks this could be promising if it had the speed and scale innovations that have been applied to neural networks.

The third route is his OpenCog Hyperon route, which hybridizes neural, symbolic, and evolutionary systems.

Symbolic here means logical reasoning, but not necessarily the old-fashioned sort of crisp predicate logic. It is probabilistic, fuzzy, intuitionistic, paraconsistent logic. Probabilistic and fuzzy mean truth comes in degrees rather than strict true/false. Paraconsistent means it can hold two inconsistent thoughts in its head at one time without going ape shit. Intuitionistic, in Ben’s usage, pretty much means it builds up all its concepts from experience and observation.
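To make the "holding inconsistent thoughts" idea concrete, here is a minimal sketch in Python. It is my own illustration (the `TruthValue` and `revise` names are assumptions, not OpenCog's actual API): each statement carries a strength and a confidence, so conflicting pieces of weak evidence get merged by confidence-weighted averaging instead of triggering a crisp contradiction.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # how true, in [0, 1] (fuzzy/probabilistic degree)
    confidence: float  # how much evidence backs it, in [0, 1]

def revise(a: TruthValue, b: TruthValue) -> TruthValue:
    """Merge two (possibly conflicting) opinions by confidence-weighted
    averaging instead of declaring a contradiction."""
    total = a.confidence + b.confidence
    if total == 0:
        return TruthValue(0.5, 0.0)
    s = (a.strength * a.confidence + b.strength * b.confidence) / total
    c = min(1.0, total)  # combined confidence grows but never exceeds 1
    return TruthValue(s, c)

# Two inconsistent observations about the same statement:
pro = TruthValue(strength=0.9, confidence=0.3)   # "mostly true", weak evidence
con = TruthValue(strength=0.1, confidence=0.3)   # "mostly false", weak evidence
merged = revise(pro, con)
print(merged)  # strength ≈ 0.5: the conflict is held, not exploded
```

The point of the sketch is only that the system degrades gracefully under contradiction, which crisp predicate logic cannot do.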

It is still a logic theorem prover.

They are trying to deal with symbolic stuff by actual logic theorem proving. They are using neural nets for recognizing patterns in large volumes of data and synthesizing patterns from that, which they have obviously shown themselves to be quite good at. They are using evolutionary systems, genetic programming type systems, for creativity, because Ben thinks mutation and crossover are a good paradigm for generating stuff that leverages what was known before but also goes beyond it. But again, it depends on the level of representation at which you are doing the mutating and crossing over. So they are integrating neural, symbolic, and evolutionary methods. The systems are not each in separate boxes.
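The crossover idea can be sketched in a few lines of Python. This is a toy of my own (not OpenCog's actual evolutionary code): expressions are nested tuples like `('+', 'x', ('*', 'x', 2))`, and crossover grafts a random subtree of one parent into a random position of the other, so offspring recombine pieces of what already worked.

```python
import random

def all_paths(expr, path=()):
    """Yield every subtree position (a path of child indices) in a nested-tuple expr."""
    yield path
    if isinstance(expr, tuple):
        for i, child in enumerate(expr[1:], start=1):
            yield from all_paths(child, path + (i,))

def get(expr, path):
    """Fetch the subtree at a path."""
    for i in path:
        expr = expr[i]
    return expr

def replace(expr, path, new):
    """Return a copy of expr with the subtree at `path` swapped for `new`."""
    if not path:
        return new
    i, rest = path[0], path[1:]
    return expr[:i] + (replace(expr[i], rest, new),) + expr[i + 1:]

def crossover(a, b, rng):
    """Graft a random subtree of b into a random position of a."""
    pa = rng.choice(list(all_paths(a)))
    pb = rng.choice(list(all_paths(b)))
    return replace(a, pa, get(b, pb))

rng = random.Random(0)
parent1 = ('+', 'x', ('*', 'x', 2))
parent2 = ('-', ('*', 'x', 'x'), 1)
child = crossover(parent1, parent2, rng)
print(child)
```

Note how the "level of representation" remark plays out even in the toy: crossing over expression trees preserves syntactic structure, whereas crossing over raw strings or weight vectors usually would not.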

They are making this large distributed knowledge metagraph. A metagraph is like a graph, but you can have links that span more than two nodes, like three, four, five, or 100 nodes, and you can have links pointing to whole subgraphs.

A hypergraph is a graph which has n-ary as well as binary links. A metagraph goes beyond that: you can have links pointing to links, or links pointing to general subgraphs. Ben has a distributed knowledge metagraph, and there is an in-RAM version of the knowledge metagraph as well. Neural nets, logic engines, and evolutionary learning are all represented inside the same distributed knowledge metagraph. You just have this big graph: parts of it represent static knowledge, and parts represent active programs. The active parts run by transforming the graph, and the graph also represents the intermediate memory of the algorithms. So you have this big self-modifying, self-rewriting, self-evolving graph. In its initial state, some of the graph represents neural nets, some represents symbolic logic algorithms, some represents evolutionary programming, and some just represents large bodies of knowledge, which could be fed in from databases, extracted from large language models, or produced by pattern recognition on sense perception.
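The graph/hypergraph/metagraph distinction can be shown in a small data-structure sketch. This is my own illustration, not Hyperon's actual Atomspace API: the key move is that a link's targets can be any mix of nodes, other links, or whole subgraphs, at any arity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str

@dataclass(frozen=True)
class Link:
    label: str
    targets: tuple  # any arity; each target may be a Node, Link, or Subgraph

@dataclass(frozen=True)
class Subgraph:
    items: frozenset  # a bundle of nodes and links treated as one target

cat, mammal, animal = Node("cat"), Node("mammal"), Node("animal")

# Hypergraph feature: one n-ary link spanning three nodes.
chain = Link("inheritance-chain", (cat, mammal, animal))

# Metagraph feature: a link pointing at another link...
belief = Link("believed-by", (chain, Node("Ben")))

# ...and a link pointing at a whole subgraph.
context = Link("in-context",
               (Subgraph(frozenset({cat, mammal, chain})), Node("biology")))

print(len(chain.targets))  # 3: a single link with more than two endpoints
```

An ordinary graph library would force `chain` into three separate binary edges and could not make `belief` point at an edge at all; that is the expressiveness the metagraph buys.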

Going deeper into what they are doing with Hyperon involves more math than can be covered here, especially without a presentation. Ben wrote a paper, ‘The General Theory of General Intelligence’. In it, he shows how to take neural learning, probabilistic programming, evolutionary learning, and logic theorem proving and represent them all in a common way using a kind of math called Galois connections.

He uses Galois connections to boil these AI algorithms all down to fold and unfold operations over metagraphs.
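The fold/unfold vocabulary is standard in functional programming, and a toy Python example conveys the flavor (this is my own illustration over trees, not the paper's metagraph formalism): an unfold grows a structure from a seed, a fold collapses a structure to a value, and many algorithms are exactly a fold composed with an unfold.

```python
def unfold_tree(seed, split):
    """Anamorphism: expand a seed into a [value, children] tree using `split`,
    which maps a seed to (value, child_seeds)."""
    value, child_seeds = split(seed)
    return [value, [unfold_tree(s, split) for s in child_seeds]]

def fold_tree(tree, combine):
    """Catamorphism: collapse a tree to one value with
    `combine(value, child_results)`."""
    value, children = tree
    return combine(value, [fold_tree(c, combine) for c in children])

# Unfold-then-fold example: factorial.
def split(n):  # n -> (n, [n - 1]) until we bottom out at 1
    return (n, [] if n <= 1 else [n - 1])

tree = unfold_tree(5, split)  # the call structure of 5!
fact = fold_tree(tree, lambda v, rs: v * (rs[0] if rs else 1))
print(fact)  # 120
```

The claim in the paper is far stronger than this toy, of course: the folds and unfolds range over metagraphs rather than trees, and the Galois connections relate the different AI algorithms' versions of these operations.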

3 thoughts on “AGI Expert Ben Goertzel Gives His Updated View on the Path to Superintelligence”

  1. I don’t see many strong AGIs ever being made unless they aren’t like people. To be ‘like people’ they would either have to be truly self-aware (not learning trees) and self-motivated, or else downloads of human minds, however that might be managed (possibly by starting with a human brain and replacing a few cells at a time with nano-machines).

    But other than that, it may not be possible to make them self-motivated. Without hormones, and glands, and hungers, and thirsts, and a thousand other things that influence our behavior constantly, what would motivate them other than human commands?

    I mean, yes, we could have them generate a list of things we might think they would want to do, and then have them generate random numbers to see which they are going to choose as goals, but that would be insane (and probably insanely dangerous) other than as some form of carefully controlled science experiment.

    Seems more likely they will sit like a genie in a bottle, waiting for the owner to rub it and make a wish. And the nature of that wish would be limited primarily by resources available, time available, legal concerns (maybe?), laws of physics, and the limits of the AI’s own intelligence. So be careful of what you wish for.

  2. “digital neurons are not very close to the biological neuron”

    Ever since I started hearing about ‘neural nets’ decades ago I’ve been wondering to what extent the electronic ones actually resemble biological ones.

    Can anyone point me to something which would make that clear?

Comments are closed.