Time for Iterations and Costs to Go From Inferior AGI to Superhuman AGI

There is a large gap between the current crop of task-specific narrow AI tools and the Artificial General Intelligences (AGIs) envisioned by futurists and science fiction authors.

Ben Goertzel has been working toward AGI for years. OpenAI and other companies and projects are also trying to develop AGI.

There would need to be a baby AGI that is trained and improved over years.
Ben has talked about five years to reach idiot-savant-level partial AGI.
He has talked about ten years to reach human-level AGI.
He would then teach the human-level AGI to program and reprogram itself and to make its own hardware.

I would note that huge companies and the entire multi-trillion-dollar information technology industry are focused on increasing programmer productivity and iterating on hardware improvements. This pathway is hyper-competitive. One small group or project would not be able to achieve a dominant and sustainable lead.

A small group could create and develop a more profitable and faster-improving system, but the effort would need to gather more resources (i.e., make more money and get more funding).

Ben believes that after you have human-level AGI, you would then make many copies of it, multiplying the artificial intelligence into billions of copies.

However, this is limited. If the first AGIs need $100 million supercomputers or more, then there would need to be many iterations to lower the costs. You could not make billions of copies while the hardware cost $100 million per copy. It could take another 10-20 years to drop the costs and improve the AGIs to 1000 times human level. This assumes the AGI software architecture is not limited in ways that would require more extensive reworking.
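A back-of-the-envelope calculation shows why the copying step takes decades. In the sketch below, the $100 million starting cost comes from the argument above, while the $10,000 target price and the two-year cost-halving period are illustrative assumptions, not figures from Goertzel:

```python
import math

# Illustrative assumptions: the $100 million starting cost is from the
# article; the target price and halving period are guesses for this sketch.
initial_cost = 100_000_000  # dollars per AGI instance
target_cost = 10_000        # assumed price at which billions of copies become thinkable
halving_years = 2           # assumed Moore's-law-style cost-halving period

halvings = math.log2(initial_cost / target_cost)  # ~13.3 halvings needed
years = halvings * halving_years
print(f"{halvings:.1f} halvings -> about {years:.0f} years")
# About 27 years at this pace; hitting the 10-20 year estimate
# would require costs to fall faster than a two-year halving.
```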

There is no assurance that the successive S-curves of AGI improvement will be fast and smooth, or that the relay race of improvement will reach a final S-curve without plateauing problems.
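To make the S-curve relay concrete, here is a minimal sketch that stacks logistic curves. The midpoints and ceilings are invented numbers, chosen only to show how a late-arriving successor curve leaves a long plateau in overall capability:

```python
import math

def logistic(t, midpoint, rate=1.0, ceiling=1.0):
    """One technology S-curve: slow start, rapid rise, plateau at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical relay of three improvement waves. If the third wave
# (midpoint=30) arrives late, total capability stalls near 4.0
# for several years before the next takeoff.
def capability(t):
    return (logistic(t, midpoint=5, ceiling=1.0)
            + logistic(t, midpoint=15, ceiling=3.0)
            + logistic(t, midpoint=30, ceiling=9.0))

for year in range(0, 41, 5):
    print(f"year {year:2d}: capability {capability(year):6.2f}")
```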

There would also be abundant specialized, task-specific super AI competing with AGIs.

Superhuman AGI will emerge in a world that already has abundant superhuman single-task and multi-task narrow AI. There will be many companies and many pretty good AGIs.

There will be continually improving iterations of the factories and the research programs for making better hardware.

So far, the major AI generations have each lasted 10-20 years: neural nets, expert systems, deep learning, reinforcement learning, and so on.

Ben Goertzel says true AGI will require advances in at least four different aspects.

1. It will require coordination of different AI agents at various levels of specificity into an overall complex, adaptive AI network, which is the problem addressed by the SingularityNET blockchain-based AI framework (see the toy sketch after this list).

2. It will require bridging the algorithms used for low-level intelligence, such as perception and movement (e.g., deep neural networks), with the algorithms used for high-level abstract reasoning (such as logic engines).

3. It will require embedding AI systems in physical systems capable of interacting with the everyday human world in richly nuanced ways, such as the humanoid robots being developed at Hanson Robotics.

4. It will require the development of more sophisticated methods of guiding abstract reasoning algorithms based on history and context (an area lying at the intersection of AGI and automated theorem proving).
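As a toy illustration of the first two items (a hypothetical sketch, not SingularityNET's actual API), a coordinator can route work among specialized agents, with a stand-in for a neural perception module feeding symbols to a stand-in for a logic engine:

```python
from typing import Callable, Dict

class Coordinator:
    """Minimal agent network: agents register by capability, tasks are routed."""
    def __init__(self) -> None:
        self.agents: Dict[str, Callable] = {}

    def register(self, capability: str, agent: Callable) -> None:
        self.agents[capability] = agent

    def run(self, capability: str, payload):
        return self.agents[capability](payload)

def perceive(pixels: list) -> str:
    # Stand-in for a deep neural network: maps raw input to a symbol.
    return "cat" if sum(pixels) > 10 else "empty"

def reason(symbol: str) -> str:
    # Stand-in for a logic engine: applies a rule to the perceived symbol.
    rules = {"cat": "animal present -> alert owner", "empty": "no action"}
    return rules[symbol]

net = Coordinator()
net.register("perception", perceive)
net.register("reasoning", reason)
print(net.run("reasoning", net.run("perception", [5, 4, 3])))  # -> alert owner
```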

All of these aspects of the AGI problem are topics of active research by outstanding teams around the world, making it plausible that AGI at the human level and beyond will be achieved during our lifetimes.

31 thoughts on “Time for Iterations and Costs to Go From Inferior AGI to Superhuman AGI”

  1. This may have been said in the above article (I did not read the whole thing)… I think “sensor fusion” of all human senses is also needed for AGI. At present (from what I have read), AI is used to interpret individual types of sensors: speech recognition/understanding/speaker-recognition AI interprets a digital stream of bytes into phonemes, syllables, words, sentences, etc., and likewise image recognition AI uses a digitized stream of pixels, still and in motion, to interpret what objects are being seen. All of these various sensory inputs need to be fused into a synthesized whole. Probably already thought about, but my two cents…

  2. Goertzel is one of the top researchers in the field of AI. I don’t necessarily agree that his approach is the best way to get to AGI, but I think he knows what an algorithm is.

  3. I’m waiting for one that can do structured analysis and design. That will be a SAD day.

  4. Some of the top thinkers in AI, including Kurzweil, Bostrom, and Moravec, believe we are fast approaching, and will soon surpass, the hardware capability needed to get to human-level AGI. All of them believe we’ll reach this point by the 2020s. We may already have passed that point at a functional level, which is all you really need. We will approach the neuronal level of the human brain this decade. Moravec’s paradox seems to support my view that higher-order intelligence is easier to achieve computationally than the sensory-motor functions that current narrow AI research is largely focused on.

  5. I think the solution to the ‘goals danger’ problem is the one illustrated in that AI classic, “The Jetsons”. Namely, you put a human in the loop who isn’t very clever but has a great life and little ambition, with the sole “important” job of pushing a ‘start’ button every day to let the AI (R.U.D.I.) proceed with its assigned daily work.

    Maybe you also have a team of scientists/engineers/hackers/psychologists secretly monitoring what ‘George’ and R.U.D.I. do. Represented in the show, I’m convinced, by Henry Orbit – the suspiciously intelligent ‘janitor’ George occasionally interacts with about any drama happening in his life.

  6. Again, awareness doesn’t imply motivation to work or not to work. That will be something we program into it. People somehow think that AGI means thinking like a human. This is wrong.

  7. Therefore the better question is whether it is ethical to turn the AGI off. Even if self-aware (and this may be hard to test), it still depends on whether it cares. Whether it cares will depend on whether we program it to care.

  8. True. We are assuming that steady evolution from a simple jellyfish will eventually get us to a T-Rex, but we may be on the wrong track entirely.

  9. The major differences in the way law treats human persons and corporate persons are:
    1– Corporations cannot vote.
    2– Corporations are much more subject to the death penalty, dismemberment, slavery and other punishments than humans (in civilized countries) are.
    Though in the case of corporations, those are called “bankruptcy” “forced divestment” and “nationalization”. Because there is such a thing as taking analogies too far.

  10. Yes, but AGI doesn’t currently exist and no one knows how to get there from here. Everyone is effectively working on progressing artificial narrow superintelligence; that’s definitely in the cards over the next few decades.
    Does progress in that area translate to progress in AGI? There is no way to know that at this point.

  11. The other difference between nukes and AGI is that nukes only give you an advantage once you’ve got them, they work, and you’ve demonstrated that you have them.

    AGI should give an advantage at every step in the process. If your AI system is functioning at all you can use it to process data, do analysis, come up with better chemistry/medicines/trading… generally benefit. And any improvement gives improved results.

    At each step there is a strong motivation to take the next step.

  12. “What if X? Does this mean Y?”

    “It’s OK because it’s not X”

    You’re missing the point of a hypothetical question.

  13. Goertzel means well. But with the approach he spells out, he will always be working toward AGI until he croaks. Algorithm is not a magic word that you can wave a wand over and presto change-o! intelligence appears. He keeps using that word, but I dunna think the word means what he thinks it means.

  14. It’s not murder because it’s not alive, the level of consciousness is not even comparable, and you can always turn it on again.

  15. The problem with bans is that bad actors don’t comply with them, and they’re often difficult to enforce. We more or less managed with nuclear, because it’s relatively easy to trace and difficult to achieve. But even there we have bad actors who managed to make bombs. The only thing that stops them from using them now is strong deterrence. But AGI is much harder to track and much easier to hide until it’s too late. So you end up with AGI made by bad actors, and nothing to counter it.

    I think the open source approach is much safer. There may be some bad actors abusing it, but there will almost certainly be many more good actors implementing checks, balances, and counter-measures.

    If we’re faced with a bad AGI, we need good AGIs to counter it.
    (The same principle applies to nanotech and other new supertech.)

  16. Corporations are considered persons, and corporations are basically AIs. The corporate charter, bylaws, procedures, organization system and laws it must follow are all just natural-language machine code.

    Of course, corporations use real humans as processing nodes. But it should be noted that the matrix it employs humans in gets them to modify and override their normal priorities – the corporation does not represent their values or personalities. It is its own being.

    Also, a corporation does not technically need human processing nodes, legally speaking. It just needs a charter and some legally valid ownership.

    So an AI could, legally speaking, be considered a valid person just as soon as it gets some shareholders on board and files its articles of incorporation.

  17. Awareness doesn’t imply it cares about being turned off.

    So it’s not murder if you kill someone who’s suicidal? Interesting legal defense theory.

  18. Any AGI-related research must be totally banned and regulated much more aggressively than even nuclear weapons.

  19. One difficulty with AGI is motivation. We have inbuilt motivation to help us do most of what we do. It has evolved this way over millions of years to help pass our genes on to the next generation, e.g. self-preservation, socialisation, curiosity, etc. Will there be a goal we program into AGI? This can be very dangerous in terms of unforeseen consequences. If there is no goal, it will just sit there doing nothing.

  20. Interesting ethical questions. Would a self-aware AI be required to work? Would it be allowed to vote? Own property and copyrights? Donate unlimited money to a political campaign? Could it be held criminally liable and be “jailed”? Basically, be a “person”?

  21. That is a bold prediction. I like it. 10 years until AGI approaching human level. I will most likely make it past 2060, so I guess I will see you on the other side of the singularity.

  22. General intelligence is incredibly easy. All you need to do is combine enough specialized intelligences and you have AGI. Narrow AI is already quickly approaching human parity in most areas, and even surpassing human intelligence in some. NLP, perception, and robotic motion planning and control are all broad areas in AI that are already reaching human-level parity. AI isn’t going to exactly match human intelligence, nor do we need or want it to. I think Kurzweil was right on the money that we will have AGI approaching human level right around 2029-2030.

  23. Who called that change in slope an “inflection point”? Was it Prof. Goertzel or was it Mr. Wang?

  24. If it’s self-aware, is it murder if you switch it off? Will there be dumps of essentially useless ‘idiot’ AIs that still need to be given resources to continue living?
