The Time and Cost of Iterating From Inferior AGI to Superhuman AGI

There is a large gap between the current crop of task-specific narrow AI tools and the Artificial General Intelligences (AGIs) envisioned by futurists and science fiction authors.

Ben Goertzel has been working toward AGI, as have OpenAI and other companies and projects.

There would need to be a baby AGI that is trained and improved over years.
Ben has talked about five years to reach idiot-savant-level partial AGI.
He has talked about ten years to reach human-level AGI.
He would then teach the human-level AGI to program and reprogram itself and to make its own hardware.

I would note that huge companies and the entire multi-trillion-dollar information technology industry are focused on increasing programmer productivity and iterating on hardware improvements. This pathway is hyper-competitive. One small group or project would not be able to achieve a dominant and sustainable lead.

A small group could create and develop a more profitable and faster-improving system. This effort would need to gather more resources (i.e., make more money and get more funding).

Ben believes that after you have human-level AGI, you would then make many copies of it, multiplying the artificial intelligence across billions of instances.

However, this is limited. If the first AGIs need $100 million or more in supercomputer resources, then many iterations would be needed to lower the costs. You could not make billions of copies while the hardware cost $100 million each. It could take another 10-20 years to drop the costs and improve the AGIs to 1000 times human level. This assumes the AGI software architecture could scale that far without more extensive reworking.
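The 10-20 year figure follows from simple arithmetic on hardware costs. A back-of-envelope sketch, assuming a Moore's-law-like cadence where cost-performance halves roughly every two years (the $100 million starting point and $100,000 commodity target are illustrative figures, not from any forecast):

```python
import math

def years_to_reach(start_cost, target_cost, halving_years=2.0):
    """Years until hardware for a fixed workload falls from start_cost
    to target_cost, assuming cost halves every halving_years."""
    doublings = math.log2(start_cost / target_cost)
    return doublings * halving_years

# Dropping a $100 million AGI to commodity-server territory (~$100,000)
# is a factor of 1000 in cost, which is about 10 halvings:
print(years_to_reach(100e6, 100e3))  # ~19.9 years
```

At a two-year halving cadence, a 1000x cost reduction lands at the top of the 10-20 year range; a faster (or slower) cadence shifts the estimate proportionally.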

There is no assurance that the successive S-curves of AGI improvement will be fast and smooth, or that the final S-curve in this relay race of improvement will avoid plateauing.
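The relay-race worry can be made concrete with logistic curves. In this hypothetical sketch (all parameters are illustrative), total capability is the sum of successive S-curves; if the next curve starts late or has a modest ceiling, overall progress stalls near the previous plateau instead of compounding smoothly:

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """One S-curve: slow start, rapid middle, plateau at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t):
    # A relay of two S-curves; total capability is their sum.
    # The second curve's late midpoint creates a plateau between them.
    return (logistic(t, ceiling=1.0, midpoint=5, rate=1.0)
            + logistic(t, ceiling=9.0, midpoint=20, rate=0.8))

# Between the curves (around t = 10) growth stalls near the first
# ceiling; only after the second curve kicks in does it resume:
print(round(capability(10), 2))  # ~1.0  (plateau at first ceiling)
print(round(capability(30), 2))  # ~10.0 (after the second S-curve)
```

Whether real AGI progress looks like the smooth hand-off case or the plateau case depends entirely on how well-timed and how tall each successor curve turns out to be.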

There would also be abundant specialized and task-specific super AIs competing with AGIs.

Superhuman AGI will emerge in a world where there is already abundant superhuman single-task and multi-task narrow AI. There will be many companies and many pretty good AGIs.

There will be improving iterations of the factories and research for making better hardware.

So far, the major AI generations have each lasted 10-20 years: neural nets, expert systems, deep learning, reinforcement learning, etc.

Ben Goertzel says true AGI will require advances in (at least) four different aspects.

1. It will require coordination of different AI agents at various levels of specificity into an overall complex, adaptive AI network — which is the problem addressed by the SingularityNET blockchain-based AI framework.

2. It will require bridging of the algorithms used for low-level intelligence such as perception and movement (e.g. deep neural networks) with the algorithms used for high-level abstract reasoning (such as logic engines).

3. It will require embedding of AI systems in physical systems capable of interacting with the everyday human world in richly nuanced ways — such as the humanoid robots being developed at Hanson Robotics.

4. It will require the development of more sophisticated methods of guiding abstract reasoning algorithms based on history and context (an area lying at the intersection of AGI and automated theorem proving).

All of these aspects of the AGI problem are topics of active research by outstanding teams around the world, making it plausible that AGI at the human level and beyond will be achieved during our lifetimes.