Josh Hall has an interesting part 3 in his Singularity series of articles. Here he ballparks the number of researchers, developers, and resources working to advance computer hardware and Moore's law at about 300,000 people and some tens of billions of dollars.
The proportion of scientists and engineers in the US population can be estimated at 1%, and those in cognitive-science related fields as 1% of that. Thus we can estimate the current rate of improvement of AI as being due to the efforts of 30,000 people. There is a wide margin for error, including the fact that there are many cognitive scientists outside the US.
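The 30,000 figure is just the two 1% factors applied in sequence, assuming a US population of roughly 300 million (an assumption on our part, matching the era of the article):

```python
# Back-of-envelope estimate of the US cognitive-science workforce.
# The 300 million US population figure is an assumption.
us_population = 300_000_000
scientists_and_engineers = us_population * 0.01       # ~1% of the population
cognitive_scientists = scientists_and_engineers * 0.01  # ~1% of those

print(int(cognitive_scientists))  # 30000
```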
It is not clear what a sustainable rate of reinvestment would be for an AI attempting to improve itself. In the general economy, it would require the same factors of production — capital, power, space, communication, and so forth — as any other enterprise, and so its maximum reinvestment rate would be its profit margin. Let us assume for the moment a rate of 10%, 1000 times the rate of investment by current human society in AI improvement.
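The "1000 times" multiplier follows from comparing that 10% reinvestment rate against the fraction of human effort society currently devotes to AI improvement: 30,000 researchers out of roughly 300 million people is 0.01% of effort. A minimal sketch, under those assumptions:

```python
# Why 10% reinvestment is ~1000x the current societal rate:
# society devotes ~30,000 of ~300 million people to AI improvement.
current_rate = 30_000 / 300_000_000   # 0.0001, i.e. 0.01% of human effort
ai_rate = 0.10                        # assumed 10% reinvestment by the AI

print(round(ai_rate / current_rate))  # 1000
```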
We estimate human productivity at intelligence improvement by assuming that the human cognitive science community is improving its models at a rate equivalent to Moore's Law. As this is the sum effort of 300,000 people, each human's productivity coefficient is 0.000002.
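One reading that makes the 0.000002 coefficient come out (our reconstruction, not stated explicitly in the article) is to take Moore's Law as roughly 60% capability improvement per year, about a doubling every 18 months, and divide that annual rate among the 300,000 contributors:

```python
# Reconstructing the per-person productivity coefficient (an assumption):
# Moore's law at a doubling every ~18 months is ~60% improvement per year;
# dividing that rate among 300,000 contributors gives each person's share.
moores_rate_per_year = 0.60      # assumed annual improvement rate
community_size = 300_000

coefficient = moores_rate_per_year / community_size
print(f"{coefficient:.6f}")  # 0.000002
```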
Let us now consider a fanciful example in which 30,000 cognitive science researchers, having created an AI capable of doing their research individually, instantiate 30,000 copies of it and resign in favor of them. The AIs will be hosted on commercial servers rented with the salaries of the erstwhile researchers; the price per MIPS of such a resource will be assumed to fall, and thus the resources available at a fixed income to rise, with Moore's Law.
At the starting point, the scientific efforts of the machines would equal those of the human scientists by assumption. But the effective size of the scientific community would increase at Moore's Law rates. On top of that, further improvements would come from the fact that continued research in cognitive science would serve to optimize the machines' own programming. Such a rate of increase is much harder to quantify, but a few studies tend to show a (very) rough parity between Moore's Law and the rate of software improvement, so let us use that here. This gives us a total improvement curve of double the Moore's Law rate. This is a growth rate that would increase effectiveness from the 30,000 human equivalents at the start to approximately 5 billion human equivalents a decade later.
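The decade-long jump is just compound doubling at twice the Moore's Law rate. The exact doubling time used in the article isn't stated; assuming a hardware doubling time of about 14 months (our assumption), the combined hardware-plus-software doubling time is about 7 months, which lands in the same ballpark as the quoted ~5 billion:

```python
# Compounding at double the Moore's-law rate over a decade.
# The ~14-month hardware doubling time (so ~7 months combined with
# software gains of equal magnitude) is an assumption on our part.
start_equivalents = 30_000
combined_doubling_months = 7
months = 10 * 12

growth = 2 ** (months / combined_doubling_months)
print(round(start_equivalents * growth / 1e9, 1))  # ~4.3 billion human equivalents
```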
So it looks like once greater-than-human AI is achieved, the way to maximize the results is for a large government to increase the resources invested in AI development, and in putting the greater-than-human AI to use, up into the tens and hundreds of billions of dollars as fast as possible.
This will initially be a difficult decision because the economic payback might not be clear-cut. However, AIs that were getting closer and closer to human level would presumably already be attracting escalating investment. It would take not just 30,000 researchers but a government making a BIG bet on AGI.
This also goes along with my Mundane Singularity articles, which include concrete/inkjet printing of buildings. To get the maximum and most significant benefit, a society has to restructure its economy and bet a lot of resources on the new technology paradigms.
At some point in the maturity of a new technology, someone has to ramp it up and make some big bets, just as people had to ramp up factory assembly lines and steam and combustion engines.
The same thing goes for greening the desert or clean energy. People, companies, and governments have to bet and build big to have a big impact.
Space technology is the same: you do not get the big benefits from space unless you make big bets and mount powerful, significant efforts to industrialize space. Scalable plans are needed in which low-cost robotics are developed to produce cement on the Moon (or on Mars or asteroids) for building large-scale energy and industrial capacity.
If you do not take into account how much activity is already occurring, then you do not move enough resources into higher-growth areas, and your overall economic growth is mostly unchanged.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.