Stuart Armstrong talked about how we predict AGI (he worked with Kaj Sotala on this research).
Bottom line: we are bad at predicting AGI (artificial general intelligence).
His main recommendation is to widen the error bars: instead of a point estimate like 2040, give a range like 2017 to 2113.
I think we need to do a better job of defining what we mean by AGI: decompose the problem into more sub-goals and components, decompose the benefits and downsides we can and should expect, and do the same for other large-scale technological possibilities.
What performance should we expect from these predictions?
What do we actually get?
Fields arranged by the basis of their predictions, from less pure to more pure:
AGI predictors, historians, sociologists, economists, psychologists, biologists, chemists, physicists, mathematicians
The bases for prediction run from expert opinion, to past examples, to the scientific method, to deductive reasoning
At the pure end, predictions rest on real objective criteria
James Shanteau has done research on competence in experts.
Good expert predictors vs. bad expert predictors:
Experts agree on the stimuli vs. experts disagree
Feedback is available vs. no feedback
The problem is decomposable vs. not decomposable
Grind is easy to predict; insight is hard.
You can estimate how long something will take when it is just a matter of grinding along.
A common argument is "Moore's law, hence AGI."
Moore's law is grind (a toy grind-style extrapolation is sketched below).
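As a rough illustration of what a grind-style extrapolation looks like, here is a minimal sketch that projects transistor counts forward under a fixed doubling period. The starting count and doubling time are assumptions for illustration, not figures from the talk.

```python
# Minimal sketch of a "grind" extrapolation: project transistor counts forward
# assuming a fixed doubling period (Moore's-law style). The starting count and
# doubling time are illustrative assumptions, not figures from the talk.

def moores_law_projection(start_year, start_count, doubling_years, target_year):
    """Projected transistor count at target_year under steady doubling."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    # e.g. roughly 2.3e9 transistors per chip around 2012, doubling every ~2 years
    for year in (2020, 2030, 2040):
        count = moores_law_projection(2012, 2.3e9, 2.0, year)
        print(f"{year}: ~{count:.2e} transistors per chip")
```

The point of the grind/insight distinction is that this kind of extrapolation only covers the hardware curve; it says nothing about when the insight needed for AGI arrives.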
257 AGI-related predictions were collected (by the Singularity Institute).
95 of them are timeline predictions.
These were transformed into median predictions (a small aggregation sketch follows below).
Predictors were split into experts and non-experts.
7 predictions fall past 2100.
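A minimal sketch of that kind of aggregation, assuming the dataset boils down to predicted years with an expert/non-expert label; the sample values are placeholders, not the actual Singularity Institute data.

```python
# Aggregate timeline predictions into medians, overall and by expert status.
# The sample data and labels below are made-up placeholders.
from statistics import median

# (predicted AGI year, is_expert) -- illustrative values only
predictions = [
    (2030, True), (2045, True), (2100, False),
    (2025, False), (2060, True), (2040, False),
]

all_years = [year for year, _ in predictions]
expert_years = [year for year, is_expert in predictions if is_expert]
non_expert_years = [year for year, is_expert in predictions if not is_expert]

print("median (all):        ", median(all_years))
print("median (experts):    ", median(expert_years))
print("median (non-experts):", median(non_expert_years))
```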
Maes-Garreau law: the predicted breakthrough happens just before the predictor dies.
The data do not bear this out; predictions are not based on expected lifespan at the time of prediction.
About one third of predictions fall 15-25 years in the future (a small check of both patterns is sketched below).
Not soon, but not too far off.
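A minimal sketch of those two checks, assuming each record carries the year the prediction was made, the predicted AGI year, and the predictor's expected death year; all values are illustrative placeholders.

```python
# Two checks on timeline predictions: (1) Maes-Garreau -- does the prediction
# horizon track the predictor's remaining lifespan? (2) How many predictions
# land 15-25 years out? All records below are illustrative placeholders.
from statistics import correlation  # Python 3.10+

# (year prediction was made, predicted AGI year, predictor's expected death year)
records = [
    (1990, 2010, 2035),
    (2000, 2030, 2045),
    (2005, 2060, 2050),
    (2010, 2025, 2070),
]

horizons = [agi - made for made, agi, _ in records]
years_left = [death - made for made, _, death in records]

# Maes-Garreau check: correlation between prediction horizon and remaining lifespan
print("correlation(horizon, years left):", round(correlation(horizons, years_left), 2))

# Clustering check: share of predictions made for 15-25 years in the future
in_band = sum(15 <= h <= 25 for h in horizons)
print("fraction 15-25 years out:", in_band / len(horizons))
```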
There is no evidence that the experts have any predictive advantage.
Uncertainty should be spread over a wide range.
The current best timeline predictions are for whole brain emulation.
That roadmap is very decomposed, the grind is justified, and the assumptions and scenarios are clear.
It integrates new data and considers multiple paths to get there.
Scenarios cover no overhang, one overhang, and two overhangs.
The uncertainty is spread over the century.
Simplified Omohundro-Yudkowsky thesis:
Behaving dangerously …
Many AGI designs have the potential for unexpected dangerous behavior.
AGI programmers should demonstrate to moderate skeptics that their design is safe.
Is the thesis wrong, in your opinion?
Our own opinions are not strong evidence.
Philosophy has some useful things to say.
AGI timeline predictions are problematic.
