True AGI expected in 2026 (possibly 2027), with superintelligence by ~2030 surpassing all human intelligence combined.
A 100X increase in intelligence density will reduce the HBM memory bottleneck.
10X AI gains every year going forward.
Reusable rockets will be 30 times faster than planes, move more cargo, and there will be ten times more giant rockets than large planes.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
The major AI engines are already smarter than 99% of humans at most things that can be queried about. So the singularity, while interesting, won't change much at this point.
The problem at this point isn't answers; it's implementation in the real world. That can mean the physical world, but also the political/policy world, where resistance to change is very high, and attachment to the status quo (even to competing views of the status quo, none of which works particularly well) is stronger than logic, benefits, or any other upside of change.
Humans are always the bottleneck; with AI, more so than ever.
[ But humans are the reason AI (or AGI) exists.
Humans may be the bottleneck on the output side of AI, but they are also a source of information. Input comes from the natural surroundings too, but the exceptional ideas (written language, the wheel and hub, consciousness, and perhaps even deciding what to add next?) come only from humans.
No one measures politics on a scientific efficiency scale; the overall measure is growth of wealth (mainly visible as growth in dollars and GDP), from an individual or statistical perspective.
What is the exponential input without billions of humans and all other forms of life?
Technical (genetic?) randomness, without yet controlling the technical side or considering the effects, seems a limited approach.
Do AI models separate or differentiate between human ideas and input data from the natural surroundings, e.g. from sensors? (thx) ]
As one out of 8 billion turkeys, it's hard to get positively excited about the farmer sharpening the AI axe.