Ark Invest forecasts a massive drop in the cost to train generative AI models alongside a massive increase in model size, to hundreds of trillions of parameters. The limiting factor will not be training cost but training data: those with better, proprietary training data will be the winners.
Current increases in model size are running into limits on model quality and accuracy. However, thousands of companies and billions of dollars are being spent to surpass those limits.
The currently foreseeable limits seem to be near or beyond human capabilities. This will make future AI very economically valuable and impactful.
The human brain has roughly 100 billion neurons and more than 100 trillion connections in a biological neural network, the biological analogue of parameters. In 2022, Graphcore laid out a roadmap for IPU technology to be used in an AI supercomputer delivering the following capabilities:
Over 10 Exa-Flops of AI floating point compute
Up to 4 Petabytes of memory with a bandwidth of over 10 Petabytes/second
Support for AI model sizes of 500 trillion parameters
3D wafer on wafer logic stack
Fully supported by Graphcore's Poplar® SDK
Expected cost: ~$120 million (configuration dependent)
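A quick back-of-the-envelope check shows that the memory and parameter figures on the roadmap are consistent (the 8-bytes-per-parameter storage assumption is mine, not Graphcore's):

```python
# Sanity check: does a 500-trillion-parameter model fit in 4 PB of memory?
PARAMS = 500e12       # 500 trillion parameters, from the roadmap
BYTES_PER_PARAM = 8   # assumed 64-bit storage per parameter
ROADMAP_MEMORY_PB = 4 # roadmap memory capacity, in petabytes

# Memory required to hold every parameter, converted to petabytes.
required_pb = PARAMS * BYTES_PER_PARAM / 1e15
print(required_pb)  # 4.0 -- matches the roadmap's 4 PB at 8 bytes/parameter
```

At lower precisions (e.g. 2 bytes per parameter), the same memory would leave headroom for activations and optimizer state.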
Graphcore has been funded with $682 million.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
6 thoughts on “Near Term Future of Generative AI”
The near term future involves getting AI to teach. It can pass exams, great, but can it help a human pass an exam?
The darker side of that would then be tailored propaganda. Neo-Nazi groups would be all over that, “teaching” their followers conspiracy nonsense. They already prey successfully on the terminally mediocre, if they could convince intelligent people then we’d have a massive problem on our hands.
It’s actually the other side’s propagandizing that’s the problem: the people creating these systems and vetting their performance are almost uniformly left-wing, to the extent they have politics at all. There’s been a fair amount of coverage of how this is politically skewing their output.
It’s no surprise that smart people, even most people, skew left of centre. The problem is minority groups such as the aforementioned Neo-Nazis (or “alt-right” as they prefer to be called now) working out of self-interest rather than common interest as they are prone to do.
“most people skew left of centre”?
How do you define “centre” such that this makes sense?
AGI is not a sophisticated answering or drawing or video creation program. AGI is an AUTONOMOUS entity capable of surviving INDEPENDENTLY of its creator. None of the programs can do this nor do we have any idea of how to create this, yet.
Definitely a technological singularity. We have no more idea of what this may mean for everything that follows than the nomads who planted some grass seeds (probably so they could brew more beer) could imagine what would follow from agriculture (city states, writing, math, social stratification, and so on).
Consider that these AGI could also be sharing information with each other constantly. So, while a human airline pilot might get 700 to 1000 flight hours per year, a thousand of them would get a thousand times that. Plus, each new one would start with all the experience of all the ones that came before.
That could eventually be hard to compete with.
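The fleet-learning arithmetic in the comment above can be sketched in a few lines (the figures are the commenter's; the variable names are illustrative):

```python
# Illustrative arithmetic: one human pilot logs roughly 700-1,000 flight
# hours per year, while a fleet of AGI pilots pools every hour flown by
# every member of the fleet.
HOURS_PER_PILOT_PER_YEAR = 1_000  # upper end of the human range cited above
FLEET_SIZE = 1_000                # hypothetical number of AGI pilots

# Collective experience the fleet accumulates in a single year.
fleet_hours_per_year = FLEET_SIZE * HOURS_PER_PILOT_PER_YEAR
print(fleet_hours_per_year)  # 1000000 -- a thousand times one human's total
```

And since each new AGI pilot starts from the fleet's pooled experience rather than zero, the gap compounds every year.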
Comments are closed.