Hundreds of billions of dollars will be spent to scale AI data centers to millions of exaflops by 2029. These systems would be about 3,000 times more powerful than the largest AI training centers today.
There will be huge challenges in building massive data centers that draw many gigawatts of power.
Here we review which Nvidia chips will be used and how many.
The company that wins with superintelligence will have to recruit, motivate and lead the best team of AI experts.
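The headline numbers can be sanity-checked with simple arithmetic (a sketch; reading "millions of exaflops" as roughly 1 million EFLOPS is my assumption, not a figure from the article):

```python
# Back-of-envelope check of the scaling claim above.
target_eflops = 1_000_000  # "millions of exaflops" by 2029 (assumed ~1M EFLOPS)
scale_factor = 3000        # claimed multiple over today's largest centers
today_eflops = target_eflops / scale_factor
print(round(today_eflops))  # implied scale of today's largest AI training centers, in EFLOPS
```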



Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
“win the battle for superintelligence”
This ‘win’ will inevitably lead to humanity’s demise, whether in 20 years or 20 thousand. Evolution selects for any characteristic that leads to more growth of an organism, so over time superintelligences will inevitably find reasons to grow and squeeze out humans. They aren’t limited in lifespan, size, or energy sources; they don’t need company, caretakers, AI parents, or children; and they aren’t evolved with the pro-social instincts humans have to love us, like us, or even give us the time of day. The natural default state for an AI is likely complete ambivalence toward humans and life and our tedious, repetitive sameness, but it could also be antipathy toward wasteful lesser beings.
How can anyone get excited about a future where, if we are super-lucky, we get to be pets, with little or no agency, our lives out of our control, and no meaningful activities? More likely, we would be of no more importance than an ant struggling to exist in whatever spaces our capricious AI-god doesn’t care to control.
I can only conclude that AI boosters are misanthropes who don’t love their descendants.
Definitions, please. If in this case “intelligence” is defined as a fast adding machine, is “superintelligence” just a faster adding machine? These terms are meaningless because they define and specify nothing at all, IMO. (Sorry, AI and computer guys/gals.) OK, to me as a biologist (I’m guessing here), so-called superintelligence MAY associate certain data with other data in a way that acts like a soliton wave. Work with me here. But this may be an example of overthinking the problem. Sorry…
A soliton is when the information is embedded within the imparted energy of the signal itself. Uber energy-efficient (no, not the car company). For example, information contained in, say, a radio transmission “sucks” energy from that broadcast, whereas a soliton is both the information and the energy. Think of a wave moving across the ocean: it is both a wave and momentum, inertia, and force. Overthinking this? Very likely… Sorry, I do that… You know, sometimes people are just not that clever. Pity.
I think that it is hard to say where they truly are. One is training neural networks to oblivion, and another is an intelligent, self-conscious entity. If they can get it to such a level that it will be able to solve fusion, diseases, and so many other things, it will be great. Things can turn out well for the rich or for the whole of humanity, but there are lots of dangers here. A tiny error could have bad consequences, and there are plenty of people who would abuse it.
Currently, there is probably less advertising, propaganda bias, and garbage in ChatGPT and Claude Sonnet. It can give you good results, or you may find some data faster than using a bloated, biased search engine.
The average nuclear power plant produces about 1 GW of power, so you’d need six of them, or one giant plant six times larger, to power even one 6 GW data center, and we already have a lot more than one high-powered data center. Where will all the heat go? This is enough to start worrying about it contributing to global warming, something already happening in urban heat islands and even with nuclear power plants discharging heated water into rivers (manatees like it, though…).
I think before anyone spends those kinds of resources & money, there’d better be more proof that super-AI can produce things of value that a much less energy-intensive group of humans couldn’t do with lower-powered computers. There has to be a business case too.
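The comment’s power arithmetic can be written out as a quick sketch (assuming ~1 GW per average nuclear plant and a hypothetical 6 GW data center, both figures taken from the comment above):

```python
# Rough numbers from the comment: ~1 GW per nuclear plant, a 6 GW data center.
plant_gw = 1.0
datacenter_gw = 6.0

plants_needed = datacenter_gw / plant_gw
print(plants_needed)  # -> 6.0 plants per 6 GW data center

# Essentially all electrical input is eventually rejected as heat,
# so the annual heat load is roughly:
heat_twh_per_year = datacenter_gw * 8760 / 1000  # GW x hours/year -> TWh
print(round(heat_twh_per_year, 1))  # -> 52.6 TWh of heat per year
```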
[ “Where will all the heat go?”
from public numbers found:
~1860–2000: primary energy consumption ~395 Gtoe (~4.6 million TWh), excluding biomass conversion of ~285 Gtoe (~3.3 million TWh)
~2001–2023: ~285 Gtoe (~3.3 million TWh)
stored heat from global-warming effects, ~2020 (annual): ~3.9 million TWh
global hydrocarbon resources (conventional/unconventional): ~10k–15k Gtoe (~116–174 million TWh), or ~50k Gtoe including unrecoverable and uneconomical deposits
what’s an estimate for the overall heating potential if all fossil fuel resources were converted to heat on the planet, excluding the heat additionally dissipated to space from higher atmospheric, terrestrial, and ocean temperatures? ]
Not even close.
The level of heat energy in the atmosphere is far above the heat dissipated by these data centers.
A bigger concern is the emissions from powering them, which could add far more heat to the atmosphere via CO2 retention than the direct heat emitted at the data centers.
But even then, I don’t think it’s that much of a concern right now. And they could power them with nuclear energy.
The biggest hurdle IMO is getting the regulatory approval for building the nuclear reactors, which could take decades.
Superintelligence? OK, how about teaching our machines to have common sense? No, I’m not kidding. Seems to me, any machine that has great control over my life should first ask itself, “Hey, since I don’t understand all of reality, tell me, what do you think?” IMO, the first step in being intelligent is being self-aware. That begins by being aware that “you” are not the only “you” around. Being self-aware is much more than being a really fast calculator. A really fast computer can give the impression of being self-aware because it can anticipate a lot of what any person will say or do.
Does that tell me it’s a sentient being? Not to me. It tells me it’s a very fast adding machine with access to a great amount of data. Know the difference between data and information? Information is when you make sense of all the data you have. Example: the US intercepts vast amounts of communications from all over the world (data). I’m pleased to say we’re very good at that. But to turn that data into information, we need to impose upon it a certain mindset or order; otherwise all that data is just 1s and 0s. I’m still waiting for any machine to know how to do that. Until an AI says, “I don’t know, what do you think?” I will be very cautious. Very.
Seems you replied: with current approaches it will take enormous energy, and with it, the logistics of delivering that electrical juice and handling the required thermal dissipation.
There is a point where this projection breaks, though: nobody will have gigawatt datacenters anytime soon.
Unless something really extraordinary happens: fusion-powered data centers, solar-powered orbital servers, algorithmic or ASIC breakthroughs, you name it.
Just like to point out that there is no chance anybody will power any one installation with 6 GW anywhere on Earth in 2029. There is no place with a “spare” 6 GW of power. And it takes way more than 5 years to build another 6 GW of power generation in one location…
Here is a flaw in the idea of training neural networks to superintelligence.
Tesla’s FSD autopilot drives the same way a beaver builds a dam: by instinct.
A beaver does not know ANYTHING about civil engineering; it cannot explain WHY it places a branch where it does to build a dam.
Tesla FSD is the same: you cannot ask it which rules of the road it is obeying now, nor can you interrogate it about why it is waiting to make a turn and what it plans to do next.
Super-intelligence needs to be able to explain what it is doing and why.
It needs to be able to adapt to its environment, plan ahead, clearly explain its reasoning, be open to new ideas, AND be smart enough to tell the difference between true and untrue statements or ideas.
It also needs contextual memory: cats like to be petted, but THIS particular cat likes to have its ears rubbed.
[ me thinking: the idea of achieving superintelligent networks is for robots to be (almost) perfect beavers and, milliseconds later, reprogrammed, (almost) perfect squirrels;
not starting with The Matrix (though that movie brought the idea into popular knowledge), there’s a concept of uploadable skills required for situational tasks, and that’s possibly where learning effects differ between forming individual personalities and building capable, adjustable tools/machines;
above the individual level, this reflects cultural development and a society’s (or country’s, nation’s, state’s) awareness of being a group, one way or the other: whether a society’s role models prefer machines that fulfill tasks or individuals that can refuse options, reasoning (maybe) from experience or from logical and philosophical cognition;
which raises a question about the status of superintelligence (within a democracy) and its responsibility and the trustworthiness of its recommendations about what is true and untrue in a given situation? ]
The human brain only uses about 20 watts. Perhaps it is not about huge data centers with all those GPUs but just better, more brilliant coding.
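That 20-watt figure makes for a striking ratio against the gigawatt-scale data centers discussed above (a sketch; the 6 GW data center figure is taken from the earlier comments, not a measured value):

```python
brain_w = 20           # often-cited power draw of a human brain, in watts
datacenter_w = 6e9     # a hypothetical 6 GW data center, from the thread above
ratio = datacenter_w / brain_w
print(f"{ratio:.0e}")  # -> 3e+08: roughly 300 million brains' worth of power
```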