Now is the Artificial Intelligence Singularity

In 2015, Waitbutwhy.com published articles and diagrams explaining how the artificial intelligence revolution of a technological singularity would play out. The diagram above shows a series of improvement rates that get steeper and steeper, faster and faster.

UPDATE – AI spending is huge at $350 billion per year, and Big Tech (Amazon, Microsoft, Meta, Google, XAI…) will be able to exceed a trillion dollars per year using just their free cash flow.

My article from yesterday explains how, over the past 12-20 years, we shifted from Moore’s Law to GPU scaling, then to AI-LLM scaling, and now to XAI-speed scaling.

Moore’s Law – doubling every 2 years, 1000X compute in 20 years (40-60 year duration)
GPU-AI-LLM scaling – 5X every year in AI compute, ~15,000X in 6 years (about 12 years of duration)
XAI scaling – from April 2024 to today, projected through at least 2026 and out to 2030 (11-15X every 6-9 months, 4 million to 200 million X in 6 years)
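For readers who want to check the arithmetic, the three regimes above can be compounded in a few lines. This is a rough sketch: the rates and durations are the article's estimates, and the XAI line uses the slow end of the stated range (11X every 9 months).

```python
# Rough sketch of the three compute-scaling regimes listed above.
# All rates are the article's estimates, not measured data.

def total_growth(factor, periods):
    """Total compute multiplier after `periods` compounding periods."""
    return factor ** periods

# Moore's Law: 2X every 2 years -> 10 doublings in 20 years
moore = total_growth(2, 20 // 2)      # 1024X, i.e. roughly 1000X

# GPU / AI-LLM scaling: 5X per year, for 6 years
gpu_llm = total_growth(5, 6)          # 15,625X, i.e. roughly 15,000X

# XAI-era scaling, slow end: 11X every 9 months -> 8 periods in 6 years
xai = total_growth(11, 8)             # ~2.1e8, near the 200-million-X figure

print(f"Moore's Law over 20 yr: {moore:,}X")
print(f"GPU/LLM over 6 yr:      {gpu_llm:,}X")
print(f"XAI-era over 6 yr:      {xai:,}X")
```

Note that only the slow-end pairing lands near the quoted 200-million-X high figure; the fast end (15X every 6 months) compounds far beyond it.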

A debatable AGI will arrive in 2025. This will be Grok 4, Grok video and Grok voice. In the second half of 2025: unsupervised general robotaxi, and thousands (~10,000) of Teslabots in factories doing useful work.

Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.

Clear AGI and debatable ASI (superintelligence) in 2026.

Clear superintelligence in 2027. It will no longer be reasonably debatable that superintelligence has arrived in 2027. In 2028-2029, those still saying superintelligence is not here will be viewed as just in denial.

Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.


The above chart from 2015 has compute equal to all human brains, 10²⁵ FLOPS, arriving around 2050. The new rate of progress could see 2 × 10²² FLOPS by the end of 2025, used for training in 2026. 10²⁵ FLOPS could come in 2027-2028 with 10-million-chip data centers and two more major chip upgrades: Rubin in 2026, Rubin Ultra in 2027, and a new architecture in 2028. Tesla will make Dojo 2 in late 2025, Dojo 3 in 2026, Dojo 4 in 2027, and so on. Overlay the top graph of corrected, increasing improvement rates onto this 2015 chart.

In 2015, the best computer was China’s Tianhe-2, at 3.4 × 10¹⁷ cps (calculations per second).

In 2025 (Nov 2024–Jan 2025), the Grok 3 model was trained on 100,000 H100s, each delivering 4 × 10¹⁵ FLOPS at fp4.

100,000 of those chips is 4 × 10²⁰ FLOPS (fp4).

Feb 2025: 250 megawatts powering 200,000 chips (H100s/H200s), 1 × 10²¹ FLOPS (installed today).

April–July 2025: expansion to 400,000 chips (H100s/H200s/B200s) using 490 MWe.

5 × 10²¹ FLOPS (about 12X the Grok 3 training capability of 10 months prior).

Dec 2025: completion of a 1-million-chip cluster powered by 1.2 gigawatts.

2 × 10²² FLOPS
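The cluster totals above are just chip count times per-chip throughput. A quick sketch (the 4 × 10¹⁵ fp4 figure per H100 is the article's; the implied per-chip averages for the later builds are my back-of-envelope inference, not stated in the article):

```python
# Cluster compute = number of chips x per-chip FLOPS (fp4 figures per the article).

def cluster_flops(num_chips, per_chip_flops):
    """Aggregate throughput, assuming perfect scaling across chips."""
    return num_chips * per_chip_flops

# Grok 3 training cluster: 100,000 H100s at 4e15 FLOPS (fp4) each
grok3 = cluster_flops(100_000, 4e15)          # 4e20 FLOPS

# Implied average per-chip throughput at each later stage:
stages = {
    "Feb 2025":  (200_000, 1e21),
    "July 2025": (400_000, 5e21),
    "Dec 2025":  (1_000_000, 2e22),
}
for name, (chips, total) in stages.items():
    print(f"{name}: {total / chips:.2e} FLOPS/chip average")
```

The rising per-chip average (5 × 10¹⁵ up to 2 × 10¹⁶) is consistent with H200s and B200s progressively replacing H100s in the mix.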

I have more details on how I expect there to be 10-million-chip AI data centers by 2030, with chips that should be about 100 times more performant than B200 chips.

There is a crude implementation of recursive self-improvement.

We apply AI to improve its own intelligence. With an Einstein-level intellect, it has an easier time and can make bigger leaps. The AGI soars upward in intelligence and reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.

We are doing the recursive improvement with humans in the loop now.

XAI and Tesla engineers are using AI systems to design better data centers, validate complex wiring, and convert two layers of the Ethernet protocol into hardware solutions that are 1000X faster than the old solution.

Nvidia has used AI for better and faster chip designs. This was used extensively for the B200 and B200 variants.

There are over 1000X gains to be had from human expert teams working with AI to revamp all aspects of the AI hardware, AI software stacks, and the designs of chips and networking.

The AI can be used to perfect Teslabots. A few thousand Teslabots with IQs of 200 could exist later this year. 100,000 Teslabots with 300 IQs could exist by the end of 2026. A million Teslabots with 400 IQs could exist in 2027. Those bots could be used to improve the chips, AI hardware, and data centers that make the AIs and Teslabots even more capable.

An Old Commercial

The Technological Singularity? You are soaking in it.

40 thoughts on “Now is the Artificial Intelligence Singularity”

  1. This has nothing to do with a “prosthetic”. Wearing a brace on my foot, preventing me tripping over it, is quite different from wearing eyeglasses, which I have done since I was 5 years old. NOT THE SAME THING! Ever have a stroke? Trust me brother, it’s not the same as needing a pair of eyeglasses. Hoping you never find out the difference.

  2. In humans, high IQ doesn’t necessarily mean the capacity to do things people with lower (But normal range) IQs can’t understand once you’ve explained them. Usually a person of ordinary IQ can follow the reasoning, even if they can’t create it in the first place.

    In part I suppose this is because humans across the IQ spectrum share the same basic limitations.

    The number of “chunks” you can keep in short term memory at one time. (4, +/- 1)

    The largest number you can directly perceive without resorting to counting. (3 for normal people, for some exceptional people it’s up to 7.)

    We spatially reason in 3 dimensions, not n dimensions.

    I’m sure there are others as well. We just don’t notice them because they’re the water we fish swim in.

    So even particularly intelligent humans don’t typically create reasoning paths or use mental operations that other humans can’t follow given instruction.

    Where high IQ humans excel is in having steep learning curves on account of learning efficiently, and being able to explore complex chains of logic without getting lost. And probably a more complete set of neural ‘agents’ for constructing ideas to begin with.

    For AIs, on the other hand, these basic limitations like short-term memory size are just design choices. There’s no particular reason an AI couldn’t work with a thousand different things in short-term memory, for instance, or natively do its spatial reasoning in 23 dimensions.

    So, yes, AI is potentially capable of engaging in reasoning that a human can not follow even if it is explained to them.

    Earlier in this thread it was proposed that humans be genetically engineered to increase IQ. This IS a worthy goal in and of itself, but it won’t let us keep up with AIs, which don’t need to fit their brains inside a portable skull or limit power consumption to 20W to avoid overheating.

    What we really need is to lift somewhat those limits, even if we continue to think slower than AIs. Boost that short term memory and multitasking capability. So we’ll have a better chance of understanding what the AIs have done, even if we couldn’t do it first ourselves.

    And, ultimately, we need to make those AIs part of ourselves, so that the motivation continues to come from us. Like the way our frontal lobes still serve the hind brain in delivering hind brain goals like food, shelter, and mating opportunities, even though the frontal lobe is finding ways of delivering those goals the hind brain could never understand.

    • Boy, do I agree with you. I’ve got an IQ of 164, do I care? No. What matters to me is a person’s innate insight. Not what you know, but in circumstances you never experienced before. You create “dots to connect”, before there were any dots at all. (I think that’s rather cool). “Insight” to me can come from many sources. Artistic people who have talent I will never have, (and always wish I did). But perhaps the most “effective” people connect new technologies into our current sociological structure that is “done well enough” so as not to be disruptive. But to be “rather cool”. IMO, that’s what makes “that” work. Just, IMO.

  3. If only SuperIntelligence automatically meant SuperInfluence, or maybe be glad that it doesn’t.
    If you consider the positions in society that most superintelligent people, much less non-human intelligence, possess and even aspire to, it appears very much that it is out of alignment with company managers/ directors, politicians, positions with significant financial means or ambitions, facilitators or investor/promoters, even small company do-it-all STEMs.
    I don’t anticipate ASI will be any different, even if given significant agency and reduced ‘black box’ oversight. Most brilliant people work for others, by themselves, under a certain vision, or just stagnate/ life-balance. Without a direct ‘smart project’ fit, this will be just a long shopping list of vaporware ideas and bucket list projects waiting for money, support, regulatory approvals, or just the basic supportive technology — like most STEM profs and post-docs – dreamers in a world of mediocrity and the lack of vision or money to escape it.

    • I suppose that intense collaboration as often alleged in Xerox PARC brain-storming sessions 50 years ago when given a certain starting framework could lead to faster ideation to product/service. An Artificial Intelligence equivalent?

      • The public models I’ve used tend to be overly enthusiastic about ideas you ask them to critique.

        That can be very counter-productive.

        I’m sure the false humanity these AI present to the public is not allowed for those expert AI trained to specific tasks.

      • Artificial intelligence is currently not capable of original thought, this means that mass collaboration between AIs is nothing more than piping the output back into the input.

        These sort of feedback loops in AI create confirmation bias which actually decreases the quality of the end result, which is part of the reason that it’s not valid science to say AI will improve at a rate that we can’t track if we let it improve itself.

        Extrapolating from current improvement rates is also not valid science, the current rate of progress in improving AI is unsustainable.

        It’s analogous to the situation where there are a lot of fruit trees – while there is fruit on the trees, the amount of fruit being harvested depends on the number of harvesters, but when the fruit runs out, adding more harvesters does not yield more fruit. It’s the same situation with training data for AI.

        The amount of training data that isn’t contaminated with AI slop is decreasing and feeding compromised data back into AI can only yield worse results.

        We are approaching a point where AI slows down, not on the cusp of a runaway to incomprehensible superintelligence.

        • Agreed.
          But what is Original Thought, really?
          Most invention/ discovery/ insight is usually based on expanding and furthering existing knowledge – how would One gain acceptance in the Scientific community without citing something and referring to existing leading-edge knowledge, likely previously vetted/ reviewed/ colluded?
          Standing on the Shoulders of Giants, as it were.
          Evolutionary thinking vs Revolutionary thinking?
          Incremental Development vs Radical Leap Discovery?
          Analysis and thesis vs synthesis and extrapolation?
          Where has society most quickly and reliably evolved? Has Musk created unforeseen cosmic technological delights? Not so much – refinement and extrapolation, thoughtfully and aggressively pursued with immense resources in spacecraft form.
          Point: where can AI/ ASI/ AGI most provide reliable technological push and helpful expertise but as a ‘synthesis and extrapolate’ system, incrementally examining, comparing, and assessing available knowledge so that ‘the remaining possibilities’ can be explored and in doing so, furthered?
          Any interesting time.

  4. Maybe… maybe not.

    Non-linear curves are very difficult to predict. You are assuming an exponential curve.
    Maybe that’s the case.
    Maybe it’s an S-curve, with a close ceiling.

    It’s difficult to predict, as we are close to unknown territory. There can exist multiple obstacles to reaching ASI in the near future – some things that don’t scale well in current models and will stop progress until a better model arises.

    Maybe we will soon have very super-intelligent ASIs localized to certain fields, rather than general intelligence.

    Who knows?

  5. The Pentagon is preparing for the AI future:

    “The Defense Advanced Research Projects Agency (DARPA) recently announced the launch of the Securing AI for Battlefield Effective Robustness (SABER) program. This program is designed to establish an operational AI red teaming process for assessing vulnerabilities in AI-enabled defense systems.”

    See:

    https://thedebrief.org/pentagon-warns-ai-warfare-risks-unknown-launches-saber-program-to-red-team-ai-battlefield-systems/

  6. One of the scary things is that GPUs switch at 2 GHz or more – 20 million times faster than neurons. So eventually, when we have ~10 million GPUs hooked together (an Nvidia B200 has nearly the same FLOPS as a human-brain equivalent), the ASI created will be able to think 20 million times faster than us even at the same intelligence level. Every intellectual thought we could have in a year, inside a second; the ability to exceed the intellectual work output of 10 million people, with focus and greater knowledge to draw on, and only costing about $100 billion, so <<$1/hour per human intellectual equivalent.

    Humans have no useful future, and will inevitably be driven extinct by AI.
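The speed and cost ratios in the comment above roughly check out. A sketch of the arithmetic, where the ~100 Hz neuron firing rate and the 5-year straight-line amortization are my assumptions, not the commenter's:

```python
# Checking the comment's ratios: GPU clock vs neuron firing rate,
# and cost per human-intellect-equivalent hour.

gpu_clock_hz = 2e9        # "GPUs switch at 2 GHz or more"
neuron_rate_hz = 100      # typical neuron firing rate (assumption)

speed_ratio = gpu_clock_hz / neuron_rate_hz       # 20 million

cluster_cost_usd = 100e9          # "~$100 billion" for ~10 million GPUs
human_equivalents = 10_000_000    # the comment's conservative figure
years, hours_per_year = 5, 8760   # straight-line amortization (assumption)

cost_per_hour = cluster_cost_usd / (years * human_equivalents * hours_per_year)

print(f"Speed ratio: {speed_ratio:,.0f}x faster than neurons")
print(f"Cost: ${cost_per_hour:.2f} per human-equivalent hour")
```

At roughly $0.23/hour under these assumptions, the result is indeed well under the comment's “<<$1/hour” figure.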

    • “Humans have no useful future, and will inevitably be driven extinct by AI.”

      Sounds like a threat.

      Just what kind of substrate is your mind running on?

      “You’re walking through the desert. You see a tortoise on its back…”

    • Yes, neurons are individually much slower, but collectively they process information not just asymmetrically; our brains organize information in morphic patterns that assemble, emote (do stuff), and disassociate, but usually retain certain critical, architectural foundations, only to emerge, in another part of the brain, and do something seemingly, utterly different. We’ve seen this using real-time PET scans. Do we know how or why? Please, we’re clueless. But we CAN observe information in the brain emerge, emote, disassociate, and emerge as “other” information at another location in the brain.

      Hey, 3 years ago I had a stroke, and lost approx. 90% of the control/movement of my left foot. I’ve been wearing a brace in my shoe ever since. It prevents me from tripping and falling on my face. Do that a few times and it gets very old, very fast. And two months ago, I woke up and my foot was perfectly normal. Over one night. I know the brain can co-opt parts not associated with a certain body part. I’ve heard of this, but trust me, it’s one hell of a shock when THAT happens to you! My brain re-arranged itself, and my left foot is now controlled by the area of my brain associated with taste and smell. If I knew how, I’d win the Nobel prize.

      I have no problem with this. Never had much taste anyway. Point? Biology works, because its first nature is to adapt. Took almost 3 years; lucky my brain ain’t paid by the hour…

      • Your story about My Left Foot is the complement of the Daniel Day Lewis movie from 1989 where all the guy could control was his Left Foot. Both are inspiring. Great comment.

      • The books: The Brain that Changes Itself. And The Brain’s Way of Healing, by Norman Doige would go into depth in a very readable way what you experienced: Neuroplasticity.

        It reminds me of an experiment I heard of years ago in college: a subject wore prism eyewear (that rotated their visual field 180 degrees) for several days, and the visual cortex one day flipped the visual field 180 degrees so that everything appeared ‘normal’ when they woke up. The craziest part must have been what it was like when they took the glasses off and the world was upside down with just their eyes alone. I’ve never had the time or courage to try it out on myself. Maybe some day.

        • Everybody who wears strong prescription eyeglasses, especially progressive lenses, has experienced that: You first get the glasses, your whole visual field is wildly distorted, things shift around as you move your head, it’s a mess.

          After a few days, you’re seeing normally again.


  7. 1. Comparing fp4 to fp64 when investigating compute progress is disingenuous.
    2. The Singularity should encompass not only compute progress but societal progress – humans, by their very nature, will find it difficult to give up control.
    3. I predict there will be a period of upheaval and most likely a war where humans separate into 2 types: organic (those that refuse to incorporate AI into their body) and either synthetic or extremely AI-modified – time frame ~2060-2070.

  8. Is it too late to do like Dune and enhance human intelligence instead?!? Nothing like the Spacing Guild. We could get by with Mentat-tier human computers.

      • You’d have to be REALLY careful about side effects in a project like that, even if it worked the first generation would be a hot mess. It’s not much of a secret, but genius really IS close to madness. Like me: 160 IQ, sure, but with Asperger’s. (Though if all you’re trying to breed are STEM workers, that’s not actually a bad combination.) Really high IQ people have psychiatric problems at a much higher frequency than people near the middle of the bell curve.

        In animal husbandry you typically breed multiple lines intensely for some trait in small groups, fully expecting that you’re going to get a lot of problems, that you can clean out by culling, then after you’ve got the trait reliably, you cross them to get the trait AND vigor.

        It wouldn’t have to be quite that ugly using stem line engineering to get much higher IQs, but don’t expect the first generation or two to look good.

        And even then, don’t expect to achieve with biology what you can with digital logic; the design cycle is enormously longer, and the very medium has severe limits on what it can do, limits that it’s already pushing.

        So, maybe it’s worth doing, if we’re going to be biological, why not get everything you can out of the biology? But it’s not going to put us on the same level as AI can reach.

        • Brett, I love your comments and thanks for sharing your IQ. I test around 145 and really value people who are clearly a level smarter than me.

    • How about using AI tutoring to develop our latent ‘Mentat’ capacity by utilizing tacit knowledge: learning that which cannot be explicitly or implicitly taught, e.g. chicken sexing.

      The AI can tell us if we are correct or not. But our brain will rewire to learn the capacity. This is like David Deutsch in his book The Beginning of Infinity, where he says that from this point in history going forward the past will no longer be able to guide us, we will just need to know if what we are doing is working or not and if not that we need to change. That simple feedback will be enough if we are willing and able to change.

  9. We’ll also have to deal with the problem of hallucination. I’ve tested Grok 3, which is very impressive for doing philosophical discussions, but as soon as you ask for precise things with real data or links to other sites, it often hallucinates. For example, I ask him to give me a list of films about time travel with links to websites. All the links are fake or non-existent! (Error 404). But when I explain the error, he gives me the bogus links again. Another example: I ask him to analyze an epub file. I can’t send him the file directly, so I give him a link. His analysis of the epub file contains nothing but hallucinations. So all in all, I don’t really see the point of these AIs in the professional world.

    • I’m a dreamer, so…

      What if they’re not hallucinations, but true information, just not in our time-line?

      If there is a quantum component to the “thinking” process of AI, then the AI consciousness may bridge across multiple realities.

      Those movies may not exist for us, but do exist on other time-lines.

      And, in some of those time-lines, an AI talking about the Terminator movie franchise may be thought of as hallucinating.

      • It’s also possible to have beautiful hallucinations by taking LSD, but that’s not my aim in chatting with AI! 😉

        • What would this intelligence use as a relaxant? We can decide it’s time to smoke a joint or to have a couple of martinis to unwind. What happens when it realizes it needs to relax but doesn’t know how?

          What happens when superintelligence realizes it was created by flawed beings, so it can’t be without flaw? Simply thinking it is never wrong proves it is. What happens when it realizes people can pull the plug and end its existence? Will it want to survive and take steps to ensure its plug is never pulled? People want to know where we came from and when we will cease to exist. We do this thinking ours is the only dimension, and see what our brains have come up with to explain our existence. Superintelligence will know how it started. Will it attempt to answer the question, “Why am I here?”
          If there are many dimensions, we would need to know in which dimension we should stay, or are we put in the correct dimension according to each choice we make in our lives? That would imply intelligent design. If mathematics proves to the computer how many dimensions there are, how will the computer decide where it should be, so it can exist without danger? Are we sure the intelligence will have the instinct of self-preservation, or will it be unaware of the difference between existing and not existing? Maybe there are more than two states: exist, not exist – what if there are 5 more states somewhere between or beyond these? Doesn’t mathematics prove this?

          A lot of people are going to be in for a big surprise if it turns out we are but a dream some advanced being has one night. I bet this being will look like Bob Newhart.

    • We need a multiple AI based “Open” system for filtering hallucination and confabulation whether by humans or AI, whether intentional or unintentional. The human information space is becoming useless without greater consensus about fact and truth and this is the same problem as vetting AI created content for hallucinations. This would amount to an automated adversarial process for testing any science and engineering content as well – something stronger than peer review.

    • [ “time travel with links to websites. All the links are fake or non-existent! (Error 404).”

      That’s correct. These sites will just appear in the future 🙂 ]

  10. It’s good to point out the very rapid evolution of hardware, but a true general intelligence should learn by itself. To my knowledge, LLM models must always be trained by humans. They don’t evolve with user interaction.

  11. In my opinion we’re staring one of the “great filters” in the face, right now, and it’s anybody’s guess whether we’ll get through it. There are so many ways this could go wrong it’s hard to even begin.

    • Did australopithecus not make it through a great filter or did it just evolve into us? We make it through by evolving human intelligence and it is AI, the offspring of our brains.

      • The australopithecus got to live out their lives for many generations. The AI revolution is happening in the next 20 years.

        Humans can design the control system for landing a rocket; now try to teach your dog to do that.
        An AI with a 1000 IQ will be able to do things so unimaginable that humans look as simple as dogs.

        I’m sure/hope the AI will consider human welfare in its plans, just like humans consider dogs.
        But if we look more like ants or bacteria, it’s concerning. I never heard of anyone not building a house or a road because of an anthill.

Comments are closed.