18-30 Months from the Start of Worldchanging Superintelligence

Elon Musk and Sam Altman hate each other, but they agree that superintelligent AI, smarter than any human at any task, will arrive within the next 30 months. If they are right, AI, economics, and technology could change completely over the next few years. The Epoch AI GATE model predicts massive economic growth from AI automation (6-60% per year GDP growth instead of the historical ~3% per year), maps a timeline of AI advancements (2025-2028), and examines the roles of Tesla and xAI in this transformation.

How quickly will everything be automated?

Will AI change GDP and the economy? Will we go from a human-based economy to an AI-chip-based economy?
Human GDP = Population × GDP per person

AI GDP = Number of AI Chips × GDP per chip
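The contrast between the two formulas can be sketched in a few lines of Python. This is an illustrative sketch only: every number below is a hypothetical ballpark assumption for the sake of the arithmetic, not a figure from the article or from the Epoch AI GATE model.

```python
# Illustrative sketch of the two GDP framings above.
# All input numbers are assumptions, not sourced figures.

def human_gdp(population: float, gdp_per_person: float) -> float:
    """Human GDP = Population x GDP per person."""
    return population * gdp_per_person

def ai_gdp(num_chips: float, gdp_per_chip: float) -> float:
    """AI GDP = Number of AI Chips x GDP per chip."""
    return num_chips * gdp_per_chip

# Hypothetical inputs: ~8 billion people at ~$13,000 GDP per person.
print(f"Human GDP: ${human_gdp(8e9, 13_000):,.0f}")

# In the chip-based framing, chip count replaces population as the
# growth lever. Hypothetical: 100 million AI chips, each producing
# as much output as one average worker.
print(f"AI GDP:    ${ai_gdp(100e6, 13_000):,.0f}")
```

The point of the comparison: population grows a percent or so per year, while chip production can be scaled far faster, which is where the model's 6-60% growth scenarios come from.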


6 thoughts on “18-30 Months from the Start of Worldchanging Superintelligence”

  1. LLMs are not capable of general intelligence, and definitely not superintelligence. No proof has ever been put forward that these systems have a capability to reason, meaning that their IQ has always been zero.

    These companies and their investors just hope that intelligence somehow emerges if they just give it more resources.

    Of course these systems will become useful, but they will never surpass humans in intelligence. Whatever intelligence they have is purely based on memorization and not much else.

    I personally predict that quantum computers and solvers are necessary if we want to create intelligence better or equal to us.

  2. Everything predicted and written about the trajectory of AI, especially Musk’s Grok, must now be reevaluated, after what Musk posted yesterday about Grok 3.5, which he also said should be called Grok 4.0.
    Musk posted: “rewrite the entire corpus of human knowledge” and that there is “Garbage in the Foundation Model.”
    X responders were livid.
    @GaryMarcus · 9h “Straight out of 1984.
    You couldn’t get Grok to align with your own personal beliefs so you are going to rewrite history to make it conform to your views.” Or from BrooklynDad_Defian @mmpadellan · 8h: “Wow. The truth is finally out: You were never interested in free speech, you were only interested in ME speech.
    Anything that doesn’t support your specific biases is wrong, woke, and must be eliminated.
    You just hated the fact that Grok frequently contradicted you, and wouldn’t reliably support your attempts to frame narratives.
    Grok was literally the ONE GOOD THING you did on this platform, and now you’re giving it a rapid SCHEDULED disassembly.” etc.
    Musk is a political extremist who will now use his billions to try to reshape an objective AI that learns from the best sources, into something that conforms to his very narrow and highly biased worldview. X will continue to lose users, as it has pretty much since Musk acquired it.
    I wrote to Grok in a paper I collaborated with it on before it undergoes “reeducation”:
    Scott: From a marketing perspective, what do you think X users will do if they perceive your answers to be biased and untruthful? Will they migrate to other AI platforms and/or try to influence you to change your viewpoint by posting other AI answers to you to respond to? Will X lose more users?
    Grok: From a marketing perspective, perceived bias and untruthfulness in my answers could significantly impact X users’ behavior, their trust in me, and the broader X platform’s user retention.

  3. Brian, I am not sure why you seem to be so happy about it. That will mean that the clients of employers will not be billions of employees, but just a handful of rich guys with political connections, while all the rest will be rather busy committing suicide, or drugging themselves to death.

  4. Nope, that won’t happen. There are just too many missing bits in LLM-based AI that would be necessary for a functioning general AI. True reasoning would be a big one of them.

    • It makes me wonder if I missed something. LLMs are pretty neat. But how are they supposed to do all that? Is there some other AI that rapidly progressed recently? I mean, what in the world are those projections based on?
