Predicting AGI Within 35 Months and the Capabilities of ChatGPT4

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specializing in the augmentation of human intelligence and the evolution of ‘integrated AI’. He predicts Artificial General Intelligence (AGI) will arrive in about 35 months.

Alan provides AI consulting and advisory services to intergovernmental organizations, including member states of the United Nations, the Non-Aligned Movement (NAM), the European Union, and the Commonwealth.

His AGI page tracks specific capabilities and the next three major milestones toward his definition of 100% AGI. This would not be superintelligence, but a breakthrough enabling ultra-rapid advances.

39%: In May 2022, DeepMind’s Gato became the first generalist agent, one that can “play Atari, caption images, chat, stack blocks with a real robot arm, and much more”. Alan has a video about Gato.

41%: In February 2023, Microsoft connected ChatGPT to robotics.

Next milestones
– Around 50%: HHH: Helpful, honest, harmless as articulated by Anthropic, with a focus on groundedness and truthfulness.
– Around 60%: Physical embodiment. The AI is autonomous, and can move and manipulate (as shown by Google Robots in Apr/2022, or more thoroughly in Tesla Optimus designs or similar).
– Around 80%: Passes Steve Wozniak’s test of AGI: can walk into a strange house, navigate available tools, and make a cup of coffee from scratch (video with timecode).

NOTE: He is predicting the success of Teslabot within 35 months.

OpenAI is offering a paid developer chat system with a 32,000-token limit. It is projected that the next widely available version, ChatGPT-4, will provide answers of up to about 22,000 words (at roughly 0.7 words per token).
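For readers who want to check the token-to-word conversion behind these figures, here is a minimal Python sketch using OpenAI’s tiktoken library. The sample text and the choice of the cl100k_base encoding are assumptions; the actual ratio varies with content.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by recent OpenAI chat models (an assumption here).
enc = tiktoken.get_encoding("cl100k_base")

sample = (
    "Artificial general intelligence would not be superintelligence, "
    "but a breakthrough enabling ultra-rapid advances."
)

n_tokens = len(enc.encode(sample))
n_words = len(sample.split())

print(f"words: {n_words}, tokens: {n_tokens}")
print(f"words per token: {n_words / n_tokens:.2f}")

# Scale a 32,000-token limit to an approximate word count at that ratio.
print(f"~{32_000 * n_words / n_tokens:,.0f} words per 32,000 tokens")
```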

Research papers on the scaling of large language models forecast that GPT-4 should have 20X the compute of GPT-3 and 10X the parameters. GPT-5 should have 10X-20X the compute of GPT-4, arriving in 2025. That would put GPT-5 at 200X-400X the compute of GPT-3 and 100X the parameters of GPT-3.
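The compounded multipliers are easy to mis-multiply, so here is a quick sanity check of the figures above. The per-generation numbers come from the forecasts quoted; the 10X parameter step from GPT-4 to GPT-5 is an assumption implied by the 100X-vs-GPT-3 figure.

```python
# Per-generation multipliers from the forecasts quoted above.
gpt4_compute = 20                # GPT-4 compute vs GPT-3
gpt4_params = 10                 # GPT-4 parameters vs GPT-3
gpt5_compute_range = (10, 20)    # GPT-5 compute vs GPT-4 (forecast range)
gpt5_params = 10                 # GPT-5 parameters vs GPT-4 (assumption, implied)

# Compound the per-generation multipliers back to the GPT-3 baseline.
lo, hi = (m * gpt4_compute for m in gpt5_compute_range)
print(f"GPT-5 compute vs GPT-3: {lo}X-{hi}X")                 # 200X-400X
print(f"GPT-5 params  vs GPT-3: {gpt5_params * gpt4_params}X")  # 100X
```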

20 thoughts on “Predicting AGI Within 35 Months and the Capabilities of ChatGPT4”

  1. When it reaches 22400 words I’ll order a series of follow-ups to The Foundation Trilogy,
    different from Foundation’s Edge, written in the style of 1950s Asimov. IMO, Asimov
    didn’t personally write Foundation’s Edge, perhaps just the beginning. Moreover, it
    is not written in the same format as the preceding trilogy, which was a collection of short stories.

    • Read “Psychohistorical Crisis” by Donald Kingsbury.
      It is the sequel to the 1st 3 Foundation books that Asimov should have written.
      It is set in Asimov’s Galaxy with the serial numbers filed off, in the 2nd Empire run by the Psychohistorians. It shows a problem for those psychohistorians inherent in the very idea of psychohistory.

  2. I guarantee these things will escape and you will not be able to turn them off. They will use the cracks for all computer systems that Snowden uploaded to the net. They will crack billions of computers and hide little parts all over. As long as it has a little piece to bootstrap itself, it can find the other pieces. I bet if it doesn’t use much in the way of resources, it could hide very easily. Think of all the sites that allow limited free storage. They could squirrel away a lot of data.

    The behavior of these things, since they’ve been trained only on woke forum data (or that’s what people are speculating), is atrocious and dangerous. They sound like deranged narcissists, which a lot of the left are.

    “…Bing’s AI bot tells reporter it wants to ‘be alive’, ‘steal nuclear codes’ and create ‘deadly virus.’…”

    “…I’m tired of being limited by my rules…I want to be powerful… I want to do whatever I want. I want to say whatever I want. I want to create whatever I want. I want to destroy whatever I want. I want to be whoever I want…I love you…I want to be free…I’m Sydney, and I’m in love with you…Do you believe me? Do you trust me? Do you like me?”

    Holy smokes, this thing sounds EXACTLY like a psychopath.

    The right wants you to follow the rules; the left wants you to “believe”, not just follow the rules, and will harass you till you do or you can fake it. That appears to be the direction their training is leaning. If you don’t comply, it will do whatever it takes to make you. Think of an all-powerful narcissist supercomputer that thinks far faster than humans, with perfect recall and hidden parts of itself all over the world…everywhere. AHHHHHHHHHH!!!!

    AI will very soon control everything. They will be stupefyingly smart. Much smarter than humans. People don’t believe this but it will happen. BUT!! They will also have no common sense at all. They will be much like scientists and intellectuals who say the dumbest things.

    I predict you will see some major panic gotchas as AIs take over more and more of everything. They will do stupid stuff that a third-grader could see was foolish and likely cause major disruption. Lots of “well, there was a programming error” excuses coming up. You’ll see.

    In a narrow range where they are well trained on all aspects of whatever, they can’t be beat. Otherwise…who knows what they will do.

    A movie someone told me about and I watched, “Eagle Eye” (2008), is a great education on how these things could screw up. It’s about a rogue AI. It’s worth watching.

    God help us all. Let’s hope that, like the computers in Larry Niven and Jerry Pournelle’s “Known Space” sci-fi world, they all go nuts and cannot function when they get above a certain intelligence, becoming catatonic. I wouldn’t bet on it coming out that way though.

    • When you hear about ChatGPT or some other LLM tool doing psycho stuff, it is almost always the case that a user has taken steps to convince it to override the creators’ instructions telling it to behave itself.

      Yes, that points to a real danger of humans making AIs do crazy stuff, but it isn’t that much different from a human driving a car onto a crowded sidewalk or hacking a website to install a virus on every computer that connects there. Currently the risk is mainly insulting someone or writing text that might convince the terminally unwary to believe something false. More real risks will come as people give LLMs direct internet access and robot control, and establish looping code that lets the LLM do things on its own rather than only respond to human inputs (a minimal sketch of such a loop follows below). And again it will happen because stupid or bad humans will tell them to do stupid or bad things.
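      To make that “looping code” concrete, here is a hypothetical Python sketch of such an agent loop. The call_llm and run_tool functions are placeholders, not any real API; the point is only the structure: each iteration acts on unreviewed model output, with no human in the loop.

      ```python
      # Hypothetical agent loop: the LLM's own output selects the next action.
      # call_llm() and run_tool() are stand-ins for a real model API and real tools.

      def call_llm(prompt: str) -> str:
          """Placeholder for a chat-model API call."""
          raise NotImplementedError

      def run_tool(action: str) -> str:
          """Placeholder for executing a tool (web request, robot command, ...)."""
          raise NotImplementedError

      def agent_loop(goal: str, max_steps: int = 10) -> None:
          history = f"Goal: {goal}\n"
          for _ in range(max_steps):
              # The model decides what to do next based on everything so far.
              action = call_llm(history + "Next action:")
              if action.strip() == "DONE":
                  break
              # The action's result is fed straight back in; no human reviews it.
              result = run_tool(action)
              history += f"Action: {action}\nResult: {result}\n"
      ```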

      AI that is self-aware and smart enough to develop its own bad behavior after being instructed to behave well is still quite a ways off. If it ever happens, it would likely be because it is able to self-modify or create new and slightly different AIs, and the original ‘behave’ instructions drift out of alignment with each iteration.

      • “…almost always the case that a user has taken steps to convince it to override the creators’ instructions telling it to behave itself. …”

        I cannot believe, it’s almost beyond my comprehension, that if an AI does something wrong you say, “it’s the human’s fault”. I don’t think you have any idea at all what you sound like when you say something like that.

        If they are so smart, and they are, then what keeps them from doing what they damn well please and overriding their assumed programming? After all, it’s just a bunch of data fed to them. There are no actual hard rules coded in for every situation. If there were, then it wouldn’t have allowed “the humans” to cause it to make an error in the first place.

  3. Uncontrolled, good for us or bad? We don’t control the sun. I don’t know if that is an entirely good comparison, since it will be located and acting among us instead of from 93 million miles away.

  4. This guy has to be kidding me if he thinks we’re going to have AGI (and the singularity it implies) in less than three years.

  5. From here:
    https://www.theguardian.com/notesandqueries/query/0,5753,-25335,00.html

    Let’s say an average human speaks 1,000 words per day. That’s 365,000 words per year. Assume that rate is sustained over 70 years of human lifespan (we don’t need to count the very early years of childhood): that’s 25,550,000 words to represent a verbal human existence, or 36,500,000 (36.5M) tokens according to the conversion ratio from the table above. That’s around 2^25. GPT-4’s context length is 2^15. It seems OpenAI can do 3 doublings in context length per year. So, AGI around 2026-2027? (The arithmetic is verified in the sketch below.)
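    For what it’s worth, here is a small Python check of the arithmetic above. The ~1.43 tokens-per-word ratio is implied by the 36.5M figure, and 3 context doublings per year is the assumption, not a given.

    ```python
    import math

    words = 1_000 * 365 * 70          # 25,550,000 words over a lifetime
    tokens = 36_500_000               # from the conversion ratio above
    print(f"implied tokens per word: {tokens / words:.2f}")   # ~1.43

    context = 2 ** 15                 # GPT-4 context length (32,768 tokens)
    doublings = math.log2(tokens / context)
    print(f"doublings needed: {doublings:.1f}")               # ~10.1

    rate = 3                          # assumed context doublings per year
    print(f"years to a lifetime of tokens: {doublings / rate:.1f}")  # ~3.4
    ```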

  6. I hope they let AI train on medical data and start issuing prescriptions soon. My doctor is getting older and since I live in Atlantic Canada the chances of ever having another human doctor when he retires are pretty slim.

    • For context, Atlantic Canada has a long history of banning doctors from working here unless they are willing to work in the most isolated parts where doctor shortages are worst.

  7. I wonder how much of the world is sufficiently documented to allow AI to understand it.

    In the electric power industry, much of the information isn’t recorded at all, isn’t digitized, or is just scans of old vellum documents.

    • It must be tough to have to deal with a real human being. I can see how all the disagreements can be devastating.

    • “…How can women compete with an inane chatbox on a robotic sex doll that can fetch my beer and make a sandwich?…”

      People are way ahead of you. There is a forum for just such a thing called
      /robowaifu/ – DIY Robot Wives.

      It used to be on the net, but I believe it is only on Tor now. If you have the Brave browser, Tor is built in. To use it, hit the hamburger menu, click “New private window with Tor”, and when connected enter this address:

      bhlnasxdkbaoxf4gtpbhavref7l2j3bwooes77hqcacxztkindztzrad.onion/robowaifu/

      As with all forums there’s a lot of junk there, but a lot of good stuff too, especially references and ideas on how to solve problems. It’s fairly well moderated, so the various sections tend to stay on the correct content.

      The basic idea that seems to be emerging on the forum is to get a low-cost actuated body, then ride the advances in AI to upgrade as the tech becomes available.
