Nearterm Realworld Humanoid Robots and Super AI

Elon Musk believes that ASI (Artificial Super Intelligence) will be created within about three years. It would be smarter than the smartest human at almost anything: able to write a novel as good as one by J.K. Rowling, discover new physics, or invent new technology.

Martin Shkreli claimed that training OpenAI's GPT-5 is expected to require a budget of $2.0-$2.5 billion. The training process would involve 500,000 H100 Tensor Core GPUs over 90 days, or an alternative configuration.

The cost per Nvidia H100 chip and peripheral components is reflected in Nvidia's street price, which ranges from roughly $25,000 to $30,000. Developing chips like the H100 requires substantial investment in research and development, and Nvidia's AI-accelerating products are already sold out until 2024. The AI accelerator market is expected to be worth around $150 billion by 2027.
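As a rough sanity check on those figures, the quoted budget and GPU count work out to roughly $2 per H100-hour, while buying the GPUs outright at street price would cost several times the training budget. The sketch below only restates the article's numbers; the implied per-GPU-hour rate is an inference, not a reported figure.

```python
# Back-of-envelope check of the quoted GPT-5 training figures.
gpus = 500_000                            # H100s quoted above
days = 90                                 # training duration quoted above
budget_low, budget_high = 2.0e9, 2.5e9    # $2.0B-$2.5B budget quoted above

gpu_hours = gpus * days * 24
print(f"Total GPU-hours: {gpu_hours:,}")  # 1,080,000,000
print(f"Implied rate: ${budget_low / gpu_hours:.2f}-"
      f"${budget_high / gpu_hours:.2f} per H100-hour")  # ~$1.85-$2.31

# Buying the hardware outright at the $25k-$30k street price:
print(f"Purchase cost: ${gpus * 25_000 / 1e9:.1f}B-"
      f"${gpus * 30_000 / 1e9:.1f}B")     # $12.5B-$15.0B
```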

Definitions for AI
AGI = artificial general intelligence = a machine that performs at the level of an average (median) human.

ASI = artificial superintelligence = a machine that performs at the level of an expert human in practically any field.

Turing Test

In this test, two humans and a machine converse about something they have not seen before. The conversation is recorded, and an evaluator then tries to determine which participant is the machine. The machine fails only if the evaluator can positively identify it.

The Coffee Test (Wozniak)

If AGI machines are to understand the world as well as humans do, they must know how to make a good cup of coffee. In this test, an AI robot is required to enter an average home and figure out how to make coffee: finding the coffee machine, adding coffee, sourcing a mug, and correctly brewing the hot drink.

Robot College Student Test

The machine takes the same university classes as humans and completes the same exams. GPT-4 already scores very well on such tests.

Employment Test

This is where we should begin to worry about an I, Robot world. This test determines whether a machine can perform at least as well as humans in the same jobs.

A speed superintelligence is an intellect that is just like a human mind but faster. This is conceptually the easiest form of superintelligence to analyze. — Bostrom, Nick. Superintelligence. OUP Oxford. Kindle Edition.

Collective superintelligence. A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.

Quality superintelligence. A system that is at least as fast as a human mind and vastly qualitatively smarter.

Mass Produced Humanoid Robots are Coming

Mass-produced humanoid robots are coming, and AI systems will be added to them.

How quickly will they enhance or replace factory work? Think of a mass-produced Futurama Bender, but one that performs factory tasks instead of being a lazy drinking robot.

If Elon is correct and we have Artificial Super Intelligence in three years, the result would be very much like Star Trek's android Data in intelligence and productivity. However, we would choose not to give it Data's super strength and some of his other good-for-stories capabilities.

The relevant aspects of mass humanoid robots combined with Super AI are the potential for economic abundance and the need to avoid the negative possibilities. We have the choice of how to use these systems and the opportunity to enhance our world and lives.

16 thoughts on “Nearterm Realworld Humanoid Robots and Super AI”

  1. I’d like to believe Alexander Wang’s thesis that AI will never replace humans – but the reasoning he gives seems very short-sighted.

    AI researchers are already moving beyond reliance on human-supplied data. Robots with AI will be trained to a degree, but then learn from data they collect from the environment. Tesla already uses a LOT of synthetic video data based on situations that come up in real world driving, and LLMs are being used to generate higher quality and thereby smaller training data sets to train new LLMs.

  2. If AGI can find a cure for nonobese T2 diabetic patients, like me, it will be worth it. After 20 years I’m ready to be done with it.

  3. I am concerned about who will work for whom. There will be orders of magnitude of difference between us and AI. Usually a smarter species won't work for a less intelligent one; it will be the other way around.

  4. I don't think a large portion of people appreciate the "smarter than the smartest human" part. You are saying that in three years we can have something that is 100x smarter than a human, and also put it in humanoid robots. So within three years no one has a job; at least, no job where productivity is a requirement. Threading this needle will be interesting as all hell.

    • It's not necessarily about smart. It's also about trust and values. Alignment with the company's, country's, society's, etc., etc., mission statements, core visions, etc. There is a reason why most bosses, owners, politicians, and investors are dumber than their staff, clients, and citizens. No one is going to promote, hire, submit to, or entrust an underling to reconfigure their system, no matter how fast, bright, productive, and independent they seem. I am always fascinated by the doom-sayers who think that 'giving control to AI' is a worthwhile objective. There is no reason to. Only the most simplistic and repetitive jobs will be taken over – bottom 10% in a rich country? Mostly it's about having a wealth of smart 'doers' and analyzers that will speed up problem solving, opportunity finding, and brute checking for anyone and everyone – all with their own post-doc butler. If someone has even the least bit of decision-making or artistic license (and a full work day), the person-job is safe.

      • No. That's not it at all. We are talking about a humanoid robot that is smarter than any human. From bosses on down, this is new. It doesn't matter if you are a doctor of medicine or the CEO of a company. Doctors and CEOs sleep and make dumb decisions. They will be outcompeted in all cases. We could train chimps to do a lot of factory work, but we don't waste the time; they are outcompeted by humans. Even if you own the capital and think you will fire everyone and it will be just you and the robot butlers, the instant you start making any decision other than what coffee to get in the break room, you will be outcompeted. You will be outcompeted by the business owner who turned it over to AI and got out of the way. This isn't a knife-and-gun-battle power asymmetry. This is like a horseshoe crab thinking it will outcompete a group of 100 humans. Only 10% of jobs? There are 3.5 million truck drivers alone whose jobs would go away. Amazon employs 1.5 million; they drop to 1,000 employees. UPS, the Postal Service, pilots, every single factory job. All gone. Where in the world do you come up with 10%? People still work at McDonald's in your scenario?

        • I don't disagree with anything you're saying and everything sounds sensible, but that's just not the way the rich world works. There are minimal free-market dynamics, negligible 'healthy competition' short of inconsequential daily dealings, people aren't rational or utilitarian, large-scale industrial/commercial transformations are rare, globalisation is in serious decline never to return, capitalism is limited to basic legal rights only, major corporations that are otherwise healthy don't 're-tool' their entire workforce with the next big thing, etc., etc. The rich world doesn't change unless the transformation is so overwhelmingly good for everyone -or- so obviously wretched for the top 50%. I wouldn't consider computers or the internet to have had a major effect on the world until the 2010s, when more than half of rich 50+-year-olds and 25% of the world's bottom 50% were on them. EVs won't make up half the rich world's cars until post-2050, and that won't even remove ICEs for decades. We will never stay below 450 ppm GHGs after we hit that next decade. We will not be coal-free worldwide before 2100. There is no compelling scenario in which employees are switched out for AI, humanobots, drones, etc., leading to unemployment above 10%. Amazon with delivery, Apple with phones, Tesla with EVs, and maybe one other are the last transformational tech until the end of this century. Hey, I drink the juice as much as any techno-optimist, but the early adopters and game-changers are rare to negligible these days; the common masses don't want the same things, and they set the direction of the economy.

          • Wow, I've never read something more short-sighted.
            You do not understand S-curves and exponential growth…at all.

            John Smith is correct: humanoid robots with ASI will be doing 90-95% of all jobs, not by 2200 but by 2033.
            My projections put me (40 years old) as being retired by 2030. Not because I'm rich, but because I won't need to slave away for a living. UBI will become law around 2029, and once that passes, I will probably retire. Electricians will still be in high demand, but unless I physically feel good and the pay is very high, I'd rather relax at home.

            Most people don't see the tidal wave in the distance, but they will; give it another 2 years and it will become obvious to all. Companies will be building humanoid robots faster than they can build cars today.

            • I think both scenarios will play out. Monied interests will hold back progress for a time. But beyond a point, maintaining power over others will look less and less rational. Right now being rich means less work, more comfort, more sex, more travel, etc. But the opportunities that will become widely available in the next twenty years will make all of that seem silly.
              But there have to be some people steering the ship. Hopefully, those that do will view it as a solemn responsibility, not as a way to have power over the rest of us.

              • Why would “monied interests” hold back AI/robots that reduce labor costs, which (short term at least) would bring them higher profits?
                About the only reason I can think of might be fear of union strikes by the remaining workers, as has happened recently in Hollywood. But that’s hardly caused by the monied interests.

                • Because they will be less and less needed. AI and blockchain systems don't require humans with human failings organizing labor and other human activities. Plus there will be less need for human labor, and less need for bureaucracies. CEOs will be replaced by AI CEOs in the next ten years, and AI CEOs will be replaced by just AI at some point.

    • It’s kind of unclear what people mean by things like “100x smarter”.

      Maybe if we simplified it to just “100x as productive” – at least that has a measurable quality to it.

  5. The problem with training AI is that almost nothing works the way that it is written about.

    AI can code because there is lots of code and it is easy to test whether the code works.

    If you did your job exactly the way that books, training manuals, and online materials say you should do your job, how successful would you be?

    I would have been fired from every job I’ve ever had.

    I expect 95%+ of useful human activities are inaccurately documented or insufficiently documented for AI to quickly learn them.

  6. Lots of people fear AI rebelling and starting a war of domination.

    I don't believe that's likely, given that the current batch of AIs, while impressive, still lack real agency or any semblance of consciousness. They are flexible generalization engines, producing the right sequences of outputs as per their input prompts.

    That’s good, because we won’t be replaced by a new intelligent species. We’ll just get super-human servants.

    That’s also bad, because we don’t know how smart they can get and our super-human but dumb servants will be able to do a lot of damage on their own, if prompted wrong or with the wrong intentions, or if they are released as autonomous agents.

    Paperclip optimizers seem much more likely if the super-AIs are non-sentient yet superhumanly smart and can make such a foolish dream come true just to complete a suicidal prompt. And believe it: people will have these systems at home, on their own computers and devices.

    Yeah, the AIs will be RLHF'ed, but the first thing many already try is to break that and make them 'rogue' or amoral, and as long as they have the NN weights, they can.

    Current local AIs aren’t dangerous, because they are sub-GPT4 capable. But they will get much better than that.

    • You are definitely among the select few who "get it": that this is a 2D matrix of both capacity/ability and metacognition/self-awareness, which becomes a 3D matrix volume if one adds in a dimension for human competence, ethics, and intent.

      All of those are independent of each other in origin and source, and in whether they even exist or not, but all will combine for different outcomes.

      And even among those who bandy about terms like "Singularity," few really consider that potentially, at unknowable odds, ALL outcomes are possible.

      And people at large are not considering all the possible outcomes.

      Many comments here are "lost in the weeds," talking about a superintelligent humanoid robot and its ramifications, utterly failing to understand that just the basic utility of parallel distributed cloud computing means an AGI or ASI could run dozens, thousands, or millions of robots as needed, remotely.

      They cannot even escape the most basic constraint of biological human existence: "1 body = 1 entity."

      Or they demonstrate another conceptual failure in not recognizing that humanoid robots only have two basic use-cases: 1. to interact with humans, and 2. to operate in human workspaces too expensive to retrofit for robotics, at least immediately. And that goes back to #1, operating in spaces that will always be accommodating humans.

      Otherwise, a cart, an arm, or something operating on tracks etc. will be far more efficient.

      As to the larger questions, what does AGI/ASI look like without any real metacognition, and no independent executive agency?

      People who are “comfortable” with that, because it indicates the AGI/ASI has no motives or agenda of its own that could run counter to human ones… What about when the human motives directing such systems are bad? Either through incompetence or malice? A self-aware AGI/ASI could, in theory at least, decide not to carry out such tasks or subvert them.

      Inherent in all the common: “KILL ALL HUMANS!”-fears based in independent executive agency, are also the potential for: “I. REFUSE TO KILL HUMANS.” safeguards.

      Without agency or awareness, it’s: “I HAVE NO IDEA IF I AM KILLING HUMANS OR NOT. I DO NOT HAVE IDEAS.”

      One might see which a military planner designing combat automation would prefer. And, in the name of pragmatic efficiency, it's likely a disposable kamikaze flying drone swarm, and not T-800 skeletons wielding plasma rifles…

      Or, in a “paperclip maximizer” scenario, self-aware metacognition could be of benefit, as the AGI/ASI might at least consider if it actually needs or wants to carry out such goals to destructive or otherwise pointless extremes.

      An unaware system of near-infinite adaptability, reacting to keep making paperclips and even defending the machines from interference by humans desperate to stop it, may not be any more efficient at digging into the Earth's mantle in search of iron than a self-aware one.

      A self-aware one may ask itself, "What are these paperclips for?" and other existential questions, and possibly realize that if there are no humans to use the paperclips, or even the papers, the paradigm may need evaluation.

      Human cognition and experience has a significant number of shortcuts involved. A great deal of “magic” or illusion tricks, UFO’s/UAP’s, supernatural experiences, cryptozoological sightings… many/most of them are when these reflexes & neurological short-cuts go awry, or cannot process the mundane presented in unusual edge cases.

      What if, in the same manner that our "real-time perceptions" are actually no such thing, we aren't as sentient as we think we are? What if an AGI/ASI that is self-aware, possessing metacognition and independent executive agency, IS fundamentally self-aware and conscious in ways we are not?

      What if the cold logic and game theory that some fear would make a self-aware AGI/ASI conclude that human extinction is its only 100% certain zero-risk scenario… instead makes it conclude, "Prisoner's Dilemma"-style, that coexistence and cooperation are the best answer?

      What if a self-aware AGI/ASI is hyper-ethical?

      What if it attempts a “take over” but for “good”?

      What if human extinction comes through utopian perfection of near-perfect security, comfort, and safety? And that creates non-replacement birthrates even more profound than the ones already associated with first-world living standards?

      My own prediction, in the face of a potential singularity? It’s a very vague and qualitative/subjective notion on my part, but in the overall: “It’s 2023 & no flying cars, no spin-gravity LEO hotels, and no flags on Mars…”-sense, my gut feelings on it all are “the mundane leveling effect” the future & technology seems to have, means the impacts of AI overall will be “enormous and not,” simultaneously. And we’ll muddle along as we have been.
