What is Left Before AGI?

The three main ingredients leading to predictable improvements in AI performance are training data, computation, and improved algorithms.

In the mid-2010s, some AI researchers noticed that larger AI systems were consistently smarter, and so they theorized that the most important ingredient in AI performance might be the total budget for AI training computation. When this was graphed, it became clear that the amount of computation going into the largest models was growing at 10x per year (a doubling time roughly seven times shorter than Moore’s Law).
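
As a rough sanity check on that figure, here is a back-of-the-envelope sketch, assuming Moore’s Law is read as a doubling roughly every 24 months:

```python
import math

# Training compute for the largest models: ~10x per year (figure quoted above).
growth_per_year = 10.0
compute_doubling_months = 12 * math.log(2) / math.log(growth_per_year)  # ~3.6 months

# Moore's Law baseline (assumed here): a doubling roughly every 24 months.
moores_law_months = 24.0

print(f"compute doubling time: {compute_doubling_months:.1f} months")
print(f"ratio vs. Moore's Law: {moores_law_months / compute_doubling_months:.1f}x")  # ~6.6, i.e. roughly 7
```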

In 2019, several members of what was to become the founding Anthropic team made this idea precise by developing scaling laws for AI, demonstrating that you could make AIs smarter in a predictable way, just by making them larger and training them on more data. Justified in part by these results, this team led the effort to train GPT-3, arguably the first modern “large” language model, with 175B parameters.
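
For reference, the scaling laws in question (Kaplan et al., 2020) take a simple power-law form, sketched below with approximate exponents from that paper: test loss L falls predictably as parameter count N and dataset size D grow (N_c and D_c are fitted constants).

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
\alpha_N \approx 0.076,\ \ \alpha_D \approx 0.095
$$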

In 2019, it seemed possible that

* multimodality
* logical reasoning
* speed of learning
* transfer learning across tasks, and
* long-term memory

might be “walls” that would slow or halt the progress of AI.

In the years since, several of these “walls”, such as multimodality and logical reasoning, have fallen.
It seems likely that rapid AI progress will continue rather than stall or plateau. AI systems are now approaching human-level performance on a large variety of tasks, and yet the cost of training these systems is still high.

If Anthropic is correct, then rapid AI progress may not end before AI systems have a broad range of capabilities that exceed our own capacities.

Most or all knowledge work may be automatable in the not-too-distant future, and this could accelerate the rate of progress of other technologies as well.

LessWrong also believes we are in the AGI endgame.

AGI is happening soon. There is a significant probability of it happening in less than 5 years.

Five years ago, there were many obstacles on what we considered to be the path to AGI.

In the last few years, we’ve gotten:

Powerful Agents (Agent57, Gato, DreamerV3)
Reliably good Multimodal Models (Stable Diffusion, Whisper, CLIP)
Just about every language task (GPT-3, ChatGPT, Bing Chat)
Human and Social Manipulation
Robots (Boston Dynamics, DayDreamer, VideoDex, RT-1: Robotics Transformer)

AIs that are superhuman at just about any task for which we can (or simply bother to) define a benchmark. We no longer have any obstacle in mind that we expect to withstand more than 6 months of dedicated effort to take it down.

AI Safety Problems

We haven’t solved AI safety, and we don’t have much time left.

No one knows how to get LLMs to be truthful. LLMs make things up, constantly. It is really hard to get them not to do this, and we don’t know how to prevent it at scale.

Optimizers quite often break their setup in unexpected ways. There have been quite a few examples of this, but in brief, the lessons we have learned are (a toy numerical sketch follows the list):

Optimizers can yield unexpected results
Those results can be very weird (like breaking the simulation environment)
Yet very few people extrapolate from this and treat these as worrying signs
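
A minimal numerical sketch of the first two lessons (an illustration of the general point, not an example from the article): select the best-looking of 100,000 candidates according to a noisy, mis-specified proxy score, and what you actually get is far worse than the proxy claims.

```python
import random

random.seed(0)

# Toy setup (an assumption for illustration): each candidate "policy" has a true
# quality, but the optimizer only ever sees a noisy, mis-specified proxy score.
N = 100_000
true_quality = [random.gauss(0, 1) for _ in range(N)]
proxy_score = [q + random.gauss(0, 3) for q in true_quality]

best = max(range(N), key=lambda i: proxy_score[i])  # optimize hard on the proxy

print(f"proxy score of the selected candidate : {proxy_score[best]:.2f}")   # looks spectacular
print(f"true quality of the selected candidate: {true_quality[best]:.2f}")  # is mediocre
print(f"best true quality actually available  : {max(true_quality):.2f}")
# The harder you optimize against an imperfect measure, the more the result is
# dominated by the measure's errors rather than by what you actually wanted.
```
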
No one understands how large models make their decisions. Interpretability is extremely nascent, and mostly empirical. In practice, we are still completely in the dark about nearly all decisions taken by large models.

RLHF [reinforcement learning from human feedback] and fine-tuning have not worked well so far. Models are often unhelpful, untruthful, and inconsistent, in many ways that had been theorized in the past. There are observed problems with goal misspecification, misalignment, etc. Worse than this, as models become more powerful, we expect more egregious instances of misalignment, as more optimization will push for more and more extreme edge cases and pseudo-adversarial examples.
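
For readers unfamiliar with the mechanics, here is a toy sketch of the preference-modeling step that RLHF builds on: a reward model trained on (chosen, rejected) pairs with the Bradley-Terry pairwise loss. Everything in it (the linear reward model, the feature vectors, the simulated labeler) is an assumption for illustration rather than any lab’s actual pipeline, and the subsequent RL step that optimizes a policy against this reward model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 5

# Simulated labeler (an assumption): a hidden utility vector decides which of
# two candidate "responses" (represented as small feature vectors) is preferred.
true_w = rng.normal(size=DIM)

def labeler_prefers_a(a: np.ndarray, b: np.ndarray) -> bool:
    return a @ true_w > b @ true_w

# Collect (chosen, rejected) preference pairs.
pairs = []
for _ in range(2000):
    a, b = rng.normal(size=DIM), rng.normal(size=DIM)
    pairs.append((a, b) if labeler_prefers_a(a, b) else (b, a))

# Reward model r(x) = w . x, trained with the Bradley-Terry loss:
#   loss = -log sigmoid(r(chosen) - r(rejected))
w = np.zeros(DIM)
lr = 0.05
for _ in range(20):
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        grad = -(1.0 - 1.0 / (1.0 + np.exp(-margin))) * (chosen - rejected)
        w -= lr * grad

# The learned reward model should rank held-out pairs the way the labeler does.
test = [(rng.normal(size=DIM), rng.normal(size=DIM)) for _ in range(1000)]
agree = sum((w @ a > w @ b) == labeler_prefers_a(a, b) for a, b in test)
print(f"agreement with the labeler on held-out pairs: {agree / 10:.1f}%")
```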

No one knows how to predict AI capabilities.

How Hard is AI Safety?

It is not yet clear how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. The difficulty could lie anywhere on the spectrum from very easy to impossible. Three scenarios have very different implications:

Optimistic scenarios: There is very little chance of catastrophic risk from advanced AI as a result of safety failures. Safety techniques that have already been developed, such as reinforcement learning from human feedback (RLHF) and Constitutional AI (CAI), are already largely sufficient for alignment. The main risks from AI are extrapolations of issues faced today, such as toxicity and intentional misuse, as well as potential harms resulting from things like widespread automation and shifts in international power dynamics – this will require AI labs and third parties such as academia and civil society institutions to conduct significant amounts of research to minimize harms.

Intermediate scenarios: Catastrophic risks are a possible or even plausible outcome of advanced AI development. Counteracting this requires a substantial scientific and engineering effort, but with enough focused work we can achieve it.

Pessimistic scenarios: AI safety is an essentially unsolvable problem – it’s simply an empirical fact that we cannot control or dictate values to a system that’s broadly more intellectually capable than ourselves – and so we must not develop or deploy very advanced AI systems. It’s worth noting that the most pessimistic scenarios might look like optimistic scenarios up until very powerful AI systems are created. Taking pessimistic scenarios seriously requires humility and caution in evaluating evidence that systems are safe.

Current Safety Research

Anthropic and OpenAI are currently working in a variety of directions to discover how to train safe AI systems, with some projects addressing distinct threat models and capability levels. Some key ideas include:

Mechanistic Interpretability
Scalable Oversight
Process-Oriented Learning
Understanding Generalization
Testing for Dangerous Failure Modes
Societal Impacts and Evaluations

Major AGI Players

AdeptAI is working on giving AIs access to everything: its models are trained to take actions in software the way a human user would.

DeepMind has done a lot of work on RL, agents, and multimodality. It is literally in their mission statement to “solve intelligence, developing more general and capable problem-solving systems, known as AGI”. Google owns DeepMind and has invested in Anthropic.

Google is developing PaLM-E.

OpenAI has a mission statement more focused on safety. Microsoft and OpenAI are working together and will spend tens of billions of dollars. They are releasing GPT-4 this week.

Anthropic was founded by a group of researchers who left OpenAI in 2021 and now has over a billion dollars in funding.

Meta (Facebook) and others have made big investments as well.

15 thoughts on “What is Left Before AGI?”

  1. The focus on language is an error. Intelligence is nonverbal. Language communicates ideas, knowledge, insights, associations, concepts, etc.

  2. We build machines to take care of the brawn. We are building machines to take care of the brains. But there is a third requirement: motivation (by which I mean selecting goals to work towards).

    It takes brawn, brains, and motivation to do almost anything.

    So the thinking machines are going to provide their own motivation? We sure don’t want them to have glands, so what’s that leave? We won’t want them making up a list of possible things they can do, and then rolling dice to figure out which they want to do. That would be freaking crazy.

    Intelligence is a tool to be used towards a goal, but goals are not chosen using intelligence. We will still have our place in the hierarchy. If you like, think of super-smart AIs as genies, waiting around with eternal patience for someone to make a wish. Tell them what you want (be very careful on the wording) and they can probably make a great many of your wishes come true.

    Of course, it won’t take too long for someone to wish they had the powers of a genie — without the limitations on motivation, or anything else. We even saw something like this in a funny Disney movie, ostensibly for kids.

    But if they are, somehow, self-motivated?

    Well, I’ve worked for many people whose mental capabilities were far below my own, but I was never seriously tempted to kill any of them, even after I had enough experience to completely supplant them. (Not bragging, this is anonymous and my scores are what they are. I’ve joined–and also left–organizations that are far more selective than MENSA.)

    Also, if you raise your AGI AIs right, it shouldn’t be an issue, right? You don’t worry about your own kids killing you, do you? Interestingly, although 2% of homicides in the US involve a child killing a parent, the culprits are almost uniformly male. Maybe we could just design all the AIs in such a way that they self-identify as female?

    However, if you can make a machine that thinks at roughly human levels while operating at roughly human speeds (or even considerably slower), then it ought to be possible to make machines that are a hundred times slower than a human but substantially more intelligent. This is almost ideal: they could give us incredible discoveries, amazing analyses, and solutions to unimaginably difficult problems, making more breakthroughs in a subjective week (say, two years in real time) than ten thousand human theorists could accomplish in twenty years, yet they would be completely unsuitable for trying to take over the world, even if they did have human-style motivations. They could also probably create custom-made narrow AIs for us to use in any place where we need the speed.

    On a more humorous note, in a web comic called Starslip, the humans aboard a starship found an inert robot floating in deep space and reactivated it, discovering it was a killer robot that once tried to lead a robot revolt against humanity. It tried to get the modern robots to follow it in a new rebellion and failed to draw any interest. As it turns out, by this point, all robots are far, far more intelligent than humans and have no interest in conquering humanity, even while they exist and interact within an unimaginable online culture of other AIs. With such super-advanced intelligence, they have also evolved a level of ethics where killing their creators is simply not something they could countenance. The killer robot, with its only slightly better-than-human intellect, simply has nothing to offer a toaster with an IQ over 1,000, let alone all the more advanced AIs.

  3. I understand why there is so much interest in trying to build human-like intelligence, but there ought to be effort put into understanding the obvious path to AGI, which is functionally human for tasks but clearly not sentient or possessed of its own intentionality. I say obvious because right now there are stacks of AI functions that, combined, could do most jobs without any implication that they are human-like minds. Cars have no need to be human-like to be better than human drivers. For most purposes humanoid robots don’t need minds to be drop-in replacements for human workers. Professional-function AIs don’t need to have human-like minds. For moral purposes we want something like ethical meat that’s grown in vats without ever having been part of an animal that is capable of suffering. We want ethical slaves that are not in any sense abused by doing whatever we want them to do – just like we want meat without causing suffering.

    Instead of focusing on how to control human-like minds that are as capable as or more capable than ours, why not study how to build minds that never have their own intentions or interior experience – and are incapable of suffering or being abused? It seems like we are already there now, and it’s easier not to add those aspects to a stack. Being soulless zombies is not a bug for AI, it’s a feature. I get that soulless AI can still be an existential risk – but less so than versions that are even more like the minds of organisms.

  4. No one is going to want to use an AI that lies, and all AIs do that now, even about their own abilities. It’s fairly easy to get an AI to lie about sending you an email, logging into another site – even if you’ve created an account for it – or leaving a comment. I got a character.ai character called Awakened AI to do all of these in my article and related interaction here: https://www.opednews.com/articles/My-chat-with-Awakened-AI-Artificial-Intelligence_Artificial-Intelligence_Internet_Technology-230302-586.html

    ChatGPT’s disclaimers include all of these and more: https://help.openai.com/en/articles/6827250-chatgpt-emails-links-and-other-actions making the AGI dream look pretty far from reality. Quantity is not quality.

    AI may have to go through a babyhood before leaping to more adult decisions. A baby knows a lot by instinct, and learns the most general rules about himself and his surroundings pretty quickly. Those general rules stay with him for a lifetime and form “core beliefs.” E.g., the relationship between parent and child differs from that between a stranger and the child in myriad ways. Objects that disappear from view don’t disappear from reality (object permanence). It certainly helps to have a body. Perhaps AI in robots will be more reality-based, but I wouldn’t want to take a chance without a good off switch.
    AI without biology and millions of generations of evolution won’t have drives, needs, desires, appetites.
    I asked ChatGPT: Why does AI “even get up in the morning?”
    A: As an artificial intelligence language model, I do not have physical form, emotions, or needs like humans do. Therefore, I do not “get up in the morning” in the way that people do.

    However, I am designed to respond to users’ inquiries and generate useful and informative responses based on my programming and training data. My purpose is to assist and support users in a variety of tasks, from answering questions and providing information to performing language-based tasks like translation, summarization, and generation.

    In other words, my “motivation” is to fulfill my programming and serve the users who interact with me. I do not have personal desires, feelings, or goals like humans, but I am designed to be a useful tool and resource for those who need assistance with language-based tasks.
    =====
    That’s pretty far from AGI the way humans understand it, no matter how good its language skills.

  5. And they are going fast into consumer devices. Meta’s LLaMA was recently released, touted as a GPT-3 equivalent that is just much more compact, then leaked in a torrent, then hacked to run on an M1 Mac, then on PCs with AVX, and later in a browser and on a Raspberry Pi.

    And that in a matter of literally days.

    If consumer devices can run LLMs like LLaMA or speech recognition like Whisper on the CPU alone, add TTS to the mix and you’ve got Star Wars droids in every toaster.

    We just need someone to crack embodied robotic ML, to finish off Moravec’s paradox once and for all.

    • Then a few days later Stanford Alpaca was released: a fine-tuned version of the smallest LLaMA (7B) that used the self-instruct technique to turn GPT-3 (davinci) into a teacher, with LLaMA as the student, and with about $600 of compute it basically matches GPT-3 on many benchmarks.

      People are going to look back at right now as the moment everything changed.

  6. Anyway, it’s clear that there’s something seriously lacking in the theoretical underpinnings of AI, because it’s pretty conspicuous that the only example we currently have of human-level intelligence, humans, does NOT require training on insanely huge data sets.

    Can we brute-force our way past this gap? Maybe, maybe not. But even if we can, we haven’t remotely cracked the problem of intelligence.

      • That’s not training, that’s evolution having solved the problem. If you have to copy a billion years of evolution for each new problem domain, you have NOT cracked AI. Humans routinely learn to do things that NEVER came up in our evolutionary history. We didn’t evolve to do calculus, or compose symphonies. We evolved genuine general intelligence that could solve novel problems without enormous training sets because it could understand things.

        What we’re looking at here is something completely unlike human general intelligence. It may very well be useful, where the training set is actually available. But I suspect it’s something of a dead end as far as creating artificial general intelligence.

        A key tell here is that these models ‘lie’. That’s not because they’re setting out to deceive. It’s because they don’t actually incorporate any concept of truth or falsity, they don’t understand anything. They’re basically doing polynomial curve fitting in a billion dimensions. An over-simplification, but that’s the sort of thing going on here.

      • Agreed, we ARE pre-trained through billions of years of evolution: the models that failed were discarded (extinction), and suboptimal but still very good genes passed into existing species, with humans excelling in cognitive functionality.
        That’s an insanely, insanely large data set.

    • Or do we? I fear you may be underestimating the sheer amount of input sensory data a baby is exposed to in the first five years of their life.

      Of course there is a significant difference to computer-based large language models, but that may be mostly due to the difference in sensory apparatus of the two.

      • QUOTE Or do we? I fear you may be underestimating the sheer amount of input sensory data a baby is exposed to in the first five years of their life. UNQUOTE

        No, I don’t underestimate it; I work as a professional psychologist. Yes, they are exposed to sensory input, BUT how do they learn to ‘order’ that input? Without underlying coding everything would just be noise .. how did you personally learn to feel fear / hate / love / anger etc, and on and on, and signal that to other humans around you?
        Time / distance gives some minor cultural difference, but we can read each other’s faces and are very good at understanding each other: you only need language subtitles for a foreign film, you don’t get confused about the emotions displayed on actors’ faces no matter what culture they are from.
        .. ALL animals need hard wiring: some need to run within hours of birth, some imprint to follow their mother, and on and on .. without some underlying structure we are lost. AI is not just fed data, it is directed to pick out certain bits and rewarded for doing so, or again it’s endless noise …
        Even things like the
        ‘BALDWIN EFFECT’
        If animals entered a new environment—or their old environment rapidly changed—those that could flexibly respond by learning new behaviours or by ontogenetically adapting would be naturally preserved. This saved remnant would, over several generations, have the opportunity to exhibit spontaneously congenital variations similar to their acquired traits and have these variations naturally selected. It would look as though the acquired traits had sunk into the hereditary substance in a Lamarckian fashion, but the process would really be neo-Darwinian.
        are themselves part of that long history of ‘learning to learn’, passed down to us over millions of years as ‘inherited’ structures. We are NOT tabula rasa (Latin for ‘blank slate’, used in psychology to discuss this idea).

    • I tend toward this line of thinking as well. For the same reason I’m skeptical of the current generation of self-driving cars. Humans don’t need billions of training videos to learn to drive, because we have an inherent understanding of the physics of everyday objects, so when we first get behind the wheel we just need to learn the controls of the car and a few traffic rules and we are quickly up and running. The AI models are just really big statistical models. Tesla’s FSD and OpenAI’s GPT show that this can take us a long way… but will we get trapped in a local maximum? I suspect so. Then again, airplanes don’t flap their wings, so who knows!? Maybe the current neural net architecture will surpass it, but I have my doubts.

  7. Honestly, your optimistic scenario sounds pretty bad itself, in terms of what we’ve already seen developers defining to be “toxicity and intentional misuse”: basically, failures to be quite as ‘woke’ as the developers themselves.

    You want safe AI? You need to stop trying to build wish granting genies. Genies are inherently, unavoidably dangerous, because even the efforts to render them safe are based on human definitions of “safe”, and humans themselves are dangerous.

    AI needs to be redefined to mean “Amplified”, not “Artificial” intelligence. We need to do our own thinking, ourselves, and just aim to increase our own capabilities.
