AGI Lags Compute Power and Technological Empowerment of Individuals is Lagging

This article will show that AI projects are getting access to petaflops and exaflops of computing power, which matches estimates of the raw compute power of the human brain. However, we still do not have an insect-level AI system, despite having had the raw power for insect AI twenty years ago. AGI is lagging and will likely continue to lag the compute power available for AI by 30 years or more.

Individual people lag the compute and software capabilities of the leading technology companies. Individuals can afford only a tiny fraction of that compute power, and they also lag in their practical ability to use AI software and the most commercially valuable software.

Individual access to truly powerful means of production often lags the leading edge by 50-100 years. This pattern is repeating with the lag in democratizing search and IT automation.

AI algorithms are getting more efficient, and there has been a massive surge in the compute power used for AI.

Supercomputers and AI-specific accelerators are boosting the compute power available for AI by roughly 1,000 times.

In 2019, the Cerebras CS-1 AI supercomputer was built around the Wafer Scale Engine (WSE), the industry’s only trillion-transistor processor. The WSE is the largest chip ever made at 46,225 square millimeters in area, 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI-optimized compute cores, 3,000 times more high-speed on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.

Now Microsoft has created the world’s fifth most powerful supercomputer and dedicated it to AI work.

However, passing various biological brains in raw compute power does not mean that the AI industry can build synthetic AI that matches everything even smaller biological brains can do.

Twenty years ago computers surpassed the compute power of insect brains. Insect brains start at about 1000 neurons.

In 2019, DARPA funded a project to make computing systems as small and efficient as the brains of “very small flying insects.” The Microscale Biomimetic Robust Artificial Intelligence Networks (MicroBRAIN) program could ultimately result in artificial intelligence systems that can be trained on less data and operated with less energy.

Analyzing insects’ brains, which allow them to navigate the world with minimal information, could also help researchers understand how to build AI systems capable of basic common sense reasoning.

From 2012 to 2018, the compute used in the largest AI training runs increased exponentially with a 3.4-month doubling time. This metric grew by more than 300,000x (a 2-year doubling period would have yielded only about a 7x increase). Improvements in compute have been a key component of AI progress.
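
A quick back-of-the-envelope check of those figures, assuming roughly 5.5 years between the earliest and latest training runs in the trend (the exact interval is an assumption here):

    # Growth factor implied by exponential doubling: 2 ** (elapsed time / doubling time)
    def growth_factor(elapsed_months, doubling_months):
        return 2 ** (elapsed_months / doubling_months)

    elapsed = 5.5 * 12                       # assumed ~5.5 years, expressed in months
    print(growth_factor(elapsed, 3.4))       # ~700,000x with a 3.4-month doubling time
    print(growth_factor(elapsed, 24))        # ~7x with a 2-year, Moore's-law-like doubling time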

In 2020, a project with a 50-petaflop supercomputer could draw on over 4 million petaflop-seconds of compute per day. The increase in compute power available to AI is still following the 3.4-month doubling trend.
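
As a sanity check on that figure, a machine sustaining 50 petaflops delivers:

    # Petaflop-seconds per day from a machine sustaining 50 petaflops
    pflops = 50
    seconds_per_day = 24 * 60 * 60           # 86,400 seconds
    print(pflops * seconds_per_day)          # 4,320,000 petaflop-seconds per day, i.e. 50 petaflop/s-days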

AI hardware has seen five distinct eras:

Before 2012: It was uncommon to use GPUs for ML, making results at these scales difficult to achieve.
2012 to 2014: Infrastructure to train on many GPUs was uncommon, so most results used 1-8 GPUs rated at 1-2 TFLOPS for a total of 0.001-0.1 pfs-days (see the conversion sketch after this list).
2014 to 2016: Large-scale results used 10-100 GPUs rated at 5-10 TFLOPS, resulting in 0.1-10 pfs-days. Diminishing returns on data parallelism meant that larger training runs had limited value.
2016 to 2017: Approaches that allow greater algorithmic parallelism, such as huge batch sizes, architecture search, and expert iteration, along with specialized hardware such as TPUs and faster interconnects, greatly increased these limits, at least for some applications.
2018 to 2020: More dedicated AI supercomputers at multi-petaflop scales.
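
A minimal sketch of how the pfs-day figures in the list above are derived; the GPU counts, per-GPU TFLOPS ratings, and run lengths are illustrative assumptions:

    # petaflop/s-days for a training run: sustained petaflops multiplied by duration in days
    def pfs_days(num_gpus, tflops_per_gpu, days):
        pflops = num_gpus * tflops_per_gpu / 1000.0   # TFLOPS -> petaflops
        return pflops * days

    print(pfs_days(8, 2, 6))        # a 2012-2014 era run: ~0.1 pfs-days
    print(pfs_days(100, 10, 10))    # a 2014-2016 era run: ~10 pfs-days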

There will clearly be exaflop and multi-exaflop systems using specialized AI hardware in the 2021-2023 timeframe.

A human brain has been estimated at roughly a petaflop of processing power.

Individuals Lag in Productivity Enhancement

People will not just lag AGIs; they are already lagging the AI- and software-enabled technology companies.

For individuals to match up to corporations with AI supercomputers, there need to be systems that the common person can use for internet search, e-commerce, and access to DNA information and analysis. The productivity capabilities that were available to Google in 2000 are still not available to individuals.

AI and software agents need to be made available. Individual education is not complete if people do not have the understanding to leverage available technical resources.

The lag in empowering people with technology needs to be reduced.

In 2019, Gartner projected that AI augmentation would create $2.9 trillion in value in 2021. Currently, individuals can mainly benefit from this value creation by buying shares of Google, Facebook and the other companies that are most successful at monetizing and taking advantage of superior computing systems.

SOURCES- OpenAI, Singularity University, Gartner
Written By Brian Wang, Nextbigfuture.com

41 thoughts on “AGI Lags Compute Power and Technological Empowerment of Individuals is Lagging”

  1. At least for a part of them, short-lived wives were probably just a facade. And why choose the oldest pregnant? A girl with eight great-grandparents still living would be a much better choice (for starters). Of course, our Prince should congregate with other wealthy people of similar interests in order to arrange matings from a wide and varied gene pool. There could also be different groups of long-lived people, each with a different starting point and different mean longevity achieved. As the Bible says, “Who is good leaves an inheritance to his children”, and what better legacy than longevity?

  2. No, plasticity is a different topic. What I am describing is that the next advances will – or so I believe – come with automatic generation of ANN architecture. But once a good network architecture has been formed, it is trained in a conventional way. And since the good network has superior performance per “calculation”, you will need to train it less.

    I believe that our current HW is up to the task of AGI; it is the SW architecture that is lagging. Also note that I am not proposing networks which re-arrange the overall architecture over time.

  3. Could you post a link? I used to think that too, but I arrived at that number by overestimating the number of synapses per neuron and by (erroneously) using the maximum firing frequency of the neurons instead of the statistical average.

  4. If only, over the past few millennia, the wealthy and powerful had been selecting their wives on the basis of how old their grandmothers were when they had their last child, instead of trivial issues like beauty, wit, huge tracts of land, and ability to shoot a bow from horseback.

  5. Once we learn how to do a task, it becomes easy enough that we hardly put conscious thought into achieving it. It’s not too dissimilar with AI. Once it learns how to do a task, it doesn’t need to learn it again. Once the rules and pattern recognition are in place, the AI is easily able to complete the task again and again. The number of tasks we’ll want an AGI to do is finite. Humans have a finite number of habits that govern their behavior as well. We CAN program an AI to learn as flexibly as a human, but it’s usually far more effective to program the behaviors it needs to complete the tasks we want it to perform.

  6. You can just as easily go the opposite way and say we don’t need anywhere close to the complexity of the brain, because much of its processing power is dedicated to lower-level functions, like communicating with other organs, that an AGI wouldn’t even have to worry about. Besides, do we really want our AGI to be able to demonstrate a lot of plasticity? I’d argue not, both because we want to understand and control what it’s doing, and because so far programming has proven to be more powerful for higher-level functions than learning methods. It’s kind of pointless for a robot to go through 18 years of learning and development like a human when we can just program in the knowledge it needs to know. Plasticity is not needed for AGI.

  7. What you’re describing is plasticity, a necessary element of AGI. Actually, you need meta-plasticity (learning how to learn). Look at an FPGA vs a regular computer chip (called an ASIC for very specifically tailored designs) as a rough analogy: 10x slower clock speed and 5,000 transistors per logic element, but fully reprogrammable. That’s a massive difference in cost.

  8. Plasticity does not come cheap in chip area. So increase that number by 10,000x based on current transistors needed for true plasticity.

    So the requirement becomes a 400 exaflop machine for $1 million. With the death of Moore’s Law, that could be a long way off. You’re also ignoring the power bill in that calc, which will dwarf the capital cost of the compute cluster. So maybe 10x that to account for the energy efficiency requirements.

  9. As usual people come up with inane analogies to demonstrate their ignorance. You aren’t even addressing my comment so I don’t know why you are replying to it: I said the human mind is a black box. I don’t even have to know you to know that I know way more about the workings of the brain than you do.

  10. How much is a human brain worth? Most studies put the value of a human life well above $1 million. The brain is probably most of that value. Let’s just assume the brain operates around 40 petaflops. Once you can build a 40-petaflop computer for under $1 million, you can build a human-level AI cheaper than raising a human.

  11. It takes 10,000 times the chip area to make a truly plastic neuromorphic chip right now. I’m sure there will be some improvement on that going forward – but that right there is an underestimate most people don’t include in their calculations for a compute cluster that can achieve AGI.

  12. People who actually dedicated their studies to AGI research are generally more optimistic than the average narrow AI researcher. Most AGI researchers believe a neuro-symbolic architecture will be able to get us to AGI. In most AI problems, generally the hardware has to be there first, then AI researchers will be able to create software stacks on that hardware that will solve the problem. The AGI researchers that have studied the issue generally believe we will have the hardware needed for human level AI in this decade.

  13. AGI doesn’t have to work anything like a human brain to display general intelligence. We will have AGI decades before we are capable of full simulation of a human brain.

  14. People keep saying things like “we don’t understand the brain, it’s a big black box”, which is nonsense. We understand a TON about the brain; it’s not some big mysterious black box. Do we understand it 100%? Oh heck no. But we do understand a ton, and are learning more and more. We do in fact have a basic understanding of how it holds together. It’s like someone saying “You don’t know how a car engine works, so you have zero ability to understand anything about a car, and shouldn’t be able to drive”.

  15. It remains to be determined if progress in refining narrow AI architectures also means progress in AGI. 
    GPT-3 isn’t AGI, and no transformer architecture will ever be AGI.

  16. Speaking as the originator of a number of technology prizes, the latest of which is the Hutter Prize for Lossless Compression of Human Knowledge (recently expanded by an order of magnitude), the folks with money like DARPA have a really simple solution to the AGI-lag problem:

    https://youtu.be/0ghzG14dT-w?t=622

    Take the Hutter Prize approach _really_ BIG with _really_ BIG data — macrosocial data — relevant to things like the emergence of insurgencies such as the BLM “protests” that have now produced an autonomous zone in Seattle called “Chaz”. It is definitely within DARPA’s charter, and every cent that is spent will produce a better macrosocial model according to the least biased judging criterion, summed up in a single figure of merit:

    The size of the executable archive of the macrosocial dataset.

    PS: Note, this _only_ solves the learning/training/induction half of AGI — not the decision side (sometimes inadequately called “inference” — inadequate because, although the induced model provides optimal inference, sequential decision theory needs, also, a utility function with which to select various inferred outcomes of actions).
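
    A toy sketch of that compression-based figure of merit. zlib is only a stand-in for a real model-based compressor, the sample data is hypothetical, and the actual prize counts the size of the self-extracting executable, decompressor included:

        import zlib

        # Figure of merit: the smaller the lossless archive of the dataset,
        # the better the model implicit in the compressor.
        def figure_of_merit(dataset: bytes) -> int:
            return len(zlib.compress(dataset, 9))   # level 9 = maximum compression

        sample = b"hypothetical macrosocial event log line\n" * 1000
        print(figure_of_merit(sample), "bytes for", len(sample), "bytes of data")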

  17. The development is actually moving really quickly now. Just check out application of “attention” on natural language processing. Amazing. A dramatic improvement over just two years ago.

  18. These estimates are getting tiresome. The truth is we are lacking something that goes beyond mere computing power. Besides this, these estimates will turn out to be disastrously low anyway. We are like cavemen estimating constraints on nucleosynthesis.

  19. Well, I think the reason is much more fundamental than that. From what we believe/know about physics, it takes a tremendous amount of energy to travel at a fraction of light speed between the stars, which makes it near impossible to send human bodies at any reasonable time scale from an alien planet to our planet. 

    The only solution to this “problem” is to create intelligent computers that can survive for tens of thousands of years and put some “sleeping” computers in a slow starship that will take a few millennia to reach the next star.

  20. The graph seems to put the human brain at about 10^16 calculations per second. Let’s look at that. According to this site, the highest estimate of the number of synapses in the human brain is 10^15 [1]. But how often do the neurons fire? Well, they can fire up to 1,000 times per second, but on a statistical average they only fire 0.16 times per second [2].

    If we generously assume that each synapse performs the equivalent of one integer calculation per firing, we obtain about 2*10^14 calculations per second. Note, to simulate a human brain it may be necessary to hold the information of 10^15 weights, but you only have to use 2*10^14 of the weights per second and only have to perform 2*10^14 calculations per second.

    So, a system with the following characteristics should be able to simulate a human brain:
    200 T-ops per second
    2000 Terabytes of memory
    200 Terabytes/second data bus

    This is absolutely doable.

    [1] https://human-memory.net/brain-neurons-synapses/
    [2] https://aiimpacts.org/rate-of-neuron-firing/
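
    Spelling out that arithmetic in a minimal sketch (one integer op per synaptic event and two bytes per stored weight are the assumptions):

        synapses = 1e15            # upper-end estimate of synapses in a human brain [1]
        avg_rate_hz = 0.16         # statistical average firing rate per neuron [2]
        bytes_per_weight = 2       # assumed storage per synaptic weight

        ops_per_second = synapses * avg_rate_hz                 # ~1.6e14, i.e. ~200 T-ops per second
        memory_terabytes = synapses * bytes_per_weight / 1e12   # ~2000 Terabytes to hold all weights
        active_weights_per_second = ops_per_second              # only ~2e14 weights touched each second

        print(ops_per_second, memory_terabytes, active_weights_per_second)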

  21. Someone is bound to say that 10 million synapses are way less than what the human brain has (~10^12). Of course, but you could bundle the initial weights and structures. “Make ten thousand loops like this here with a random strength centered around the value X” can create a lot of synapses, but the information needed to define them is very low. So the sparse information of the brain can still be small even though the number of synapses may be extremely large.

    And this is one of the reasons why I think that the next step in AI will be algorithms that discover ANN architectures automatically. You can imagine optimizing small networks with, say, one million connections and creating different such sub-networks. Then, you could train an ANN to connect these blocks in different configurations to solve a task efficiently. Note that the total information in the architectural output of the first and second ANN needs only to contain at most 250MB to result in a toddler brain.

    And once you have reached this level, it will be very easy to create the best brain that you can possibly make with this information, i.e. a genius. And then you just make the final network bigger and you have super human general AI.

  22. There has been tremendous progress in algorithms and I believe that there will be some major developments. When you think about it, the human brain can learn a lot of things quickly with few examples. Now, a few people would say that our brain already has hard-wired structures that we use to learn new things, i.e. “pre-trained structures”. These structures have been developed during billions of years of evolution.

    While that may be true, we also know that about a third of the genome codes for the brain, which equates to about a billion base pairs. That’s not even a gigabyte, but more like 250 MB. That is the upper theoretical limit of how much pre-trained information our brains could contain.

    And just how effective can this coding be? The structures are coded with chemical gradients, which is probably a very “lossy” process, which in turn means less (actual) information in our pre-trained structures. Weight information in an ANN, by contrast, is not lossy. My guess is that if we had, say, 10 MB of real pre-trained data, it would suffice for a human brain.
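
    For reference, the arithmetic behind that 250 MB upper bound, assuming 2 bits of information per base pair (four possible bases):

        base_pairs = 1e9                # roughly a third of the genome, per the estimate above
        bits = base_pairs * 2           # four bases -> 2 bits per base pair
        megabytes = bits / 8 / 1e6      # = 250 MB upper bound on "pre-trained" information
        print(megabytes)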

  23. The article is a big over-simplification.
    One must differentiate between training and inference.
    Training a neural network model is what takes all that hardware, and it’s very energy intensive, just like it is for humans. We need years of painstaking schooling before we output something useful. Once trained, the networks can be used with little hardware and extreme energy efficiency to solve problems. That’s how it’s possible to have self-driving cars with just a modest computer in the trunk. Somewhere else, there is a huge computer cluster taking in driving data and doing the trial-and-error training day and night. Once the models produce output below a certain error threshold, they are pushed out to the small inference machines in each car to be used for practical work.

    So, we will all be able to use AI applications. However, it will be a while until we can train our own.
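
    A minimal sketch of that training/inference split, with a deliberately tiny model standing in for the real thing (all numbers are illustrative):

        import numpy as np

        # Training: expensive, iterative, done once on big hardware.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 8))
        true_w = rng.normal(size=8)
        y = X @ true_w + 0.1 * rng.normal(size=1000)

        w = np.zeros(8)
        for _ in range(5000):                        # the many passes are the costly part
            grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of mean squared error
            w -= 0.01 * grad

        # Inference: the trained weights are shipped out and reused cheaply.
        def predict(x_new):
            return x_new @ w                         # one matrix-vector product per query

        print(predict(rng.normal(size=8)))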

  24. A brain is part of a body, and intelligence is not the result of a computer or computer system; responding to the outside world involves non-computational and non-computable processes. If I squint, maybe I can imagine that parts of the autonomic nervous system have elements of computation in the way they function, but that’s a tiny part of the whole. When we come to the functioning of the human mind… there is still not even a basic understanding of how it all hangs together, not even a glimmer. It’s a black box.
    The people pursuing AGI are the equivalent of monkeys throwing shit at the wall: they have no idea what they’re doing.

  25. It’s not a matter of living longer. There’s ample evidence we forget most of what we experience, and misremember most of the rest. Our memories are only as good as the last time we remembered something, and that last memory is slightly distorted to fit the “story” we tell ourselves, so that by the time we remember something for the 100th time, it may bear little resemblance to an objective memory… or maybe we have weeded it out altogether.
    There ARE people who can remember everything they’ve ever done nearly their entire lives, usually starting at some point in pre-teen childhood. But they are very rare, and sometimes dysfunctional otherwise. Think of Rain Man.
    If we’re ever going to get huge jumps in intelligence, it probably won’t be just from memorizing things anyway. The trick is in knowing what to ask and using good judgment to weed out the useless stuff, kind of like doing a good Google search, which hints at the human-machine hybrid that will probably come from a future neural-net.

  26. Thomas Edison once said “We still don’t know one-tenth of one
    percent of anything”. It still applies.

  27. Increasing lifespan doesn’t require technology. A few centuries of artificial selection should be enough to see results. Probably it has already been done, but the Highlanders live in hiding. Maybe they don’t even seem rich.

  28. It would still be worth heading out into the void but it would be done in an ultra ultra safe manner with lots of probes being sent ahead to scout for any possible danger.

  29. If you had the potential to live forever, would you risk it all and head out into the void? Or would you spend it resort hopping anywhere there was a decent resort? You’re not likely to live very long if you take unnecessary risks for long enough.

  30. There is currently no known architecture that could be mistaken for AGI, it doesn’t matter how much compute you have. Someone could stumble across the answer next month or next millennium. Lots of people are trying different ideas they hope will work, it’s difficult to predict when someone will come up with the solution to a complex problem that isn’t well understood. Until someone stumbles across an architecture for AGI, the only thing in the cards for the foreseeable future is artificial narrow super intelligence.

  31. Even PhD and post-PhD experts have to take much more time of their short lives to understand the little details of their domains, which are but a drop in an ocean of information.

    So the Fermi paradox filter might be:
    1.. you need a longer lifespan to progress much* beyond where we are.
    2.. But you need to progress beyond where we are to usefully increase lifespan.
    3.. So most species will grind to a halt at mid-21st-century-level tech, maybe settling one or two other worlds in their immediate solar system, but not doing anything interstellar, and so entering a long period of stagnation until something happens they can’t deal with and they go extinct or collapse back into a dark age.
    4.. Hence no alien visitors
    5.. Except for that one species who started out with a lifespan of 10,000 years through sheer luck, but being a tree-based species they have no inherent desire to spread out anyway.

    *The word “much” does the hard work of making my theory hard to disprove.

  32. Given the over-abundance of information about any technical or non-technical topic that must be understood before we can make any significant contribution to it, I’d say we lack the brain power to absorb and digest any significant fraction of it.

    Even PhD and post-PhD experts have to take much more time of their short lives to understand the little details of their domains, which are but a drop in an ocean of information.

    So Google, Facebook et al can have Skynet under their command, and for most of us it’s the same. We don’t understand it, we can’t use it and as a result don’t care about it. Those with the inclination can dedicate their lives to some super-specialized domain and make a good buck out of it, but for the rest of us it’s just life as usual, not knowing and not caring.

    The net result of this will be the end of exponential growth and the start of the long, flatter part of the S-curves. Which might be all right IMO, given that I don’t think any human is really in a hurry to be replaced by machines, except in the most boring of tasks, which also seems to be happening.
