Narrow Super Intelligences are Near

In June 2019, Tej Kohli, a London-based tech entrepreneur, invested another $100 million into Rewired, a robotics-focused venture studio.

Kohli predicts that the global AI sector will be worth $150 trillion by 2025. For comparison, the total valuation of the internet is about $50 trillion. These are sector valuations, not GDP impact: the US Internet provides a $2.1 trillion boost to GDP, and the Internet likely boosts world GDP by about $6-8 trillion in total. By analogy, Kohli is predicting something like an $18-25 trillion boost to global GDP by 2025 because of AI. This would be about a 4% per year boost to world GDP growth, although it will likely not be spread evenly, with more of the gains coming in the later years.
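
The implied annual growth figure can be sanity-checked with a little arithmetic. Below is a minimal sketch; the 2019 world GDP figure of roughly $87 trillion is an outside assumption, not from the article, and the boost is taken at the midpoint of the $18-25 trillion range over a six-year horizon.

```python
# Sanity check of the implied annual GDP boost from Kohli's prediction.
# Assumptions (not from the article): world GDP of ~$87T in 2019,
# and the boost arriving over the six years 2019-2025.
world_gdp_2019 = 87.0            # trillions of USD, approximate
boost_mid = (18.0 + 25.0) / 2    # midpoint of the $18-25T range

total_growth = (world_gdp_2019 + boost_mid) / world_gdp_2019
annualized = total_growth ** (1 / 6) - 1  # compounded over six years

print(f"total boost: {boost_mid / world_gdp_2019:.1%}")  # prints: total boost: 24.7%
print(f"annualized:  {annualized:.1%}")                  # prints: annualized:  3.7%
```

The compounded result lands just under 4% per year, consistent with the article's rough estimate.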

PricewaterhouseCoopers estimates that AI will boost world GDP in 2030 by $15.7 trillion. This would be more than the current output of China and India combined. $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects.

Human experts are usually single narrow specialist intelligences. All humans also tend to have some level of common generalist knowledge, typically called common sense or shared culture.

Artificial intelligence seems likely to drive massive economic growth through narrow superintelligence. However, huge efforts will be needed for AI to conquer high-value narrow knowledge areas or capabilities like driving.

Billions are being spent to master driving. Full self-driving is near, and it will be ten to one thousand times safer than human driving. It will involve cameras, sensors and special purpose-built computers.

The Traditional Broad Categories of AI: Narrow AI, General AI and Super AI

Wait-but-Why summarized the very broad categories of Artificial Intelligence.

AI Caliber 1) Artificial Narrow Intelligence (ANI): We currently have weak AI, also called Artificial Narrow Intelligence. Various deep learning systems dominate, and each AI specializes in a narrow area. They recognize some kinds of images or particular patterns. One system may handle breast cancer X-rays and another a different type of cancer. One system plays chess and another Go, though a single superior system can be adapted for use with either Go or chess.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.

AI Caliber 3) Artificial Super Intelligence (ASI): Nick Bostrom defines superintelligence as an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.

A narrow AI in the top 5 to 25% of human driving performance is about the current level of AI self-driving. This is worth tens of billions of dollars.

A narrow AI that is slightly above human-level for driving will be worth many trillions of dollars to the world economy.

Narrow Super Intelligence or Narrow Super Capabilities alone can be worth many trillions of dollars.

There will also be value in creating broad, near human-level general intelligence. Mastering general areas of conversation beyond today's chatbots will be useful, as will systems that can activate the right searches and database accesses, invoke the right narrow AIs, and bring humans into the loop at the right time.

Having teams of narrow AIs and humans in the cloud will get us to weak super-intelligence.
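
The idea of routing work among narrow AIs and humans can be sketched as a simple dispatcher. This is a minimal illustration of the architecture, not any real system's API; the task names and handler functions below are hypothetical.

```python
# Minimal sketch of a dispatcher that routes requests to narrow
# specialist AIs and escalates to a human when no specialist fits.
# All handler names here are hypothetical illustrations.

def chess_ai(request):
    return f"chess move for: {request}"

def radiology_ai(request):
    return f"x-ray analysis for: {request}"

def human_in_the_loop(request):
    return f"escalated to a human: {request}"

# Registry mapping narrow task types to their specialist systems.
SPECIALISTS = {
    "chess": chess_ai,
    "radiology": radiology_ai,
}

def dispatch(task_type, request):
    """Route to a narrow AI if one exists, otherwise bring a human in."""
    handler = SPECIALISTS.get(task_type, human_in_the_loop)
    return handler(request)
```

For example, `dispatch("radiology", "scan 42")` goes to the radiology specialist, while an unrecognized task type like `"poetry"` falls through to a person, which is the "humans in the loop at the right time" behavior described above.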

23 thoughts on “Narrow Super Intelligences are Near”

  1. Playing Go isn’t very useful in itself, but it’s an example of a problem that was difficult for AI until very recently. Yet now it’s solved to a level surpassing human performance. At the least, it’s a proof of principle. From AI researchers’ point of view, maybe it was also a good learning/testing platform for some of the principles they used there.

    For the ultimate applications, you’d have to ask an AI expert. AFAIK the typical applications at the moment are pattern-matching tasks. This includes computer vision, natural language processing, big data analysis, etc. Most of that doesn’t quite surpass or even approach human level yet, but it still allows us to automate things we couldn’t before, and it’s getting better.

    One of the bigger applications may be self-driving cars, which I think combines several layers of computer vision, various other pattern recognition, as well as rule-based logic. But as the other commenters point out, we’re still a while away from fully autonomous driving that surpasses human performance in most situations.

    But the key point is that individual specific tasks are much easier to fine-tune and improve up to and past human performance, than it would be to make more generic AI. AlphaGo and a few other examples demonstrated exactly that.

  2. As future predictions go, here is one guaranteed to persist: misnomers. Full self driving is one little thing. Much bigger is “autonomy”, as related to any machine. Autonomy implies that machine knows what to do in any situation, without prior knowledge and planning of everything. What the misnomer people call “autonomy” equally applies to washing and vending machines, elevators and hand driers. All that is automation, which has been around for generations, and is not appealing. Essentially, these are lies, not misnomers, as intent is the cause of each of them.

    Autopilot that is not autopilot.
    Autonomy that is not autonomy.
    Scientist that is not scientist.
    Meat that is not meat.
    One can add to this list for days.
    It is like “1984” extended edition.

  3. Alright, but playing Go isn’t all that useful in the greater scheme of things. What’s the most sophisticated and value-added use of such AI, that pushes the limits of what can be done with it?

  4. Oddly, except for Ludus’ last, future-looking sentence, you both seem in violent agreement: Tesla cars won’t be smart enough to drive themselves without intervention, they’ll need a skilled human driver ready to take over if/when it starts to go wrong.

    ‘Full Self Driving’ is currently about as big a misnomer as ‘AutoPilot’…

  5. There is no need in getting emotional about any of this.
    That landing system uses visible and IR cameras and machine vision algorithms. Neither sensory mode works in rain, while fog and snow will affect the algorithms, disabling them in some situations. Planes have to be capable of landing when and where they need to land, not when and where a fragile machine vision system may or may not work, as there is only one way to find out if it works this particular time. That is not how aviation does things, not even Boeing. Production planes can already land in zero visibility, using ILS. Until another system matches that level of performance, all “nice tries” will remain just that.
    Google has been trying for about 10 years now. They mapped and photographed the entire planet at street level while at it. They have every “AI” trick at hand, including their proprietary, they attract the best talent and have unlimited resources of any kind. The future you are looking at is their past. Yet they are still reluctant to field their system at any scale. If that fact does not help you understand the state of this art, it will be better to let it go.

  6. Is there some point in coming to a future-looking site and saying everything is garbage?
    Obviously, when liabilities are more serious, more care will be taken.
    Forklifts at Costco can kill you, so they put a stupid rotating yellow light on top and it makes obnoxious sounds.

    Show me this “well acknowledged” drivel.

    Actually, I think it is fairly trivial to avoid most injuries. You have sensors and have it stop moving its arm or whatever if it makes unexpected contact with anything. At least, for fairly slow human size robots. Obviously driving and such is more complex. Flying they can do now. Auto pilot can land or take off a plane as well as all the stuff in between. https://techcrunch.com/2019/07/05/watch-a-plane-land-itself-truly-autonomously-for-the-first-time/

  7. Disregarding the absurd numbers, it’s interesting to note that the local effects of productivity enhancers like AI is to lower GDP. If there are fewer crashes, that reduces demand for everything from insurance to emergency rooms to body shops. Self-driving trucks can run non-stop, reducing costs. These reduced expenses translate to lower GDP contributions of the associated firms. The Fed will have to keep goosing demand to get those truckers hired to do something else. That will be a boon (they’ll make more stuff for all of us) but it complicates economic policy.

  8. Not even that. More like a mind hammer. A hammer can barely do anything other than drive a nail, but it drives a nail a lot better than your hand (or any other body part). Same with a narrow super AI – it can only perform the task it was designed for, and barely anything else, but does that one task better than a human would. The difference is that the task here is one we perform with our brain. Previous tools were for tasks we performed with our muscles.

    AlphaGo is an example of a narrow super AI – it can only play Go, but plays it better than the best human players.

  9. Lots of information in a short article. I’d like to be able to rate your articles with some kind of rating system.

  10. “scratch its balls” is just a metonym for “attend to the first indications of an issue that can be addressed by simple maintenance operations now, but if ignored could result in more serious problems”

    So actually, you want your robot to scratch its balls.

  11. I don’t see a bubble as really a problem. The original tech bubble still led to the massive investment in communications infrastructure we enjoy today. Online retail now has a massive share of overall retail sales in the economy. Many of those original promises did turn out to be correct. Especially if it’s directed in the right areas, massive investment in ai isn’t a problem that I see, even if there is massive restructuring in the meanwhile.

    Out of the 150 billion neurons of the human brain, I’m guessing a very small percentage of those are actually needed for high level processes. An airplane wing probably isn’t near as complex as a bird wing, but airplane wings can travel orders of magnitude faster. I don’t see AI as much different. Computers can already do several tasks much faster than humans. Computers are rapidly catching up in areas that they were traditionally weak in. Most of what humans think of as high level or complex processes are really just simple patterns and processes that a computer will eventually have no problem executing. AI doesn’t have to be as complex as a human brain to carry out useful tasks.

  12. There was recently an experiment where they taught rats to drive little cars. It decreased their stress levels, so I think they wouldn’t need to drink so much.

  13. we are not even close to the number of artificial neurons required to emulate a human brain fully… all we got is the tiny visual cortex of a drunk rat … And that’s what we are using to self drive A car…

  14. Its potential impact is well acknowledged: maimed or dead people, and loss of property, as a result of hundreds of millions of lines of code in software riddled with defects, instituted patching culture, and complete separation from the very concepts of reliability and dependability. If airlines allowed themselves even part of that, there would be no airlines.
    As for my inability to determine, it is a direct result of my contact with reality, and my ability to separate it from fantasies and implanted constructs. This thread is not about me though, or is it now?

  15. I see more prescience, promises and future tense, with added dark fantasies about the world united in legal insanity.
    Present tense is used in Tesla’s disclaimer on the “autopilot” that is explicitly, evidently and legally not an autopilot, but ADAS. That is sufficient basis for expecting the same from “full self-driving”.

  16. Rodney Brooks phrased his bet in a careful way likely to save him from eating a million Robotaxis. More likely what Elon has in mind, because it’s physically what will certainly be on the road in 2020, is a million Teslas with FSD hardware and software frequently connected to the to-be-released “Tesla Network”. That would enable them to operate as Robotaxis under something like Uber, with their owners in the driver’s seat but TN/AP controlling everything unless the owner intervenes. That would be legal in most of the world and would count as meeting the prediction from Elon’s POV if not from Rodney’s. A million FSD cars streaming real-world performance data at a billion+ miles a month would get to regulatory-approval-level driverless pretty quickly.

  17. Tesla FSD in Tesla Network will be an early example of narrow superAI generating enormous wealth. Tesla’s path is likely the only path. Only mass production of FSD vehicles and mass fleet learning real world data can train high human level driving (with superhuman performance because of consistency). TN scaling up then reaches narrow superAI status.

    Unlike iOS vs Android, there likely isn’t as much market for the second-best, buggy, difficult-to-update version of a self-driving vehicle operating system.

    Tesla will get there first with a massive lead and there won’t ever be a serious number two. TN will just scale to dominate the global market, both Tesla in-house and licensed to legacy OEMs. A second-best, buggy robot vehicle is just intolerably dangerous at anywhere near the price of the number one system.

  18. I expect that companies will embrace automation in the 2020s, with most of those that do not thereafter becoming non-competitive and failing.

    This will likely be an extremely good time to own investments that are benefiting from increased automation and a bad time to be living paycheck to paycheck, because the percentage of income generated by capital will be increasing and the percentage of income generated by wages will be correspondingly shrinking.

    At some point, the effects of the gains from automation will likely be outstripped by the losses in wages, causing a plunge in demand even during a surplus of things to buy and, somewhere around 2030, could see far too much investment money (from those who had investments) chasing too few investment opportunities. This will make the wealthier folks unhappy, although probably not as unhappy as the majority of folks that, even in the midst of plenty, can no longer afford most of it.

    Then society starts flailing over what is, essentially, income/wealth inequality on a sharper scale than in recent history. The shape of the eventual outcome, given the perspicacity and general selflessness that are the hallmarks of our politicians, is not one I am sanguine about, especially in the short term, where there will likely be massive pain and unrest. Because of the destabilizing effects caused by having large portions of the population idle and living on a dole, welfare, or even an UBI, I expect we will eventually see a lot of workfare.

  19. The next singularity is due in the 2020s (the last was the internet 30 years back, 60 years before that was electronics, 120 years before that was the industrial revolution, etc.).

    I decided about ten years back that this next one will be heavy duty automation. A couple of years ago I did some research and invested heavily in Rockwell Automation. This is not the glamorous stuff. It muddled along for two years and then shot up a couple months back. I went to the business news page to see what was going on and there, topping the news, was the CEO explaining why it had happened. Same reasons I had bought it in the first place.

    If you are interested enough in this science stuff to be a regular reader of sites like this, there is no reason you can’t monetize it. Don’t bet the farm, but make informed decisions.

    In case you are wondering: by 2040 (15 years from the previous singularity) I expect real manufactured minds (artilects), or the biological singularity (where aging pretty much becomes a non-player) about 7.5 years later. I’m guessing at the artilects first, because they will likely be extremely useful in attaining the next one.

    Of course, a double singularity is possible and would accelerate things.

  20. Strongly reminds me of the “new economy” of the dot-com bubble. Written mostly in future tense, full of prescience and promises, and shortly followed by bust. All those clicks will turn into dollars, they said; buy stock of pets.com, they said. Well, now, 20 years later, clicks indeed turned into dollars for Amazon, Google and a few more, but all of them were either at the larval stage, or non-existent at the height of the bubble. This thing has not even reached the height of its bubble, hence the future surviving few are not yet known to the public, or their own founders, if they even exist yet.

    Rodney Brooks, whose opinion on the matter of self-driving cars is worth more than the next thousand tons of “expert” biomass, does not expect “full self-driving” any time soon, with the exception of re-defined degenerate forms, such as point-to-point shuttles and isolated traffic. And the “general AI” is an imaginary carrot on an invisible stick. How is that “digital worm” doing, with its 137 or so neurons meticulously 3D-mapped? Did not quite come alive as a worm, did it? Well then, let’s not waste time and tackle the 150 billion neurons, each with 100k~1M synapses, dynamically reconfiguring, bidirectionally transmitting, 24+ neurotransmitter-encoded in pure analog, also interacting with glial cells that are three orders of magnitude more numerous than neurons – it is going to work, trust us.

Comments are closed.