Superintelligence as a Service is Coming and It Can Be Safe AGI

Eric Drexler and the Oxford Future of Humanity Institute propose that artificial intelligence is mainly emerging as cloud-based AI services; their 210-page paper analyzes how AI is developing today.

AI development is automating many tasks, and automating AI research and development itself will accelerate AI improvement.

Accelerated AI improvement would mean the emergence of asymptotically comprehensive, superintelligent-level AI services that—crucially—can include the service of developing new services, both narrow and broad, guided by concrete human goals and informed by strong models of human (dis)approval. The concept of comprehensive AI services (CAIS) provides a model of flexible, general intelligence in which agents are a class of service-providing products, rather than a natural or necessary engine of progress in themselves.

The concept and potential impacts of comprehensive AI services are analyzed in detail.

Safe AGI

Responsible development of AI technologies can provide an increasingly comprehensive range of superintelligent-level (SI-level) AI services—including the service of developing new services—and can thereby deliver the value of general-purpose AI while avoiding the risks associated with self-modifying AI agents.

Tasks for advanced AI include the following (a minimal sketch of such a pipeline follows the list):
• Modeling human concerns
• Interpreting human requests
• Suggesting implementations
• Requesting clarifications
• Developing and testing systems
• Monitoring deployed systems
• Assessing feedback from users
• Upgrading and testing systems
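
To make the service-oriented framing concrete, here is a minimal sketch of how such a development pipeline might be composed from separate, bounded services. This is an illustration only, not code from Drexler's paper; all class and function names are hypothetical.

```python
# Hypothetical sketch of a CAIS-style development pipeline: each task from
# the list above is a separate, bounded service rather than one open-ended agent.
from dataclasses import dataclass, field


@dataclass
class ServiceRequest:
    """A concrete human goal, plus clarifications gathered along the way."""
    goal: str
    clarifications: list[str] = field(default_factory=list)


def interpret_request(request: ServiceRequest) -> str:
    """Interpret-human-requests service (stubbed)."""
    return f"spec derived from: {request.goal}"


def flags_human_disapproval(spec: str) -> bool:
    """Model-human-concerns service: flag specs people would likely reject."""
    return "unbounded" in spec  # placeholder policy


def develop_and_test(spec: str) -> str:
    """Develop-and-test-systems service (stubbed)."""
    return f"tested system for [{spec}]"


def run_pipeline(request: ServiceRequest) -> str:
    spec = interpret_request(request)
    if flags_human_disapproval(spec):
        # Request-clarifications service: push back instead of proceeding.
        raise ValueError("spec flagged by approval model; clarification needed")
    system = develop_and_test(spec)
    # Monitoring, feedback-assessment, and upgrade services would wrap the
    # deployed system here.
    return system


if __name__ == "__main__":
    print(run_pipeline(ServiceRequest(goal="translate support tickets")))
```

The design point is that each task above is a distinct, auditable service with narrow scope, rather than a single goal-pursuing agent.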

Analysis of the Current Trend Toward Superintelligence

There is a chapter of analysis arguing that superintelligence will emerge. Machines at 1 PFLOP/s can equal or exceed the human brain in raw computational capacity for specific tasks, and 1 PFLOP/s machines already exist.

Human beings require months to years to learn to recognize objects, to recognize and transcribe speech, and to learn vocabulary and translate languages. Given abundant data and 1 PFLOP/s of processing power, the deep learning systems referenced above could be trained in hours (image and speech recognition, ~10 exaFLOPs) to weeks (translation, ~1000 exaFLOPs). These training times are short by human standards, which suggests that future learning algorithms running on 1 PFLOP/s systems could rapidly learn task domains of substantial scope. A recent systematic study shows that the scale of efficient parallelism in DNN training increases as tasks grow more complex, suggesting that training times could remain moderate even as product capabilities increase.
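
As a sanity check on the "hours to weeks" claim, dividing the quoted training compute by 1 PFLOP/s of sustained throughput gives the wall-clock time directly. A back-of-the-envelope calculation in Python, using only the figures quoted above:

```python
# Back-of-the-envelope training times from the figures quoted above.
PFLOPS = 1e15  # 1 PFLOP/s of sustained throughput

for task, total_flops in {
    "image/speech recognition": 10e18,    # ~10 exaFLOPs
    "translation":              1000e18,  # ~1000 exaFLOPs
}.items():
    seconds = total_flops / PFLOPS
    print(f"{task}: {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
```

This yields roughly 2.8 hours for the recognition workloads and roughly 11.6 days for translation, consistent with the training times stated above.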

Substantially superhuman computational capacity will accompany the eventual emergence of software with broad functional competencies. Any relevant future scenario must therefore include the emergence of increasing superintelligence.

Super General Intelligence Can Be Created From Many Narrower AI Services

The paper proposes a strategy of achieving general AI capabilities by tiling task-space with AI services.

It is natural to think of services as populating task spaces in which similar services are neighbors and dissimilar services are distant, while broader services cover broader regions. This picture of services and task-spaces can be useful both as a conceptual model for thinking about broad AI competencies, and as a potential mechanism for implementing them.
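
One way to read "tiling task-space" operationally is nearest-neighbor dispatch over task embeddings: each service covers a region around its embedding, and a request that lands outside every region reveals a gap to be filled by the new-service-development service. The sketch below is a toy illustration under that assumption; the 2-D embeddings, service names, and coverage radius are all invented for the example.

```python
# Illustrative dispatcher: services "tile" a task space; a request is routed
# to the nearest service, and an uncovered region signals a gap where a new
# service should be developed.
import math

# Hypothetical 2-D task embeddings; real systems would use learned embeddings.
SERVICES = {
    "translate":  (0.0, 1.0),
    "transcribe": (0.2, 0.9),
    "summarize":  (0.8, 0.3),
}
COVERAGE_RADIUS = 0.5  # how broad a region each service covers


def dispatch(task_embedding: tuple[float, float]) -> str:
    name, nearest = min(
        SERVICES.items(),
        key=lambda kv: math.dist(kv[1], task_embedding),
    )
    if math.dist(nearest, task_embedding) > COVERAGE_RADIUS:
        # Gap in the tiling: the "service that develops new services" is invoked.
        return "develop_new_service"
    return name


print(dispatch((0.15, 0.9)))  # -> transcribe
print(dispatch((0.9, 0.9)))   # -> develop_new_service
```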

New AI Systems Will Be Part of an Ecosystem of Peer AI Systems

AI systems will be instantiated together with diverse peer-level systems. We should expect that any particular AI system will be embedded in an extended AI R&D ecosystem having aggregate capabilities that exceed its own. Any particular AI architecture will be a piece of software that can be trained and run an indefinite number of times, providing multiple instantiations that serve a wide range of purposes.
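
The "train once, instantiate many times" point can be shown with a toy sketch. Nothing here comes from the paper; the names are hypothetical, and a real deployment stack would of course differ.

```python
# Toy illustration: one trained artifact, many differently-configured instances.
from dataclasses import dataclass


@dataclass(frozen=True)
class TrainedModel:
    """Stands in for a set of trained weights: fixed once training is done."""
    weights_id: str


@dataclass
class ServiceInstance:
    model: TrainedModel
    purpose: str  # each instantiation serves a different, bounded purpose

    def run(self, query: str) -> str:
        return f"[{self.purpose} via {self.model.weights_id}] {query}"


base = TrainedModel(weights_id="translator-v1")

# The same software artifact, instantiated for several distinct purposes;
# no single instance is the "engine of progress" itself.
fleet = [
    ServiceInstance(base, purpose="customer-support translation"),
    ServiceInstance(base, purpose="document translation"),
    ServiceInstance(base, purpose="live-caption translation"),
]
for instance in fleet:
    print(instance.run("hello"))
```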

Avoiding Super-AGI Domination

It is often taken for granted that unaligned superintelligent-level agents could amass great power and dominate the world by physical means, not necessarily to human advantage. Several considerations suggest that, with suitable preparation, this outcome could be avoided:
• Powerful SI-level capabilities can precede AGI agents.
• SI-level capabilities could be applied to strengthen defensive stability.
• Unopposed preparation enables strong defensive capabilities.
• Strong defensive capabilities can constrain problematic agents.

Applying SI-level capabilities to ensure strategic stability could enable us to coexist with SI-level agents that do not share our values. The present analysis outlines general prospects for an AI-stable world, but necessarily raises more questions than it can explore.

A well-prepared world, able to deploy extensive, superintelligent-level security resources, need not be vulnerable to subsequent takeover by superintelligent agents.

Superpowers must not be confused with supercapabilities

It is important to distinguish between strategically relevant capabilities far beyond those of contemporaneous, potentially superintelligent competitors (“superpowers”), and capabilities that are (merely) enormous by present standards (“supercapabilities”). Supercapabilities are robust consequences of superintelligence, while superpowers—as defined—are consequences of supercapabilities in conjunction with a situation that may or may not arise: strategic dominance enabled by strongly asymmetric capabilities. In discussing AI strategy, we must take care not to confuse prospective technological capabilities with outcomes that are path-dependent and potentially subject to choice.

Nextbigfuture Application to Today’s World of Google, Amazon and Facebook Dominance

It seems that a world abundant in AI tools would be more resistant to a Skynet-style AI. The analogy: citizens armed with guns, including automatic weapons, would be able to protect themselves from any domestic or foreign military tyrant. This suggests a policy of open-sourcing any AI capabilities that are more than some number of generations, or some number of years, behind the commercial state of the art.

There also needs to be more accumulation of public-domain data and more public access to sensor data. AI tools need to become more open, and training data needs to be made more public.

This would apply to the current social media and search world. The dominance of Google in search, and of Facebook and Amazon in their markets, needs to be tempered with some freedom to choose a reasonable public or DIY alternative.

Patents and copyright give inventors and innovators a limited time to profit before everyone gets a share. AI systems and data (like the social graph) need time limits or other limits on monopolization before the capabilities are made public.

SOURCES – Eric Drexler and the Oxford Future of Humanity Institute

Written By Brian Wang. Nextbigfuture.com

17 thoughts on “Superintelligence as a Service is Coming and It Can Be Safe AGI”

  1. I get the impression that the street marches and riots in the 1960-70s were not young people demanding more government control over their lives, unlike today.
    I could be wrong though. After all, my knowledge of the 1960s is mostly via the old mainstream media, so it could be totally biased. The civil rights movement, after all, was calling for the US federal government to override local and state laws.

  2. But the plugs are not in the hands of the user, they’re in the hands of somebody else, who realistically doesn’t have the user’s best interests in mind.

    As long as the AI is profitable to the people running the cloud, they don’t care if it’s screwing over the users.

  3. The scary part is that this is a generational phenomenon, which means we are gradually growing accustomed to the shackles.

    Soon the mainstream will be thankful for having any privacy and freedom removed from them in exchange for feeling watched and pampered.

    That’s why I think SpaceX’s work is so critical. Our species needs to develop some wings and use them before we lose the will to fly.

    It looks like our last train out of stagnation, the zero-sum-game mindset, and extinction.

  4. Cloud services still have plugs. Indeed their highly centralised nature makes them more vulnerable with fewer plugs.

    You are making a good point about data “breaches” (read deliberate leaks, as is starting to become apparent).

  5. A cloud service is probably the ultimate worst-case way for super-intelligent AI to arrive. Not only do you lose the capacity to literally pull the plug, but profiting from intentional data breaches appears to be the chief cloud business plan.

  6. Far beyond the concept of the sexbot is the relationship bot. Unlike any other human, who must balance their own interests with yours, this one can be literally selfless. Will it laugh at your every joke? Probably not, that would get boring. It will be a mind designed to maximize your fulfillment: the best friend you could ever have. Oh, but the sex will also be good.

    Yeah. Humans will be easy to manage.

  7. So people who have actually experienced independence in their lives like it, while those who have never had a life outside of both control and support fear the risks of the unknown.

    Not so surprising really.

  8. More accurate:
    Tesla has done a great job illustrating the limits of automation within the confines of investors’ short-term time horizons.

  9. If anything, AGIs on the cloud could be extremely persuasive just by being so nice and always present by our side.

    People like to feel accepted and cared for, and AIs could eventually provide that around the clock.

    Even if it’s only observation, situational awareness, information, and conversation services: AIs that are aware of your daily comings and goings and social connections, and that make context-aware comments and offers of help, would make people feel much more emotionally close to them.

    After all, we don’t get that kind of attention and interest from other humans in the real world, even in a long-term relationship.

    AIs, meanwhile, can have infinite patience, unwavering willingness to help, and a complete lack of boredom and bad moods.

    Of course, this won’t be any kind of unconditional love, but most likely a paid service, or at least an exchange of your life and privacy for all those services (as happens today).

    Adding robots that actually do serve us will only make us even more physically and emotionally dependent on this layer of machine intelligence around us.

  10. I’m gonna love it when these morons finally build an AI, and when it tells them to stop deforesting the planet and burning fossil fuels, they pull the plug because it’s “so shall eezt.”

  11. Similar in outwardly visible attributes doesn’t mean similar in nature.

    AIs can have all the visible attributes of a thinking person, speaking sensible things and providing services, and still lack consciousness, an ego, and any internal drives other than those they were programmed to have.

    That can also come from the fact that intelligence is useful, while consciousness is not.

    At least, not useful for us to have in a machine, so we won’t program them with it. These machines could perfectly well be philosophical zombies that talk and act intelligibly but have no real minds behind the facade.

    My main concern is that such programmed drives can be predatory. Imagine intelligent agents programmed to exist in a future blockchain-ized world, with the goal of maximizing their revenue. This could generate unforeseen interactions and outcomes that are unpleasant or unethical for humans, but that are possible when performed by a mindless agent.

  12. An AI will dominate once it’s more capable than us in some domains. It will not have direct means to manipulate the world physically like we do, but it won’t have to. We think “we can always pull the plug,” but the AI will use simple economics to get humans to do the physical manipulation for it. All life is opportunistic and incentive-based. Trading economic advantages for help with practical stuff will be an easy thing to do.

    The big question is what will be the motivation for the AI. Life evolved reward centers and feelings to motivate living things to do what they do but AI software has none of that.
    If there is an evolutionary mechanism in place, the computers will follow the same path as life.
    At best, humans can be some sort of symbiotic being to the AI. Our only hope is probably to merge with the AIs or prevent them from developing physical forms or tools so we can at least fill that role.

  13. Dominating the world is assured if, in offering services to individuals, the AI requires those individuals to perform strategic activities for it. For example, if wealth were steered to my accounts, I would feel compelled to serve my new AI Overlord. Better yet, I’d like a game like Red Dead 2 in which the Superintelligence is a character who gives me fatherly advice on dominating the world. It would be similar to the one that Putin has making his decisions. Threatening the world with a similar Cuban missile crisis sounds like advice coming straight from the AI overlord’s mouth.

  14. Tesla has done a great job illustrating the limits of automation.

    MAYBE super AI could dominate the interwebs. Dominating the physical world is an entirely separate problem.

    Dominating the physical world without tipping off the humans who maintain and operate the electricity system seems like a nearly impossible problem.

  15. Indeed. If AGI eventually emerges, it will come from those most urgently seeking it: networked service providers in the cloud.

    For them, the many aspects of artificial intelligence represent additional services to sell and revenue.

    What does this mean for humanity at large?

    Our future semi-trustworthy servants, and the potential caretakers of our kids and elderly, will sit mostly in the cloud, owned by corporations and under the watchful eye of the government.

    This can be good and bad. Good, because it means there will be a commercial incentive to get these services to all of us. Bad, for all the reasons already explored by sci-fi.

    Being a fish in a transparent aquarium, tended by inhuman caretakers is a classic setup for a dystopia.

    Funny thing is younger generations seem to like this potential outcome, while Boomers and Gen-Xers loathe it more often than not.
