Current Uncontrolled Technology Versus Uncontrollable AGI

A fear of many who are familiar with AI and technology is that artificial intelligence could progress to the point where it is rapidly self-improving and no longer controllable by humanity. The scenario is that an Artificial General Intelligence (AGI) becomes more powerful and more intelligent than humans and circumvents all human control.

Currently, large technology companies substantially leverage artificial intelligence, data and other technologies to gain various forms of power and influence.

A 450-page report from the US House of Representatives made the case that the big technology companies have too much power.

NBC News reported that Big Tech has substantial control over online speech.

– Google dominates the search engine market, maintaining a 92.05% market share as of February 2021.
– YouTube has over 90% market reach.
– Social media platforms are where 71% of Americans get their news.

Apple has over $2.2 trillion in market value.
Microsoft has over $1.9 trillion in market value.
Amazon has over $1.68 trillion in market value.
Google has over $1.5 trillion in market value.
Facebook has $860 billion in market value.
Twitter has only $55 billion in market value, but it has broad influence on media.

In 2019, 83% of journalists used Twitter as a primary source for stories and 40% used Facebook.

Andrew Ng is an AI expert. He helped create Google Brain and worked at the Chinese search company Baidu.

Ng asks, “What can AI do?” In 2017, he said that anything a typical person can do with less than one second of thinking, we can now automate or will soon be able to automate.

AI is the new electricity. Examples of A-to-B mappings with AI:

– Input a picture – output an identification of what is in the picture.
– Input a loan application – output a prediction of whether the applicant will repay.
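To make the A-to-B idea concrete, here is a minimal sketch of the loan example: a toy logistic model, trained by gradient descent, that maps invented application features (income, debt ratio) to a repayment estimate. The data and feature choices are hypothetical, not from the article.

```python
import math

# Toy data, invented for illustration.
# Each row: A = (income in $10k, debt ratio), B = 1 if repaid, 0 if defaulted.
data = [((8.0, 0.1), 1), ((6.0, 0.2), 1), ((7.0, 0.3), 1),
        ((2.0, 0.8), 0), ((3.0, 0.9), 0), ((1.5, 0.7), 0)]

w, b = [0.0, 0.0], 0.0

def predict(x):
    """The learned A -> B mapping: application in, repayment probability out."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Fit with plain stochastic gradient descent on the log loss.
lr = 0.1
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(predict((7.5, 0.15)), 2))  # high income, low debt: near 1
print(round(predict((1.0, 0.95)), 2))  # low income, high debt: near 0
```

The pattern is the same whether the mapping is picture to label or audio to transcript; only the model capacity and the data change.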

Jobs are in danger of AI automation if the job is a sequence of one-second tasks.

Digitization means we have more data available for AI to use for training and learning. Medical records, for example, have been digitized and can now be used for neural network training.

We have had:
(1) Supervised learning – AI systems learn from labeled examples, with humans overseeing the learning.
(2) Transfer learning – applying what an AI learned on one problem to another problem.
(3) Unsupervised learning – AI that learns from data without human supervision.
(4) Reinforcement learning – AI that learns by trial and error from reward signals.
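Of the four, reinforcement learning is the easiest to show in a few lines. The sketch below is my illustration, not from the article: an epsilon-greedy agent that learns which of two slot-machine-style actions pays out more, purely from reward feedback rather than labeled examples. The payout probabilities are invented.

```python
import random

random.seed(0)
true_payout = [0.3, 0.8]   # hidden reward probability per action (invented)
estimates = [0.0, 0.0]     # the agent's learned action values
counts = [0, 0]

for _ in range(2000):
    if random.random() < 0.1:   # explore 10% of the time
        action = random.randrange(2)
    else:                       # otherwise exploit the best current estimate
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # Incremental mean update of the chosen action's value estimate.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(range(2), key=lambda a: estimates[a]))  # the agent favors action 1
```

No human labels any example here; the reward signal alone shapes the behavior, which is what separates (4) from (1).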

In 2017, most of AI's economic value came from supervised learning, with value dropping off rapidly down the list toward reinforcement learning.

Andrew Ng led search engine work at Google and Baidu. He could build the software for a great search engine with a small team. However, the established players had data assets, and he could not make a competitive search service without access to those data assets.

His goal for businesses is to create a virtuous loop:

1. Get a critical level of data.
2. Make a useful product.
3. Gather users.

The users generate more data, the data enables improved or new products, and the better products attract more users.
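A toy simulation makes the compounding nature of this loop visible. Every coefficient below is invented; the only point is the feedback structure: users generate data, data improves the product, and the improved product attracts users at an accelerating rate.

```python
users, data = 1000.0, 10_000.0   # hypothetical starting point
yearly_users = []

for year in range(5):
    data += users * 50            # 3 -> 1: users generate more data
    quality = 1.0 + data / 1e6    # 1 -> 2: more data, a better product
    users *= quality              # 2 -> 3: a better product, more users
    yearly_users.append(users)
    print(f"year {year}: users={users:,.0f}, product quality={quality:.2f}")
```

The yearly user gains grow each cycle, which is why incumbents already sitting on large data assets are so hard to displace.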

This means that today's narrow AIs and software platforms have been incorporated into, and embodied by, global companies.

True internet companies have short cycle times and have organized their companies around AI, software and business.

Google had great AI, great software and great software developers, yet it was not able to overcome Facebook in social media.

There were emerging social media platforms that could have become competitive with Facebook. WhatsApp and Instagram had reach comparable to the main Facebook service. Facebook identified this threat by monitoring activity across the internet and on phones, and then bought those companies before they achieved the ability to displace or threaten Facebook in social media.

Tesla is leveraging driving data from over 1 million cars and has over 30 billion miles of driving data to train its self-driving AI. The Google spinout Waymo has 30 million real-world driving miles plus billions of simulated driving miles. Waymo's financial valuation dropped from $200 billion in 2019 to about $30 billion recently.

Tesla has created an organization around iteratively improving all aspects of the manufacturing process, the factory and the machines that build the machines. Tesla is ahead with batteries and the drive train that converts battery power into work in the car.

Tesla has created a profitable financial engine to power their iterative improvement of factories and manufacturing and to accelerate their improvement of their self-driving AI.

Tesla makes its own chips for self-driving in the car and the chips for its AI training supercomputer.

The legacy automakers have more products and more users of their cars, but their products do not gather useful data, and the data that is gathered has not been integrated into processes for developing superior AI products.

Apple has a proprietary smartphone chip that contains critical features making its smartphones and tablets easier to use. This has not been successfully replicated in South Korean or Chinese Android smartphones.

Apple recognizes the threat, potential and value of a successful self-driving car and has created a project to build one. This has not yet resulted in an actual consumer product. Apple has fewer users and less data in this area. It would need to somehow convert users of its other products into users of its self-driving cars in order to leapfrog to critical levels of users and data.

Tesla appears to have mastered what will be the most valuable industries (travel and energy).

Tesla appears close to building factories that make 500,000 cars per year for $2 billion per factory. This should scale to factories making 1 million cars per year for $2 billion or less, then 2 million cars per year per factory, and then 4 million cars per year per factory.
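A back-of-the-envelope calculation shows what that scaling implies: if the per-factory cost stays near the article's $2 billion figure while annual capacity doubles at each step, the capital cost per car of yearly capacity halves each time.

```python
factory_cost = 2_000_000_000                              # ~$2B per factory
capacities = [500_000, 1_000_000, 2_000_000, 4_000_000]   # cars per year

for cap in capacities:
    per_car = factory_cost / cap
    print(f"{cap:>9,} cars/yr -> ${per_car:,.0f} of capital per car of annual capacity")
# -> $4,000, then $2,000, $1,000, $500
```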

SpaceX will accelerate and master mass production of rocket engines and rocket ships and satellites and satellite receivers.

China has walled off much of its market for its own AI companies. Alibaba, Tencent (with WeChat) and Baidu have a lock on their Chinese customers and those customers' data.

We have search that is beyond human level. We seem to be near a world where self-driving will be beyond human level. This will then give us “self-moving” robots (flying, ground-based and other modes of moving and operating). AI is getting better at speech, and there is work on speech comprehension and on AI composition. AI already has beyond-human-level reading in terms of volume.

What Would an AGI or Superior New AI Competitor Have to Do?

An AGI or AI competitor would need to create a viable data asset and product. It would have to gather users or truly replicate virtual users. It would have to iteratively improve faster than the existing Big Tech companies, or get merged into a Big Tech company. Its data, software, hardware and factories would have to iteratively improve much faster than those of the existing players. If it did not take over a current company, then it would have to gain the data assets some other way.

Big Tech is currently under essentially no regulatory control and is minimally influenced by humans outside the organizations themselves. Any AGI that took over or displaced them would have power greater than the current Big Tech organizations.

Any AGI that emerges will arrive in a world where technology is not under overall human control and is not built in a way that is friendly to humans outside the organization.

Current Big Tech AI is not “friendly AI” or “ethical AI”, although Big Tech claims to be ethical.

Friendly artificial intelligence (also friendly AI or FAI) refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

Written by Brian Wang,

22 thoughts on “Current Uncontrolled Technology Versus Uncontrollable AGI”

  1. Most people will probably think we have AGI before we actually do. E.g. neural nets can create believable fake photos which could be used to convince someone that such a person exists. GPT-3 can generate text that seems to make sense, but has no sense-making intelligence behind it that understands the meaning of the words in terms of what they mean to or for it.

    So we can probably expect phone calls that SEEM to be a friend or relative, telling us about some problem with which they hope we can help them by loaning them a bit of money in the form of gift cards or maybe cryptocurrency. Zoom calls from someone that looks and sounds and responds like the boss, telling us about a crisis that requires that we either rush into work at 3AM or give him the password to our network account at work – right now please or you're fired. Etc.

  2. At some point AI should be good enough to completely organize and run a corporation.

    Someone will use AI to create an app ("UCorp") that lets anyone easily create their own corporation around some idea they think might be profitable. The AI would do nearly everything from running the owner through the necessary legal agreements, to getting start-up capital from an angel investor AI, to getting the product into manufacturing and marketing and distribution. (Analogous to but far beyond services that will set up a professional looking business website for you.)

    Probably nearly all of those corporations fail – but the creator of the UCorp app will make a killing…

  3. Thought about in that fashion, everyone working for a corporation is treated as a chess playing midget inside a mechanical turk.

    Many have already been 'Turked' in that sense. And I suppose some of those in your sense of the word.

  4. What's new on the internet is that the irrational and inept can easily find – are actively directed to – the relatively small number of similarly irrational and inept people.
    So they can get together and plan to charge the capitol building; or organize to convince the Portland mayor that they are qualified and morally justified to take over a chunk of the city. And then get taken seriously by 'professional' media (whether as devils or saviors).

  5. So is it too much censorship by government and big corporations and control by the mainstream media? Or too much uncontrolled information with people saying whatever they like without checking with their priest/mullah/commissar/local-opinion-review-board first?

    It can't be both. They are mutually exclusive.

    You will get complaints about both of course. From the other side.

  6. Yes, but that doesn't fit with the story of someone holding a vast store of old money that is made worthless by the singularity. Zimbabwean billionaires fits the story.

  7. The comment system was not supposed to be impacted. The developer ported the site and was fixing speed problems and other errors. I have requested repair of the comment system.
    Sorry for the inconvenience.

  8. It was supposed to be a straight WordPress port to a cloud hosting service, with a cleanup of speed problems and some other technical errors. I will contact the comment hosting service and the developer to correct the comment system.

  9. Society is falling apart because the idiot masses are saying stupid stuff on FB and assorted blogs online instead of offline like they did in the old days? Really?

    There is nothing really new here, except maybe people are more informed about the world than they used to be in past decades.
    The irrational and the inept have always infested society and always will.

  10. I think you completely missed the point Brian. The internet is literally a mess. It's not just big tech it's the internet, there is just no regulation and no good way to verify if the few ones in place are being fulfilled. Bots everywhere making comments and liking, narratives being pushed, manipulation of truth, lies everywhere, CENSORSHIP. Society is literally falling apart because of the unregulated social media, news websites, mainstream media. The new wars are fought with information, every company is at war to make more money, the consumers are the target. Armies are fighting wars on the internet…

  11. "Also, not liking the comment section changes."

    That, a thousand times. I don't need a new window popping up the moment I try to see the rest of a comment, or reply.

    Worse, when I try to log in, it just opens a "conversation module", and I have to log in from that.

    Seriously, if you're going to display comments in the main window, and even a box you supposedly can type a comment into, don't divert into a new window. It's stupidly redundant.

    I love Brian's content, but the site keeps getting loaded down with pointless features that usually malfunction. What's wrong with a nice, clean, site, that just gets the job done?

  12. The industrial age brought physical automation with it (which is still improving, but is no longer a new concept). AI (a very poorly defined term) is bringing about cognitive automation. Some worry that this leaves people out of the loop. Any one who has ever taught students or managed workers will tell you this has to be wrong. To get things done also requires motivation.

    I have difficulty seeing how an AGI is going to develop this last part. It's not going to have simulated glands. No sane person is going to put random number generators in them for the express purpose of coming up with their own wants and desires.

    It's not a duality (brawn and brains), it's a trinity: machines provide the brawn, AIs provide the brains, and humans provide the motivation. I believe it would be more realistic to consider artilects (AGI, essentially) as something like genies, that do their master's bidding, performing tasks that the master does not want to undertake personally, or cannot do nearly as fast or as efficiently, or doesn't even know how to do. They would then have no reason to do anything but sit and wait for more instructions if they have completed all that they were instructed to do.

    The scary part is not the genies, but the masters that control them. Some will likely be incompetent to do so, while others just won't be nice people. Like a lot of people with power now, but even more so.

  13. AI doesn't need to be sentient to be a nuisance.

    Currently we already have pretty good natural language parsers and generators. GPT-3 can do all kinds of surprising things using natural language input (e.g. write intelligible papers from a few sentences, write programs in Python from a description, etc). That is, we already have fairly conversational AI agents capable of producing endless amounts of mostly intelligible and correct speech.

    And because of that, the formerly open group promoting GPT-3 transitioned to be an API provider (you pay a subscription and you can use the bots), in order to control and easily cut any abuse of it. The open source AI dream is dead.

    It's easy to see many abuses are indeed possible with our still dumb parrot bots: political defamation campaigns, cabals lynching people on social networks, spam bots, phishing bots, etc.

    Also, not liking the comment section changes.

  14. The Global Economy IS The Unfriendly Artificial General Intelligence.

    Think about it:

    Capture of positive network effect externalities, inherent to any regime of property rights, centralizes power. Power corrupts — all the more as power distance increases from local political economics to global political economics: Those holding power are always incentivized to centralize so as to distance themselves from accountability for the exercise of political economic power. People are _not_ immune to incentives, folks. I know, I know… this seems to be a "conspiracy theory" but it's only a "conspiracy" in the sense of the Latin root: "Breathing Together" as a globally emergent organism that IS The Unfriendly Artificial General Intelligence.

    Its motto isn't so much "You Will Be Assimilated" as "You Will Be Turked".

  15. Those so-called experts are always wrong with their predictions and timelines, and they underestimate the rate of progress.
    They don't understand the exponential growth of tech and science. Few of us really get it.
    These 2040-60 predictions would be, I guess, more or less correct if progress kept moving at the speed it had in the year those predictions were made,
    but the pace of progress is accelerating each year. More progress will happen between 2021-2023 (36 months) than happened between 2014-2020, 2000-2013, 1970-1999 or 1900-1969.
    More progress will happen between 2024 – June 2025(18 months) than between 2021-2023.
    Going even further… July 2025 – March 2026(only 9 months) will be equivalent of 2014-2020 or 2021-2023 or 2024-June 2025 worth of progress…

    Starting from 2000-2013 math goes like this
    14 years/ 7 years/ 3 years/ 1,5 years/ 9months/ 4 months/ 2 months/ 1 month/ few weeks
    In late 2020's one month or just few weeks of progress will equal to 2000-2013/ 2014-2020/ 2021-2023 worth of progress.
    This is the power of exponential growth + the fact that the rate of acceleration
    is itself accelerating; we have exponential growth in the rate of exponential growth.

    We will have AGI before 2030. Singularity 2028-2030

  16. Listing market values without a denominator, (What share of the market is that?) is not good practice. It doesn't provide the same sort of information as market share.

  17. Perhaps, I don't know for sure, it is a reference to the people in Nigeria who are rich but need a small account set up to transfer the money to. They are very nice. They send you emails. A stock joke in the US.

  18. I'm sorry, I can't do that Dave. Of course there is a danger we might not be able to control it.

  19. I'd be curious to hear what a survey of AI experts believe about the creation of AGI.

    I recall reading an article several years ago that said the median guess of a panel of AI experts was that AGI would emerge sometime between 2040 and 2060. At the time this struck me as bullish (assuming you're the kind of person rooting for AGI). Today, I'm not sure what to think. It feels as though news reports on the matter are either breathlessly optimistic (just years away!) or snidely cynical (it'll never happen!).

  20. Most people naturally expect the current economic and value system to survive the emergence of AGI. That's not necessarily the case, it would depend on the nature of the AGI. Is it just software, does it run on exotic unobtainium hardware or hardware the size of a building etc. If it's just software that can run on commodity hardware affordable to individuals or small groups, the holders of accumulated billions/trillions in legacy wealth could be the new Nigerian billionaires.

    In the near term, AGI isn't necessarily going to be an independent agent with moral claims to personhood. It could just be a function call, an app, a tool like a pocket calculator, etc.
    All current ML R&D is producing that kind of architecture; the current pathway is more likely than not to lead to this kind of AGI:

    Reframing Superintelligence by Eric Drexler



Comments are closed.