Super General Intelligence Versus A Non-Fragile Civilization

Rapid increases in the capabilities of Artificial Intelligence have heightened fears of the risks of uncontrollable AI.

There has been a call to pause AI experiments. However, many of those who signed the letter, like Elon Musk, knew there was no chance a pause would actually be implemented. They merely wanted to be on the record as supporting one.

Nick Bostrom wrote the most comprehensive analysis of superintelligence scenarios, dangers and strategies (Superintelligence: Paths, Dangers, Strategies). That analysis was written a few years ago, and the AI risk situation has since simplified. We do not need to consider Whole Brain Emulation scenarios.

The main risk scenario is that improving Generative AI, combined with other types of Artificial Intelligence, could reach superhuman intelligence. It is clear that Generative AI (ChatGPT, Bard, PaLM 2, GPT-4 and later versions of Large Language Models) is the source of significantly broad and capable human and near-human level artificial intelligence. These systems can also achieve superhuman intelligence in various domains, and they are rapidly improving.

Many experts are now debating AI safety. My linked article collects many videos in which AI experts debate the risks.

David Orban and Roman V. Yampolskiy go over some of the current facts.

* Historically, we have been bad at cybersecurity. All of our code libraries have undiscovered bugs and errors.
* We have layers of programs that are flawed.

Yampolskiy makes the case, and offers proofs, that an AGI will be uncontrollable and unpredictable.

However, there are limits to the capability of any superintelligence at any point in time and in various domains. If those limits are very high and vastly beyond humanity, then an uncontrollable and unpredictable super-AGI would leave humanity at its mercy. We would have to hope that the super-AGI chooses to be good.

If intelligent systems are limited and controllable by humanity, then we have to maximize the benefits of systems that are powerful but not existential risks.

I would argue that we need to work harder on the intermediate cases. We do not know whether we can make “superAGI”, and if we can, how powerful it will be. We need to pursue multiple paths. AI researchers still need to work on improving the control of AI and AGI. We should also work under the assumption that we can make things better by improving the robustness of human civilization.

Earthquakes and other disasters of the same scale (e.g., an 8.0 earthquake) kill more people in poorer and less developed countries. Those places have more poorly engineered buildings and lack good emergency response.

We need to engineer a civilization with more passive toughness and survivability.

We need to work on expanding the upper bound of systems that are controllable by humanity.

We need to make civilization less fragile and tougher.

I think we should proliferate useful narrow AI and software systems that improve computer security and human security as rapidly as possible.

Consider chess, where AI has been far better than humans for decades. Imagine that instead of chess, there were an AI that was a general strategizer for more important and useful goals, such as competing in business or some other highly valuable competition. We would want people enhanced with what are believed or known to be “safe” precursor AI. We would use the narrow super-AI tools to enhance the capabilities and security of each person. It would be like distributing the equivalent of rifles and body armor to the citizenry.

We would also be trying to harden key infrastructure and to make certain critical services distributed.

On this path of civilization robustness, we need to work under the assumption that best efforts will have time to improve the situation. There should be triage of the problems and weaknesses, and selection of the solutions that can be created and deployed fastest.
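As a minimal sketch of such triage (all weakness names, impact scores, and deployment times below are illustrative assumptions, not figures from the article), one simple approach is to rank weaknesses by impact per unit of deployment time, so high-impact fixes that can ship quickly come first:

```python
# Hypothetical triage sketch: rank civilization weaknesses by
# impact-per-month-of-deployment, so faster, higher-impact fixes rank first.
# All entries and scores are illustrative assumptions.

def triage(problems):
    """Sort problems by impact / months_to_deploy, highest ratio first."""
    return sorted(problems,
                  key=lambda p: p["impact"] / p["months_to_deploy"],
                  reverse=True)

weaknesses = [
    {"name": "unpatched code libraries",   "impact": 9, "months_to_deploy": 6},
    {"name": "centralized power grid",     "impact": 8, "months_to_deploy": 24},
    {"name": "single-source supply chain", "impact": 7, "months_to_deploy": 12},
]

for p in triage(weaknesses):
    # print each weakness with its priority ratio
    print(p["name"], round(p["impact"] / p["months_to_deploy"], 2))
```

Real triage would of course use evidence-based estimates rather than guessed scores; the point is only that "deployable fastest" belongs in the ranking alongside impact.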

Improving civilization robustness will be good even if near-term AI is relatively weak. “Relatively weak” means AI development falls short of AGI, or the AGI we get is not significantly better than humans.

18 thoughts on “Super General Intelligence Versus A Non-Fragile Civilization”

  1. One of the uses of AI should be to write code one 0 and 1 at a time…a hack-proof operating system to control regular equipment.

    This way, if later, more sophisticated AIs try to hack into things, they cannot.

    Also, all companies should be mandated to use 1950s-era tech as a backup, just in case.

  2. Re: “The increase in the capabilities of Artificial Intelligence has increased fears of the risks of uncontrollable AI. There has been a call to pause AI experiments.”

    The FAKE narrative (ie propaganda) nearly everyone, including “alternative news” sources, has been spreading is that the big threat is that AI might achieve control over humans.

    The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that’s long been ongoing in front of everyone’s “awake” (=sleeping, dumb) nose …. http://www.CovidTruthBeKnown.com (or https://www.rolf-hefti.com/covid-19-coronavirus.html)

    Like with every criminal, inhumane, self-concerned agenda of theirs, the psychopaths-in-control sell and propagandize AI to the timelessly foolish (=”awake”) public with total lies, such as AI being the benign means to connect, unite, transform, benefit, and save humanity.

    The proof is in the pudding… ask yourself, “how is the hacking of the planet going so far? Has it increased or crushed personal freedom?”

    Since many of the same criminal establishment “expert” psychopaths, such as Musk (https://archive.ph/9ZNsL) and Harari (Harari is the psychopath working for Schwab’s WEF [https://www.bitchute.com/video/Alhj4UwNWp2m]) or Geoffrey Hinton, the “godfather of AI”, who have for many years helped develop, promote, and invest in AI, now suddenly and supposedly have a change of heart and warn the public about AI, it’s clear their call for a temporary AI ban and/or its regulation is just a manipulative tactic to misdirect and deceive the public, once again.

    This scheme is part of the Hegelian Dialectic in action: problem-reaction-solution.

    This “warning about AI” campaign is meant to raise public fear/hype panic about an alleged big “PROBLEM” (these psychopaths helped to create in the first place!) so the public demands (REACTION) the governments regulate and control this technology =they provide the “SOLUTION” FOR THEIR OWN INTERESTS AND AGENDAS… because… all governments are owned and controlled by the leading psychopaths-in-power (see CovidTruthBeKnown.com).

    What a convenient self-serving trickery … of the ever foolish public.

    “AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy.” —Unknown

    “Almost all AI systems today learn when and what their human designers or users want.” —Ali Minai, Ph.D., American Professor of Computer Science, 2023

    The ruling criminals pulled off the Covid Scam globally via its WHO institution because almost all nations belong to it. Sign the declaration at https://sovereigntycoalition.org to exit the WHO

  3. Kind of a hyper-simplistic discussion.
    The very idea that a super-AGI (that was free of its programmer’s and other functionality biases) would have the same foibles, flaws, and anthropomorphic (or animal) sensibilities of individual humans, a given culture, or any civilization generalities is ludicrous. There is no reason that such attributes as self-preservation, alpha dominance seeking, maximizing resource use for its own solutions, or anything like that are intrinsic to intelligence. Sensing, storing data, analysing, reasoning (or its AI equivalent), and providing conclusions are the only things inherent in intelligence – like a magic 8 ball with rationality. Coming to solutions of increasing complexity in better and better time constraints are the main feature – invaluable. And they will never have a soul or be conscious or exercise true creativity – which is not to be considered a drawback to creating and understanding (form solutions from discontinuous data) better (and possibly living richer and more fulfilling lives) than we ever will.
    Society-wide risk analysis/ existentialism, another nonsensical and navel-gazing undertaking, is also very over-hyped and unquantifiable – to no one’s benefit. The types of risks we undertake as individuals, families, groups, regions, and nations are very different and typically resolve at a very immediate and proximate level to our lifestyle and politics – which also tends to normalize over time. Ask yourself what kind of world you would not want to bring a child into: a world where electricity was no longer possible, a world without accessible metals, with unavoidable famines and outbreaks? The very idea that huge populations live on fault lines, under volcanoes, in regions with several months without direct sunlight more than a few hours a day, and in cultures where certain genders cannot drive or own property is telling of what we think generally of civilization’s existence. Ask instead, as individuals and groups, what we aspire to, follow with minimal rationality, and seek that has very little to do with basic day-to-day needs. What is possible and how soon can we get (or live vicariously through) that thing – that’s what drives rich-world civilization and quantifies risk.

    • Agreed – except that it is most likely that every AI, AGI, etc., will be a tool and exhibit extreme bias on behalf of its human/ business/ government user. Witness an AI as a director (perhaps not legally, but functionally) of a major company in direct competition with another in a crucial industry – semiconductors – optimising supply, talent, pricing, legals… As with financial companies acquiring the fastest possible trading power (and analysis) to get rich in fractions of a second (I’m not sure if this is still prevalent). Also, the latest LLMs show advancing bargaining abilities:
      https://arxiv.org/abs/2305.10142
      But, like anything – this will normalize as most parties get a relatively advanced version and a new equilibrium forms – but until then… and then the government’s own resources…

    • Please, oh please, Musk: smuggle a Boring Co. tool, optimized for size and weight, aboard a Starship to the moon and create an extensive burrow there, undiscovered over the decades, so that we as a civilization can visit en masse and possibly escape any cataclysmic earth risk. A hive of a few million as an ARK, with knowledge storage and DNA samples, would be sufficient to ensure civilization’s continuity (at least until we can expand such outside of cislunar space)

      • That would be a major bucket list item: get technology to extend healthy life a few decades -and- a trip to the moon (even if mostly underground in a burrow) for less than $50k/night (2023) — in the next 20 years.

  4. I am currently working on a robot project that is proving to be quite difficult. We are using robots to automatically manufacture customized shipping crates. I will be lucky if I can make this thing (three robots and various handling carts) make crates with the throughput and reliability of the crew at the factory. I can tell you Moravec’s Paradox is in no danger of being breached anytime soon. If and when Moravec’s Paradox is breached, then AI MIGHT become a threat to us. For now, I can assure you there is absolutely zero chance of AI being any kind of threat to us in the foreseeable future.

  5. There are all sorts of reasons that have nothing to do with AI for wanting to harden civilization, make it more resilient. But all the trends are in the opposite direction, squeezing out that last fraction of a percent of revenue in the optimistic case, even if it means things fall to pieces if something goes wrong.

    For instance, spreading supply chains across the globe to capture increased efficiencies, even if it means a war someplace shuts things down. Reducing inventories, (What we in the automotive industry have come to call “just too late manufacturing”.) to the bare minimum needed to operate if your delivery truck doesn’t get caught in a traffic jam.

    In fact, we’re seeing ideologically motivated anti-resilience, such as replacing reliable baseline power sources with wind and solar, that don’t even make financial sense if you assume everything does go right.

    So? I don’t see it happening, sadly.

    • I agree with this comment. It reminds me of how historians debated causes for the Bronze Age collapse of established civilizations. They started claiming it was all caused by the “Sea People” invasions, then that it was plague, earthquakes, volcanoes, drought, “systems collapse” (caused by things becoming too complex, because somehow the Bronze Age was more complex than any other time in history 🙄). The powers of the day had dealt with things like these long before and survived. Eventually they hit on an explanation that makes sense. The 4 major powers of the day started responding to all these things by centralizing power with the leaders and their courts. The Hittites would see the Pharaohs centralizing and mistake it for strength and power, so they would centralize more. The Pharaohs would see the Hittites doing it, get nervous about how powerful they were getting, and centralize more. Eventually things became so centralized that none of these minor problems—famine, invasion and earthquakes—could be dealt with efficiently and in a timely manner.

      People think of globalization as “decentralization” but the way it is being implemented is part of a process of centralization. Fewer companies cornering markets, centering more of their supply chains in fewer, more authoritarian nations to get around expanding regulatory hurdles at home.

      When you hear Western politicians like Justin Trudeau saying how much they admire China because they can get things done without the messy democratic process, you are hearing the Hittites admiring the power of the Pharaoh’s court.

    • Solar and wind are actually far more resilient than fossil power generation, they are just more variable; there’s a difference. Fossil fuels are dependent on resource pricing, supply chains, personnel. When society starts to break down, the traditional power plants will shut down almost immediately, and solar and wind will keep on producing.

      • No, not really. For instance, both solar panels and windmills are out there exposed to the weather, and subject to being damaged by it. Your typical nuclear plant could shrug off a direct hit by a tornado, and other fossil fuel plants, while not similarly over-built, will typically keep chugging along right through bad weather.

        “When society starts to break down, the traditional power plants will shut down almost immediately, and solar and wind will keep on producing.”

        For a while, sure. If I were a survivalist, sure, I’d want solar power. But if your goal is to keep society from breaking down in the first place, rather than just being personally comfortable if it does, solar is a terrible choice, and wind is almost incalculably worse.

        For one thing, solar is every bit as dependent on supply chains, in the long run. Yeah, it will continue producing power on sunny days for a while, while producing negligible amounts on cloudy days, and none at all at night, but you won’t be able to replace panels as they age out. And how are you supposed to run a civilization if you’ve made yourself dependent on power sources that only show up when they feel like it?

        So, if your goal here is reliable power that just keeps coming through natural disaster and economic disruption, nuclear plants are your best option. Weather resistant, and can go years without being refueled.

        But if your goal is to hole up after society collapses, and live off dehydrated foods in comfort until you have some medical problem that kills you? Yeah, solar is the way to go.

        • Not sure I agree.
          If we are talking about a civilization that has been drastically reduced, but not destroyed/ ‘primitived-to-pre-steam’ –the most likely cataclysmic outcome– it seems that simplifying the supply chains and on-shoring them would mean choosing power supply tech appropriate to a simpler (and for some reason, carbon-free) world – solar heating and earlier photovoltaics seem easier. This, of course, is a world without nuclear of any kind, vastly reduced but not removed power distribution, a world without jets, rockets, and most ostentatious personal vehicles, probably without most high-rises that need vertical transportation and cities over a large fraction of a million. This is an early 20th century world that was very civilized and livable but refused nuclear, fossil fuels, and quick world travel – the best way to manage an under-1B world. I believe that solar would be dominant and contribute well to modernity and growth in such a world – though few would live above the 40th parallel north, I presume.

          • Hard to imagine a growing, modern, worthwhile world without access to nuclear or fossil fuels — and with vastly reduced and localized supply chains. Even with mid-21st century tech, perhaps hydrogen, solar, wind, geo, and simpler local storage than current battery tech — of course this is a western Europe dream. I suppose if hydrogen could be created in quantities a few times lower, and at efficiencies at least a few times higher than current fuels – hydrogen could replace natural gas, oil?? and at populations 10x lower but above average survivability of educated/ skilled humans…

        • Pffft. Solar-thermal electric has great potential pretty much everywhere. >1kW/m² instead of 200W intermittent as with photovoltaic. It’s simple: transparent silica aerogel (from rice husks etc) can heat up to 265C without need for vacuum [1]. Works fine in the winter, even on cloudy days, just slower. Use the steam (hot or cold via vacuum) to spin a Tesla turbine to charge FeI2 or ZnI2 batteries etc and bank extra energy in a huge sand battery up to 1200C to use as needed for say reducing titanium etc. ~60MWh/hectare/day. ~20 hectares solar thermal for 50MW continuous 24/7. 😉

          [1] Harnessing Heat Beyond 200 °C from Unconcentrated Sunlight with Nonevacuated Transparent Aerogels https://pubmed.ncbi.nlm.nih.gov/31199125/

            • Oh right, good idea, perhaps the sand or salt in aircrete tanks can be heated up directly with the aerogel slabs. Mobile applications also interesting.
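The land-use figures in the solar-thermal comment above can be sanity-checked with simple arithmetic. Note that the ~60 MWh/hectare/day yield is the commenter's own estimate, assumed here rather than verified:

```python
# Check the comment's claim: ~20 hectares of solar thermal for
# 50 MW continuous, given an assumed yield of ~60 MWh/hectare/day.
# The yield figure is the commenter's assumption, not a measured value.

target_mw = 50                      # desired continuous output, MW
yield_mwh_per_ha_day = 60           # assumed daily yield per hectare, MWh

daily_demand_mwh = target_mw * 24   # MWh needed per day for 24/7 output
hectares_needed = daily_demand_mwh / yield_mwh_per_ha_day

print(daily_demand_mwh, hectares_needed)  # 1200 MWh/day over 20.0 hectares
```

The arithmetic is internally consistent with the comment's "~20 hectares" estimate; whether the assumed per-hectare yield is achievable is a separate engineering question.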
