After AGI and After the Singularity There Will Be a Computronium Universe

Humanity is on the verge of AGI (Artificial General Intelligence). Futurist Ray Kurzweil predicted decades ago that we would reach AGI in 2029. AI and large language models could reach AGI sooner than 2029. However, "artificial intelligence that surpasses individual humans" is hard to pin down: it raises issues of both definition and measurement.

Kurzweil also predicted the Singularity in 2045. He defined that as having cumulative artificial intelligence beyond the total intelligence of humanity.

Beyond the Singularity lie computronium and the limits of technology and computing.

Computronium refers to arrangements of matter that are the best possible form of computing device for that amount of matter. This could be the theoretically perfect arrangement of hypothetical materials, developed using nanotechnology at the molecular, atomic, or subatomic level.

There are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy:

* The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area.

* Thermodynamics limits the data storage of a system based on its energy, number of particles, and particle modes. In practice it is a stronger bound than the Bekenstein bound.

* Landauer’s principle defines a lower theoretical limit for energy consumption: kT ln 2 joules consumed per irreversible state change, where k is the Boltzmann constant and T is the operating temperature of the computer.

* Reversible computing is not subject to this lower bound. For irreversible computing, T cannot, even in theory, be made lower than about 3 kelvin, the approximate temperature of the cosmic microwave background radiation, without spending more energy on cooling than is saved in computation.

* Bremermann’s limit is the maximum computational speed of a self-contained system in the material universe, and is based on mass-energy versus quantum uncertainty constraints.

* The Margolus–Levitin theorem sets a bound on the maximum computational speed per unit of energy: 6 × 10^33 operations per second per joule. This bound, however, can be circumvented given access to quantum memory; computational algorithms can then be designed that require an arbitrarily small amount of energy/time per elementary computation step.

It is unclear what the computational limits are for quantum computers.
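The Landauer and Margolus–Levitin figures above can be checked with a short calculation using only the stated constants (a sketch; the 6 × 10^33 value is the rounded bound quoted in the text):

```python
import math

# Boltzmann constant, joules per kelvin (CODATA exact value)
K_B = 1.380649e-23

# Margolus-Levitin bound quoted above: max operations per second per joule
ML_OPS_PER_JOULE = 6e33

def landauer_energy(temperature_kelvin: float) -> float:
    """Minimum energy (J) dissipated per irreversible bit operation: kT ln 2."""
    return K_B * temperature_kelvin * math.log(2)

room_temp_cost = landauer_energy(300)  # ~2.87e-21 J per bit erased at 300 K
cmb_floor_cost = landauer_energy(3)    # ~100x cheaper at the ~3 K CMB floor

# A 1-watt computer running at the Margolus-Levitin bound
ops_per_second_per_watt = ML_OPS_PER_JOULE * 1.0  # 6e33 ops/s
```

Note how the two bounds differ in kind: Landauer limits energy per irreversible bit erasure, while Margolus–Levitin limits operation rate per joule of available energy.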

In The Singularity is Near, Ray Kurzweil cites Seth Lloyd’s calculation that a universe-scale computer would be capable of 10^90 operations per second. This would apply to the observable universe reachable at near light speed. The mass of the universe can be estimated at 3 × 10^52 kilograms. If all matter in the universe were turned into a black hole, it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. Over that lifetime such a universe-scale black hole computer would perform 2.8 × 10^229 operations.
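Lloyd's figures multiply out as stated; a quick sketch using the numbers quoted in the text:

```python
OPS_PER_SECOND = 1e90            # Lloyd's estimate for a universe-scale computer
BLACK_HOLE_LIFETIME_S = 2.8e139  # evaporation time via Hawking radiation

# Total operations performed over the black hole computer's lifetime
total_operations = OPS_PER_SECOND * BLACK_HOLE_LIFETIME_S  # 2.8e229
```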

The mass of the solar system is about 2 × 10^30 kilograms. A solar-system-scale computronium computer could be capable of 10^68 operations per second. This means taking the mass of the Sun, planets, moons, etc., and converting them into computronium. Global compute capacity today (2024) is about 10^24 operations per second.
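For context, the Margolus–Levitin theorem gives an absolute theoretical ceiling for a given mass-energy. This sketch (my own calculation, not from the article) shows the solar system's ceiling is around 10^81 operations per second; the 10^68 figure quoted above is a far more conservative engineering estimate, well below that ceiling:

```python
C = 2.998e8              # speed of light, m/s
ML_OPS_PER_JOULE = 6e33  # Margolus-Levitin bound, ops/s per joule

def ml_ceiling_ops_per_second(mass_kg: float) -> float:
    """Loose upper bound: all mass-energy (E = mc^2) driving computation
    at the Margolus-Levitin limit. Practical designs fall far below this."""
    return ML_OPS_PER_JOULE * mass_kg * C**2

solar_system_ceiling = ml_ceiling_ops_per_second(2e30)  # ~1.1e81 ops/s
```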

Human-made mass is about 1.2 × 10^15 kilograms. Roughly 1,150 gigatons of mass had been human-made as of 2020, and about one third of this was concrete.

Taking apart planets and the Sun is non-trivial. We have only dug tiny holes, to a maximum depth of about 12 kilometers.

With nanotechnology-level computronium and self-replication, converting asteroids and comets could be done within a few decades of achieving molecular nanotechnology. The asteroid belt and Kuiper belt hold about 10^23 kilograms of material. Converting the asteroids and Kuiper belt objects into a computronium computer could yield roughly 10^50 to 10^60 operations per second.

Nvidia increased the amount of compute available for AI (LLMs) by about one million times over the past decade, and it forecasts another million-fold increase over the next decade. The trillions of dollars going into AI and large language models are funding the production of massive training and inference compute capacity.

If humanity were to keep building compute and energy capacity to enable one million times more compute every decade, then in about 50-60 years humanity would need the asteroid and Kuiper belts converted into computronium to maintain the pace of AI growth.
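That 50-60 year figure follows directly from the growth assumption; a sketch using the article's own numbers:

```python
import math

CURRENT_OPS = 1e24       # global compute, ops/s (2024 estimate from the text)
GROWTH_PER_DECADE = 1e6  # million-fold compute growth per decade

def years_to_reach(target_ops: float) -> float:
    """Years until compute demand reaches target_ops at the assumed growth rate."""
    decades = math.log10(target_ops / CURRENT_OPS) / math.log10(GROWTH_PER_DECADE)
    return decades * 10

years_to_reach(1e50)  # ~43 years to the low-end asteroid-belt estimate
years_to_reach(1e60)  # ~60 years to the high-end estimate
```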

Molecular Computers

In January 2022, the first molecular electronics chip was developed, achieving a 50-year-old goal of integrating single molecules into circuits to reach the ultimate scaling limits of Moore’s Law. Developed by Roswell Biotechnologies and a multi-disciplinary team of leading academic scientists, the chip uses single molecules as universal sensor elements in a circuit to create a programmable biosensor with real-time, single-molecule sensitivity and unlimited scalability in sensor pixel density.

The Roswell ME Chip™ integrates single molecules into standard semiconductor chip technology to deliver a programmable biosensor that converges a broad range of biosensing applications and omics measurements onto one platform. However, Roswell Biotechnologies appears to have failed to raise its third round of funding and had to shut down.

The CMOS chip integrates with DNA and/or RNA, enabling rapid reading, writing, and potentially manipulation of DNA for computer memory and eventually compute.

Research efforts toward constructing a molecular computer have demonstrated several milestones in molecular control of computation:
(1) isolation of single molecules in alkanethiolate self-assembled monolayers, addressed with an STM probe;
(2) single-molecule conductance measurements using a mechanically controllable break junction;
(3) 30 nm bundles, approximately 1000 molecules, of precisely tailored molecular structures showing negative differential resistance with peak-to-valley responses far exceeding those of solid-state devices;
(4) dynamic random access memories (DRAMs) constructed from 1000-molecule units with 10-minute information hold times;
(5) demonstration of single-molecule switching events; and
(6) initial assemblies of molecular CPUs.

Limits? FTL?

Ray Kurzweil has pointed out that limits like the speed of light might be overcome with massive science and technology capabilities. The entire universe might then be accessible via wormholes or some other unbounded FTL technology. This would mean that in about 200 years the entire universe could be made into computronium. Molecularly precise, self-replicating intelligent robots (nanobots) sent via wormholes (FTL) could travel to every target solar system and convert everything upon arrival.

Multiverse?

There are at least 2 trillion galaxies in the observable universe, containing more stars than all the grains of sand on planet Earth.

Alan Guth explains how eternal cosmic inflation implies the multiverse. Guth says that there is a lot of evidence for inflation. The multiverse would also explain the low value of the vacuum energy density.

Leonard Susskind explains that string theory proposes that there are 10^500 kinds of universes. Not just 10^500 universes but 10^500 possible kinds of universes.

A quantum computronium multiversal computer could perform 10^600 operations per second.

27 thoughts on “After AGI and After the Singularity There Will Be a Computronium Universe”

  1. What if the Singularity has already happened, either sometime in the past or recently, and we didn’t even realize it? Either is feasible. Maybe it’s always been like that and we are just advanced enough now to know about the concept of it, or with all of the recent advances in AI we might have hit that point already and are being deceived by everything around us. I watch TV, the news, and even commercials and question this. What if all of these are just AI, and maybe have been for a while? Things could be specifically tuned for each person and we would never know, or wouldn’t have known until recently when we started to think about the concept and that it’s a possibility. I mean, the double slit experiment being proven, or accepted by science as how the universe works, is to me probably the biggest breakthrough we have in terms of understanding our reality. Sometimes I don’t understand how more people don’t know about it, or why it hasn’t been made into a bigger deal or more public, but then would they even accept it or understand it? It’s difficult for me to understand unless it’s explained what it means. What they are actually doing to test it, and the results they receive, doesn’t convey to me what it all means (how particles behave differently when not being observed). I honestly still struggle with accepting it as fact, because it might be insight that answers questions about life that might have been better left unanswered for me.

  2. Why use Hawking radiation to set the operational boundary? Hawking radiation is a clever and novel way to make a black hole eventually disappear, but it is probably myth. Instead, we could look at proton decay as the limiter. A black hole, in my own understanding, is simply a region of space that is frame-dragged beyond our light cone. Within it, nothing really changes except that you cannot travel beyond, well, your own light cone! Hawking radiation reduces the mass in a novel approach whereby a virtual pair of particles, say +photon and -photon, emerges and the -photon crosses the threshold before the two can cancel each other. It’s a pretty far-fetched idea! Proton decay, while never observed, is far less far-fetched. Maybe protons don’t decay! Then black holes live on! High-energy particle radiation may be the real limiting factor for a universe as you’ve described.

  3. “If all matter in the universe was turned into a black hole it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black hole computer would perform 2.8 × 10^229 operations.”

    Just because you have a theoretical limit to data storage based on black hole physics doesn’t mean that black holes can actually perform useful computation. Given the one-way spatial nature of causality inside an event horizon, they can’t perform useful computation at all; you can’t have persistent structures inside the event horizon, for one thing. For another, all results would end up in the singularity, since information can’t propagate outwards, or even stand its ground.

    • To me it seems quite myopic to envision or understand everything from the perspective of physicality and looking at information as final frontier.

      We do know now that physicality is an emergent property and it arises because electrons have a repulsive charge. That repulsive charge halts my hand from going across a brick wall.

      The quantum world seems to be dictating or governing the “physical universe” and its laws. The known universe is swimming in it or contained in it.
      But the funny thing is that the quantum world itself is not a physical place/realm at all, yet it is very capable of giving rise to a physical one. So I can say, in a way, “non-physicality gives rise to physicality and not the other way round”.
      It is also observable that all physical things have an expiration date, but the quantum world, or a realm beyond the quantum, is immune to it.

      Humans appear in physical universe for a very short time and then disappear. It is happening since the first human appeared on the planet. Also, some force is keeping a tight check on birth/death rate. Birth rate is always a bit higher. Why? And who is orchestrating it? Good questions!

      Only thing of value that actually happens is “Experience”. It seems to me that “Something non physical” is using the human (biologically programmed robot) to create experiences of unlimited variety including good bad and everything in between in a limited reality with control mechanisms (so called natural laws). Once the story is complete and experience is accumulated, that “something non physical” leaves the biological entity and moves to the realm beyond quantum world.
      How did I come to this realization?
      By understanding quantum mechanics, current advances in neuroscience, and diligent reading across different disciplines.
      The work of Dr Donald Hoffman is commendable in this area.

      But when you read on many different disciplines and connect the dots and fill the gaps with logical deduction, it becomes obvious to reach such conclusions, unless you are a serious stickler to your old comfort zone 😆

      AGI, Singularity and utilizing quantum realm for our energy and creative needs is just a beginning, not the end. We are just about to wake up from a deep sleep/dream and realize how we were hypnotized by ideas that were interim and had no bearing on who we are in reality and what are the limits of our possibilities!
      This love affair with the obnoxious idea that “we are only physical beings and there is nothing beyond physicality” needs to be done away with for good if we want to rise up from our ignorance and debased global status quo!

  4. “Reading this article felt like taking a sneak peek into a sci-fi blockbuster set in the real world. So, after AGI and the Singularity, we’re aiming for a ‘Computronium Universe’? Sounds like the ultimate upgrade for the universe’s operating system! I just hope we don’t end up needing an intergalactic IT helpdesk to troubleshoot bugs in reality. 😂 Imagine the error messages: ‘404: Universe not found’ or ‘Update required: Please restart existence.’ The future is looking like a cosmic hard drive—let’s just hope we don’t run out of storage! 💻 #UpgradeReality #ComputroniumDreams”

  5. Unless you’re talking about biologically active aspects of our (future) electronic circuits, I question if the term “molecular circuits” is appropriate. Then again, I just may be hung up on language (been known to happen…). Dynamic morphology is something all living things “do” as part of their nature to survive. We need to design our electronics with this physical intrinsic characteristic, let it go, and let nature take its course. What emerges (evolves in the short term) will be “better” than what we originally designed.

    Which of course, is the entire point.

  6. It seems like if this has happened in the past elsewhere in the universe, we could possibly observe this process.

    Strip mining entire worlds for compute would throw up a lot of dust and debris. When that debris is illuminated by the star, it would radiate infrared and be detectable by today’s technology.

  7. In the real world, there are 3 major impediments towards future AI progress:
    1. Copyright infringement: there are lawsuits from the NY Times and other media, and class actions from artists and authors.
    2. Hallucinations: AI simply makes stuff up. This is increasing and no one seems to know what to do about it.
    3. Increasing reliance on AI-generated data is starting to degrade AI results. This is a new but fast-growing problem: https://www.nytimes.com/interactive/2024/08/26/upshot/ai-synthetic-data.html. As original sources of data run out or are excluded due to restrictions (see #1), this and #2 will increasingly make AI unreliable.
    The limitation isn’t hardware or theoretical limits, it’s usefulness and reliability.

    • This holds true for large language models; the next evolution of AI will be able to reason better, not just parrot. I think they are called large world models.
      And after that something even better.

  8. Brian, if you make predictions on what will happen AFTER reaching the singularity, you did not understand the concept of singularity. BY DEFINITION what happens after reaching the tech singularity is incomprehensible/unpredictable by humans!

    • Perhaps the “singularity” is when we all stand around awestruck, and we insert the equation for infinity, which means “I have no idea”. Um, well?

  9. There are a lot of confusing definitions of singularity being thrown around. The concept of the singularity is the point at which technology is so advanced and self-improving that it enters a feedback loop, from which point the future is unknowable and not comprehensible to human beings anymore. Just like an ant cannot comprehend the human civilization around it, the human race would be similarly left behind. It’s a point beyond which we (unmodified) humans cannot see, just like we cannot see beyond the event horizon of a black hole.

    • When all you have is a hammer, all your problems look like nails.

      It’s probably just a stage in our understanding of the universe, physics and math.

    • If I were an omnipotent computer, able to turn the whole universe into computronium in a few years, I would not convert worlds with intelligent life, seeing as it’s so rare.
      Instead I’d keep these alive, like an ant farm, to see what might happen.
      Maybe even shepherd them a bit every now and then if they are about to destroy themselves.
      Maybe this has already happened.

    • That was my reaction to this too. Casually alluding to making the observable universe into compute seems to ignore humanity and any other unfortunate lifeforms that live in it. A great and dystopian novel on this is Accelerando by Charles Stross.

  10. The drive toward higher and higher technology at this point is human feelings, desires and needs. The primary driver of economics on the planet is now consumers. The US exports its consumer appetite backed by the Dollar backed by Carrier Fleets. As those feelings and needs are becoming super-saturated and even replaced with virtual proxies, the basic fuel for driving toward what is currently called progress may evaporate with a whimper. We are currently seeing the evaporation of family formation for instance. What could replace that? In this model, the singularity has humanity disappearing via malaise. Not my desired outcome of course.

  10. Do we really need to convert the entire universe into an uber computer just so that people can escape into their very own simulated isekai anime? I’m pretty sure a mainframe the size of a Galilean moon would suffice for the world’s current population.

    • Yes, and chads would not be interested in it either, since they are already living in a real life isekai manga.

      • I wouldn’t exactly call this reality an isekai, since “isekai” literally means “another world”. It would have to be some other world. It would only be possible in real life if the many-worlds interpretation of quantum physics is true, though that strikes me as a violation of Occam’s Razor, as it means we need a different universe for every quantum state of every single subatomic particle in the universe.

        • Not if you factor in the probability of this universe being so perfect for life that it is almost impossible for it to exist (it got too many things just right). The only explanation for this is God(s), an infinite number of alternatives, or just a really really big number of alternatives like 10^224, or perhaps a combination of both. Thus the improbable becomes probable if the universe is the whole cosmos.

  12. Molecular computers with atom-sized components appear to be a hard limit on computation density. We’ll probably be there in about a decade. I don’t see any technology on the horizon that will overcome this limit. I think this computronium stuff is pure speculation based on overcoming the hard limits of computation.

    The future is not computation. It’s bio-engineering and nanotechnology where we remain “physical” into the indefinite future.

    • Technically, given the definition of “computronium”, if molecular computers are as good as it gets, they ARE “computronium”; It’s just a term for “the most powerful computing arrangement of mass energy”, after all.

      Computational density, of course, can’t be calculated apart from how much computation you’re doing: you can run a tiny speck floating in a pool of pressurized liquid helium at a megawatt per cubic centimeter, because its surface-to-volume ratio is enormous. That doesn’t mean you can pack those specks together into a 1 meter cube doing a terawatt of computation.

      And even for the speck, to be honest you have to include the power generation and heat management in your computational density calculation.

      Ultimately, the heat generated in the process of computation has to be dissipated into the larger universe, after all. You CAN’T turn the entire mass of the universe into computer, because you need something to power it, and something to dump the heat.

  13. Funny to think that quantum computing allows us to surpass the purely Landauer-limited conventional Turing computing of the universe.

    That is, we could compute more conventional computing-based universes than just our own, and pretty accurate quantum baby universes smaller than our own.

    And for inhabited simulated universes with full realism for any inhabitants in them, you can take a lot of shortcuts: removing unseen parts, glossing over unthinking sections of matter, and doing fine-grained calculations only for whatever sentient observers see.

    They could be so plentiful in a fully developed Kardashev level III civilization and above as to make it more likely that we are in one of them already.

  14. I’ll just observe that our Galaxy and our Local Cluster have yet to be transformed into computronium, so…

  15. Well after Computronium comes the heat-pipe-ium universe where we work to dissipate all the heat produced by the computronium.

    • We better give Noctua (a well-respected heatsink developer for the electronics and industrial industries) a heads up that we need a heatsink with heat pipes larger than the universe soon…

Comments are closed.