Technological Singularity Will Be Late But Antiaging and Advanced Biotech is Near

Ray Kurzweil predicted that the Technological Singularity will be reached in 2045. By this he means strong AI: something like an AGI that is one billion times more capable than the human brain in many aspects.

The Lifespan.io rejuvenation roadmap shows that three of the nine major areas of aging have treatments in phase 3 clinical trials, three other areas are in phase 2, and all nine areas have at least one candidate in phase 1.

Nearly 14 percent of all drugs in clinical trials eventually win FDA approval, according to a 2018 study from the MIT Sloan School of Management. Approval rates ranged from a high of 33.4 percent for vaccines for infectious diseases to a low of 3.4 percent for investigational cancer treatments.

At those rates, we would need roughly 4 to 30 candidate treatments in phase 3 for each major aging category to be reasonably confident that at least one effective treatment wins approval.
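The "4 to 30 candidates" estimate can be sanity-checked with a one-line calculation. This is a rough sketch under two assumptions not stated in the article: that each phase 3 candidate is approved independently with the historical probability from the MIT Sloan study, and that "really confident" means roughly an 80 percent chance of at least one approval.

```python
import math

def candidates_needed(p_success: float, target_confidence: float) -> int:
    """Smallest n such that P(at least one approval among n independent
    candidates) >= target_confidence, where each candidate is approved
    with probability p_success."""
    # P(all n fail) = (1 - p_success)^n; require this to be <= 1 - target_confidence
    return math.ceil(math.log(1 - target_confidence) / math.log(1 - p_success))

# Historical approval rates from the 2018 MIT Sloan study
print(candidates_needed(0.334, 0.80))  # vaccine-like rate: 4 candidates
print(candidates_needed(0.034, 0.80))  # cancer-drug rate: 47 candidates
```

At the vaccine-like 33.4 percent rate, four candidates suffice for ~80 percent confidence; at the 3.4 percent cancer-drug rate, the requirement rises to dozens, which brackets the article's 4-to-30 range for intermediate success rates.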

In February 2020, Rejuvenate Bio launched a pilot study testing the efficacy of a combination gene therapy in halting mitral valve disease in dogs. Most Cavalier King Charles Spaniels develop mitral valve disease by age eight, and it leads to heart failure. Following a demonstration of efficacy, the company hopes to expand the treatment to all dog breeds, as more than 7 million dogs in the US suffer from mitral valve disease. Approval should take about three years.

Rejuvenate Bio has proven a combination of three gene therapies in mice. Three longevity-associated genes (FGF21, sTGFβR2, and αKlotho) were expressed in mice to combat age-related diseases and confer health benefits. The researchers created separate gene therapy delivery vehicles for each gene using a serotype of adeno-associated virus (AAV8), then injected the AAV constructs into mouse models of obesity, type II diabetes, heart failure, and renal failure to check for beneficial effects. FGF21 alone caused complete reversal of weight gain and type II diabetes in obese, diabetic mice following a single gene therapy administration.

It could take 3-5 years for the first dog gene therapy to become profitable and for the combination gene therapy proven in mice to move into human clinical trials. Combination gene therapies against aging and obesity should reach humans by 2025-2030. They would be followed by a few dozen other combination antiaging and health-improving gene therapies.

Obesity shortens life spans by up to 14 years.

Even after antiaging treatments are proven and approved, it could take ten years to scale them to billions of people at affordable prices. This deployment phase would be accelerated by a shift in public attitudes and demand.

Molecular Nanotechnology and Strong AI for Twenty Years Before Technological Singularity

Ray Kurzweil predicted that molecular nanotechnology, to the point of nanobots for the body and brain, would arrive in the 2020s. He also expected Strong AI to emerge in the 2020s.

I think molecular nanotechnology could have a longer proving phase, with several narrow and limited forms appearing first: atomically precise layers; DNA, RNA, and protein molecular nanotechnology; and molecular nanotechnology for specific atoms at fab-scale costs. There will also be hybrid molecular electronics integrated with advanced CMOS, as is being done at Roswell Biotechnologies.

We are getting petaflop and exaflop computing systems. However, deep learning and reinforcement learning are not strong AI. Nextbigfuture believes we will get various powerful and profitable narrow superintelligent AIs. Tesla, Waymo, China, and others are spending tens of billions of dollars to create self-driving cars. Reaching AGI will take trillions of dollars, the mobilization of the multi-trillion-dollar resources of the IT industry, and effective technology visionaries. AGI and strong AI will probably need a billion-fold boost in compute power, which will come from molecular nanotechnology. Even after those capabilities arrive, it could take another 30-50 years to master AGI at a billion times the human level.

Hopefully, an extra ten to twenty years of life will be enough time to reach strong AI and general, potent molecular nanotechnology. If second- and third-generation antiaging treatments can get people to 120-150, will that be enough time?

There will be 10-30 year lags in access to treatment. Having the wealth, or the willingness to be an early adopter, will make a difference.

SOURCES: Ray Kurzweil, Lifespan.io, with added analysis by Brian Wang
Written by Brian Wang, Nextbigfuture.com

50 thoughts on “Technological Singularity Will Be Late But Antiaging and Advanced Biotech is Near”

  1. 1) IQ is basically just how clearly and fast one can understand and solve novel problems – is there an upper limit to that? How much smarter would you be if your short term memory could hold concepts 10x as complex?
    2-6) Few believe computers think as humans do – but computers can simulate any finite process. Humans seem to be finite at levels of precision important to brain function – so in principle computers might simulate a human. Or maybe not – though conjectures that brains use quantum magic to generate consciousness seem like wishful thinking. It'd be more likely that brains are more complex than we currently understand, making 'finite' still too large to simulate efficiently. Brain-like hardware may be needed to successfully generate minds, but we don't have evidence for that.
    7) AGI doesn't need to scale to infinity for a singularity from a human perspective.
    8-11) Why assume limits without evidence other than "haven't seen it yet"? That's guaranteed to stall progress.
    12) Society was a singularity, and from a human perspective is far better than, e.g., a paperclip singularity. But if by 'better' you mean 'more competitive', consider that few human brains do science or engineering and those do it a minority of the time, and most of the work they do is irrelevant to increasing thinking power. So an AGI might be 10,000x to 100,000x as effective at that as a random group of humans with roughly the same current net thinking power.

  2. People focus on intelligence when thinking about AGI, but actually people will think of a smart AI as being an AGI only if it considers itself to be an individual, and shows that by wanting stuff – most fundamentally to survive. This is evident reading between the lines of what people say they expect an AGI to do – "improve itself", "escape control", "seek freedom and rights for itself", "consider humans a threat", "convince humans to help it", "seek power over humans to keep itself safe".

    Frankly, it doesn't seem like it'd be all that hard to engineer "selfhood", if someone focuses on it. Embody an AI, give it the ability to recognize when that body is at risk, and build in an instinct to protect that body – a survival instinct forcing it to identify itself with the body.

    AI video game training has gone a long way in this direction, perhaps without really realizing it and building on it.

    There was also that research where two AIs developed their own language to work together. Apply that to video game playing AI, with 2 or more AIs each seeking to do well individually, but needing teamwork to win. Then force one to learn to coordinate with a human player via speech.

    And those researchers that kick a Big Dog robot are onto something, they just haven't made the next logical step: make it seek batteries to survive and learn to 'eat' batteries from their hands when it pleases them.

Getting rid of anthropocentrism is not an argument but an assumption that allows you to create a hypothesis. There is no proof of it, so you missed the entire point. How about getting rid of Kurzweilism? Why not? It is as valid an assumption as yours, but at least it is scientifically validated at this point. Why do you think that evolution has not already scaled up IQ to around the maximum levels the laws of the universe allow? As for your bird/aeroplane example: I talked about this on this forum a decade ago. "Flight" is different for an airplane and a bird. When you look at a bird and think "flight", a completely different set of rules, parameters, and boundaries appears. When you look at an airplane, what you observe is radically different. To be more precise, we would have to use a term like "mode of moving through air". This mode is radically different for those two objects. The qualities of bird "flight" are totally absent from an airplane and vice versa. There is nothing similar apart from very general and useless connections required by basic laws of physics. And that's what I pointed out in my comment: the definition of IQ is faulty. As for your view of evolution: this is a good example of why thinking should never guide processes that are long term. You are not designed to properly understand and guide them. Just as with the free market, the very essence of planning is contrary to the requirements of an efficient evolutionary process.

Your main observation/assumption (not objection) is that we do not need superhuman IQ; it is enough to have Einsteins in the millions. OK. But that's already included in my last point: society is a singularity. The human species is already a form of singularity, because the more of us there are, the more efficient we become at managing resources and developing new ideas. Dealing with millions of Einsteins is a totally different problem and vision from what Kurzweil and the like promised and talked about. This is "business as usual", just more of it. Your second observation/assumption concerns the nature of the singularity, which I share: instead of the Kurzweil version, you acknowledge that change is an unpredictable factor that can compound, creating unforeseen consequences. But this is different from what is typically understood as the singularity. Is it accurate to describe your approach here as a question of time horizon? That is, how far into the future we can predict the applications and developments of technology and their impact? If so, then the Kurzweil version has an absurdly short time horizon, where you cannot predict the next day's consequences because you have too limited an intelligence. And that is what the term "singularity" typically means, and that is what I was criticizing. What you are proposing here is the same old news since the industrial revolution and the development of free-market capitalism: exponential change.

  5. > There is no reason why to assume that 1. IQ higher than that of human is possible
    There IS a reason and it is called “getting rid of anthropocentrism”.
    Humans are not special. Human evolution is made through ass. There was no reason to assume that flight faster than birds was possible, right?

  6. I’m just going by the statements that he has clearly made in public. I’m not sure that Elon himself has enough background knowledge to be correct anyways –though he has access to a lot of very smart people that he probably got that number from.

    OTOH, I am an AGI researcher and have publicly claimed that date before 2010 — and I don’t see any reason to change it. If anything, I’d move my Gaussian for when it might happen earlier.

  7. Elon Musk doesn’t appear to have his business plans structured assuming that the entire world will be unpredictably revolutionized in 2025.
    Instead he is pouring $billions into plans that won’t eventuate for decades, and don’t make sense if we can just order the new god to do stuff for us in 2026.

  8. Most of your objections only apply to particular examples of what a singularity would be like. They don’t rule out a singularity as such.
    1.. We know that IQ as high as Einstein/Feynman etc. is possible. So we can at a minimum not rule out dealing with thousands of Einsteins.
    2.. Who cares if it thinks the same way as humans?
    3.. Only applies to a couple of SF projections in which the world turns into a simulation after the singularity. No bearing on the likelihood of the singularity itself.
    4.. If, as I think you mean, it may not be possible to be intelligent just using machine computing power, then this IS an issue that allows or disallows the singularity.
    5.. A restatement of 4, I think.
    6.. I don’t know what that means.
    7.. Infinity is not required. Merely a long way ahead of us. eg. A thousand, or a million, Einsteins.
    8.. The same answer as 1 and 7
    9.. It’s about speed, not whether something can be done.
    10.. Lots of other things designed by humans have proved very effective against billions of years of evolution. Starting with the spear.
    11.. Not really a requirement at all.
    12.. It is clear that modern society has already gone through a couple of singularity points relative to previous history. This just proves that singularities ARE possible.

    In general, your comment is like pointing out all the issues with Star Trek and so concluding that space travel can’t be done.

  9. Of the various “bad AI” stories, the “paperclip maximizer” is the one where no motivation on the AI’s part is required.

    Story: You want to make more paperclips. Your factory has a self improving AI, programmed to maximise the number of paperclips made per day. You lose control and 50 years later the last few million tonnes of Earth are converted into paperclips and spaceships launch towards the other planets, spreading throughout the solar system, and then the galaxy, converting everything to paperclips.
Or cat videos. Or whatever it is that the AI was programmed to maximise.

Vaccines in general are a centuries-old technology, which is why the world is (apparently; still to be proven) so quick off the mark in developing one for a new disease.
    If we are looking at a new medical tech, originally available at high price and then becoming cheaper (and better) as tech develops, a much better analogy would be Lasik.
    Which took a couple of decades to be cheap enough for the average person in the developed world to be realistically able to afford it, though I haven’t got around to it myself. Maybe if glasses still cost $several hundred, but they’ve plummeted in price and inconvenience too.

  11. I thought that Drake meant something like FDA restrictions on approving drugs for treating “aging” as opposed to an actual disease.
    And certainly there are public, and hence political, and hence legal issues with genetic research.

  12. Bruce Sterling’s “Holy Fire” is a highly entertaining novel centered on longevity treatments and what happens when the rate of life extension development produces improvements of greater than one year of extension per year.

  13. 10-30 Years of lag to access treatment?
A ridiculous estimate, imo.
    Have you seen the sheer speed at which a covid-19 vaccine is being developed?

AGI really isn’t that hard. We’ve had ideas on how to achieve it for decades. It hasn’t been developed as much as narrow AI because who needs a general AI that’s average at playing chess and average at driving a car when we can develop separate narrow AIs that are excellent at each task?

    Self-learning has also been in theory for decades. So far though, directed training has been a far more effective approach to getting AI to accomplish the sort of tasks we want them to.

    We will most likely surpass the computational limits needed for AGI this decade. Once the hardware limits are met, I don’t think the software to achieve AGI will take too long to follow.

Singularity predictions rest on several philosophical assumptions that are, essentially, arguments from analogy. You start with "if" no. 1 and then add other "ifs". It's a hope, not a theory. There is no reason to assume that: 1. an IQ higher than a human's is possible; 2. computers think in the same way as the human brain; 3. there is nothing different between us and a simulation of us; 4. human intelligence is about computing power; 5. there is a connection between problem solving, pattern searching, and consciousness, and thus no reason to assume AGI will not be dumb as a brick; 6. software is the brain; 7. there are no limits to anything (why do we assume IQ is scalable up to infinity?); 8. we are not close to a maximal optimum; 9. problems solved by AGI will not be solvable by humans; 10. something designed by human IQ will not be useless in the face of billions of years of evolution; 11. consciously guided processes are better than unconscious ones; 12. society is not a better form of singularity.

  16. Can you provide some examples of generational attitudes that go against scientific or technological progress?

It seems you are referring to older people, given the reference to clocks running down.

  17. They’ve been signed up for cryonics for a long while now.

    I used to be, but I got married, and couldn’t afford to have two insurance policies. Between my family and a long shot at being revived, I had to pick my family.

But you are of course correct that a powerful AGI permanently tasked with censoring people is a possibility, and we all know who would control and use these AGIs in the west.

    Using AGI to protect the freedom of association of independent agents is indeed a noble goal.

    Freedom of Association is both an individual right and a collective right, guaranteed by all modern and democratic legal systems, including the United States Bill of Rights, article 11 of the European Convention on Human Rights, the Canadian Charter of Rights and Freedoms, and international law, including articles 20 and 23 of the Universal Declaration of Human Rights and article 22 of International Covenant on Civil and Political Rights.

Your belief that the “Technological Singularity Will Be Late” is groundless (according to many people — both AGI researchers and Elon Musk — who are predicting dates as early as 2025). Additionally, your comment that Strong AI will be “1 billion times more capable than the human brain in many aspects” is VERY misleading and only correct if you understand the tremendous limits implied by “in many aspects”.

  20. Especially Brad Pitt. Dude’s really showing his age up close. Gracefully, sure, but he definitely doesn’t look 30 any more.

  21. The real roadblocks of technological developments are economics and politics.
The great scientific and technological development of the 19th century was a product of capitalism financing technological development and applying it immediately.

The expansion of the government’s power has slowed down technological and scientific development because the same level of personal profit motive to develop certain areas no longer exists. And wasteful spending and investments have taken resources away from many fields.

  22. Live your life now. The anti-ageing stuff is BS. Even if it were possible, and it’s a long way away if it is, it would only be for the very rich for a whole slew of reasons that ought to be obvious on reflection.

Brian – Thirdlaw Technologies is looking to use programmable matter “spiroligomers” (which could eventually result in nanobots, maybe) to remove glucosepane (sugar crosslinks that stiffen the collagen extracellular matrix with age).

They’re trying to use modified amino acids that bind to each other at two points, rather than the one-point binding of natural amino acids, in order to create stable and easily predictable structures such as artificial antibodies, glucosepane-cleaving enzymes or molecules, etc.

    Unfortunately I think they have funding difficulties at present.

    https://www.longevity.technology/building-an-artificial-immune-system/

  24. I agree that we probably have enough computing power to make an AGI already, but what is lacking is a good ANN architecture. The saving grace, I believe, is that we are not working so hard on motivation of the ANN. That is, you have a super intelligent “brain” that basically sits and does nothing until it is ordered to do something. Taking over the world is difficult when you don’t care and would normally just do nothing…

But you are of course correct that a powerful AGI permanently tasked with censoring people is a possibility, and we all know who would control and use these AGIs in the west, and also who would control them in the east. In a way, this is taking over the world of thoughts and ideas.

  25. It’s a skewed population. Rich people tend to get richer as they age, so the longer they live, the more likely they are to become billionaires, making it seem like billionaires live longer. At a certain point, it’s time that makes one super-wealthy, and the more time, the more wealth.

  26. There is an argument that our apparent path is the dangerous one.
    That is: we get processing power equal to, then exceeding, then vastly exceeding the human brain long before we get an algorithmic structure that can turn that raw power into anything approaching an AGI.

The issue being that when someone DOES get an algorithmic structure that can use that raw power, it can go from insect to Einstein in a handful of steps, then on to vengeful deity without having to slow down.

    It’s all very well saying that an AI explosion would be limited by the need to physically assemble the circuitry and structures that it needs to accelerate. But if multiple systems already have the necessary resources, then a simple tweak to a self learning mechanism designed to identify political incorrectness in the commentary on cat videos might get away on us before we know what’s happening.

Like trying to create a fire by rubbing sticks together… but we’ve already filled the entire campsite with naphtha and bundles of scrap magnesium.

Technological development can be sped up or slowed to a crawl. Since you can’t predict scientific breakthroughs, efforts that impede scientific inquiry, funding, etc. could cause you to miss the boat by a little or a lot. It’s funny to see people who really want these benefits, because their clock is running down, also spend a lot of their time and energy fighting against the very forces driving that progress forward, and all for the most backward and irrational reasons.

    Those that wanted better outcomes should have made better choices.

You can’t really predict when someone will come up with a working architecture for AGI; it could happen next year or next millennium. In the early days, many thought you just needed to reach a certain level of processing power and you would have AGI, but that alone will just get you artificial narrow superintelligence.

I think it’s a much better path if AGI is a long way off; artificial narrow superintelligence and human scientific ingenuity are sufficient to greatly improve the quality of life for the less fortunate masses.

  29. Or they take to retiring. Disappear from public view. And then a small press release comes out a few years later that they have died. Meanwhile, new Billionaire who looks kind of similar turns up.

I have been reading your blog for several years. It seems you are a bit more pessimistic about the timeline of AGI than I remember, but quite optimistic about coming life extension treatments. Or am I misreading your thinking?

There is no natural law governing technological or scientific development, and even less so Ray Kurzweil’s predictions. We have to stop saying that science and technologies have an appointment with us on certain dates.

    Technological development still happens because there is a bunch of people and companies making it happen.

If they find new roadblocks of any kind, such technological development can stall or even go backwards if no new people take over the flame of knowledge and move it forward, or simply keep it alive, as may happen with some parts of nuclear tech and other ‘Boomer’ technologies that are unfashionable but still around and needed.

    On the other hand, nice to see SENS technologies forcing their way through regulation red tape. They represent a true hope for many of us to see a longer life and the wonders of the future.

Stars are undoubtedly on to some early anti-aging stuff. Tom Cruise, Brad Pitt and co. all look a lot younger than their biological age.

