AI Accelerates Rate of AI Technology Improvement – This is the Core of a Technology Singularity

AI is accelerating both computer hardware improvement and software improvement. This was the core of the case for the Technological Singularity. AI is also accelerating the pace of science itself. AI technology is enabling the faster improvement of AI.

ChatGPT, Copilot, and other generative AI tools improve programming productivity by as much as ten times.

Nvidia accelerates Computational Lithography by forty times. Cycle time for reticles is reduced from two weeks to eight hours. This is the first major step before actual lithography. This capability will accelerate the move from 3 nanometers to 2 nanometers and then to 1.5 and 1 nanometers.
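A quick back-of-the-envelope check (a minimal Python sketch, assuming the two-week baseline means continuous processing time) shows the quoted cycle-time reduction is consistent with the roughly forty-fold speedup:

```python
# Sanity check: does a ~40x speedup match a reticle cycle time
# falling from two weeks to eight hours? (Assumes the two-week
# baseline is continuous processing time.)
baseline_hours = 2 * 7 * 24   # two weeks = 336 hours
accelerated_hours = 8
speedup = baseline_hours / accelerated_hours
print(speedup)  # 42.0 -- consistent with the quoted ~40x
```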

Nvidia accelerates and improves colon cancer detection.

19 thoughts on “AI Accelerates Rate of AI Technology Improvement – This is the Core of a Technology Singularity”

  1. More on AI use/abuse:

    “ChaosGPT, the autonomous AI program that hopes to “destroy humans” and gain “power and dominance,” is now attempting to gain Twitter followers in order to manipulate and control them. The video of how it’s going about this is an interesting window into the current state of easily accessible AI tools, which is to say, we do not currently have much to fear.”

    See:

    https://www.vice.com/en/article/z3mxe3/ai-tasked-with-destroying-humanity-now-working-on-control-over-humanity-through-manipulation

    • That, plus fusion power, molecular nanotechnology, and space colonization technologies. If I see substantial increases in innovation rates in these technologies, I will be more convinced a singularity scenario is real.

  2. Well, AI is hot, granted, but, while less sensational, the title could just as accurately be: “Technology Accelerates Rate of Technology Improvement – This is the Core of a Technology Singularity”

    Quoting myself: ” . . . it is easy to conclude that each [technological singularity] is an enabler to the next . . . ”

    Reposting my Ars Technica post; some of you have seen big parts of it before in these forums. (And yes, I know I may sound like a broken record at times):
    —————————————-
    Every so often, we have an advance so significant that people who lived before it could not reasonably be expected to understand or predict the changes it would cause. These are often called technological singularities (emphatically not The Singularity). The name is misleading, as they are actually more like event horizons, especially since it is very difficult to notice them, even when in the middle of one. A technological singularity doesn’t represent a precise moment in time, such as the day the inventor shouted, “Eureka!” but, instead, the fuzzy extended period of time at which its impact has gained sufficient momentum as to be a fait accompli.

    Examples include things like tool use, ranged weaponry, fire, animal domestication, agriculture, all the way up to the Gutenberg printing press (which was very slow to cross the threshold due to massive resistance and attempts to control and limit it), the industrial revolution, electronics, and the internet. Each typically emerges in about half the time it took to achieve its predecessor, from which it is easy to conclude that each is an enabler to the next. The most recent one, the internet (or World Wide Web), could be placed at about 1993, around 60 years after electronics really began taking off, and the industrial revolution 120 years before that. Which means that, 30 years from 1993, another one is coming due.

    Contrary to what many science fiction writers have hypothesized, it’s not strong or narrow AI, nor is it AGI. Not yet. Call it cognitive automation, the automation of routine thinking. In the way that the industrial revolution automated routine physical activities (such as weaving textiles), cognitive automation has a huge variety of uses, from writing sports articles reporting last night’s game, to running factories and piloting vehicles in far-off space.

    I’ve been wondering if this would be the year for the past 15 or 20 years.
    Now I am going to wonder if there is another technological singularity coming in 2038, and 2045, and 2049, and 2051, and 2052, and 2053, and 2053, and 2053, and 2053, and 2053, and 2053 . . . . .
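    The halving pattern behind those dates can be sketched in a few lines of Python: starting from the internet at about 1993 with a 30-year gap, each successive gap halves, and the dates converge on 1993 + 30 × (1 + 1/2 + 1/4 + …) = 1993 + 60 = 2053:

```python
# Minimal sketch of the halving-interval pattern described above:
# each technological singularity arrives in half the time of its
# predecessor, starting from the internet (~1993) with a 30-year gap.
year, gap = 1993.0, 30.0
dates = []
for _ in range(10):
    year += gap
    dates.append(int(year))  # truncate fractional years
    gap /= 2
print(dates)  # [2023, 2038, 2045, 2049, 2051, 2052, ...] -> approaching 2053
```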

    Surely not. It can’t be possible. But then I look at potential candidates for those events, even while recalling that our ability to predict beyond even the next one is supremely suspect. I come up with things like narrow AI and possibly AGI, man-machine mental interfaces, radical life extension (or even regeneration and rejuvenation), self-replicating fully automated general purpose manufacturing facilities (and resource gathering), nano-tech in situ replacements for organic cells, fusion energy, and the list just goes on and on.

  3. I love all this, and I say it with complete sincerity. But at the end of the day, we still die of cancer at any age, with no cure or prevention mechanism (it would have to be cured directly). Everything that has been achieved in this field is impressive. Yet on the medical side, not to mention issues such as “normal” aging and cellular degeneration (subjects this page talks a lot about), everything is papers, hypotheses, some theory, and at most some marginal advance. Like civil aeronautics, which has been basically the same for 60 years.

    • My stepson had melanoma that had moved to stage 3 and entered his lymph nodes last year.

      10 years earlier he would certainly have died. He went to Sloan-Kettering and they didn’t even do chemo. They just did something to his immune system, as I understand it, and it all went away. He never even spent one night in a hospital or went under a knife.

      He’s given me more grandkids since then and all is good. Progress is most definitely being made all over the place.

      We just have this way of thinking (and we all do it to some extent) that goes: “Yeah, we already got that old thing many moons ago. What new thing are you giving us this week?”

    • The medical field is notoriously difficult, because you have to deal with an exceptionally complex system whose building blocks are orders of magnitude smaller than the tools we have; and also because of the lengthy and complex regulatory process, which is partly needed because of that system complexity.

      Nevertheless, we are making progress. Just the other day, there was a claim by Moderna that they could have a broad-range customizable cancer vaccine within a few years. The enabling technologies are coming together to make something like that. Then there’s the various immunotherapies of recent years, and the related T-cell reprogramming therapies, which I think is what Snazster posted about. The latter is getting adapted to more types of cancer as well, and looks promising.

  4. Not only is there a 40-fold acceleration, there is also a sevenfold reduction in power consumption. Therefore the cost of running the process drops almost 300 times.
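    The “almost 300 times” figure is just the product of the two factors, since energy cost scales with both power draw and runtime; a minimal sketch of the arithmetic:

```python
# A 40x speedup combined with a 7x power reduction multiplies out
# to a ~280x drop in energy cost for the same workload.
speedup = 40
power_reduction = 7
cost_reduction = speedup * power_reduction
print(cost_reduction)  # 280, i.e. "almost 300 times"
```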

  5. The speedup in mask cycle time is less to do with AI and more to do with porting software to GPUs; GPUs just happen to be used in AI.

    • Yes. But there is a useful application of AI in chipmaking: it can automate floor planning, producing better designs than humans do.

  6. Without any software improvement, doubling computing power might increase ChatGPT’s IQ by about 5 points (currently around 110). If the current rate of hardware improvement continues, we will have Nobel-level AI scientists in around 10 years.
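    A minimal sketch of that estimate, with two labeled assumptions not in the comment itself: that “Nobel level” corresponds to roughly IQ 160, and that hardware doubles about once a year:

```python
# Sketch of the comment's estimate. Assumptions (not from the
# comment): "Nobel level" ~ IQ 160, and compute doubles yearly.
current_iq = 110          # ChatGPT's estimated IQ, per the comment
target_iq = 160           # assumed Nobel-level IQ
iq_per_doubling = 5       # +5 IQ per doubling of compute, per the comment
years_per_doubling = 1.0  # assumed hardware doubling cadence

doublings = (target_iq - current_iq) / iq_per_doubling
years = doublings * years_per_doubling
print(doublings, years)  # 10.0 doublings -> about 10 years
```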

    • And 10 years beyond that, science and technology that current Nobel scientists couldn’t begin to comprehend? Interesting times…

      • Science and technology that we can’t comprehend would not be as useful as those that we can, and may not be useful at all (except in some edge cases). It also doesn’t look like current AI really understands the underlying topics yet. There do seem to be some emergent reasoning skills, but how far can that really take us without software improvements? A super-genius IQ is not all that useful if its inventions aren’t grounded in reality. But in the real world, of course we will see software improvements as well.

        Some next steps that I’d like to see, some of which may already be being worked on:
        – AI explaining its own reasoning, so it’s less of a black box. Already being worked on.
        – AI rating scientific papers and other sources with a reliability score. Want to get this to human expert level (expert in the relevant fields; otherwise it’s worthless).
        – AI detecting method and logic errors. E.g. known sources of measurement error, biases, analysis errors, bad practices, data quality. This feeds into rating sources.
        – AI citing, rating, and quoting the sources it used (part of explaining its reasoning), and favoring more reliable sources.
        – AI understanding real-world objects and properties, and relationships between them: what is “soft”, “rigid”, “slippery”, “bouncy”, “solid”, “liquid”, etc.; how a cat relates to a dog or to a table or to a parrot; how different objects behave when forces are applied to them in different ways – all of the stuff that children learn in their first few years. Basically, object categorization, category relationships, and category-spatial-temporal relationships (cause and effect and their relation to the object category).
        – Using the above to improve its reasoning and self-check.
        – Quantifying risks and costs (part of cause and effect): doing X has P1 chance of causing an explosion, explosion has P2-P4 chance of causing injuries and deaths and property damages; doing Y will take so and so resources, will produce so and so waste; doing Z has unknown risks and costs; injuries and deaths are bad, damages are bad, high costs are bad, unknown risks or costs are bad; can I estimate the unknowns, how, with which certainty?
        – Then it can start building hypotheses and designing experiments.

  7. Collaboration between Nvidia and TSMC to accelerate process nodes from the current 3nm to 1nm is amusing. Last I heard, Intel was stuck at 10nm, so they renamed 10nm as 7nm.
