What does being on track for the predicted Technological Singularity mean, and are we on track?

Ray Kurzweil is famous for his vision and prediction of a Technological Singularity by 2049. Whenever Ray predicts a date like 2049, based on his own past reviews of his predictions, he gives himself ten years of leeway for a prediction to develop early or late. So by Ray’s personal standard, his timing on the Technological Singularity would be correct if it happened in the 2041 to 2059 window. His predictions are usually based upon exponential developments and progress, so he rarely errs by predicting something happening too early.

The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

Some use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.

The key parts of Ray’s definition of reaching the technological singularity are:

$1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are vastly smarter than even highly intelligent, enhanced humans. The technological singularity occurs as artificial intelligences surpass human beings as the smartest and most capable life forms on the Earth.

Ray has predicted that in 2023, $1,000 will buy 10^16 calculations per second (10 petaFLOPS), roughly the equivalent of one human brain. Matching every human combined – 8 billion humans at 10^16 calculations per second each – therefore means 8 X 10^25 calculations per second, the target for the 2041 to 2059 time window.
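As a quick sanity check on that arithmetic (a minimal sketch in Python; the 8 billion population figure is the same round number used later in this article):

```python
# Kurzweil's benchmark: ~10^16 calculations per second for one human brain.
PER_HUMAN_CPS = 1e16   # calculations per second per human brain
HUMANS = 8e9           # roughly 8 billion people

all_humans_cps = PER_HUMAN_CPS * HUMANS
print(f"All humans combined: {all_humans_cps:.0e} calc/s")  # 8e+25
```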

Ray defined the Technological Singularity and predicted its timing. These are the definition and predictions against which to check whether we are on track. The core of Kurzweil’s prediction is computing power continuing to progress exponentially, along with the co-development of exponentially more powerful artificial intelligence.

Others, like myself, may have our own predictions and modifications of what we believe will happen, but the main test is a lot of compute power and very powerful AI. Note – the exponentials continuing beyond the 8 X 10^25 calculations per second target mean that artificial intelligence can tolerate a lot more inefficiency than human intelligence and still surpass it. AI becomes the driver of overall civilization “intelligence”.

In the years after passing total human intelligence (the compute singularity), IF AI were still progressing at 1000X per decade, then:

Tech Singularity + 10 years = for $1,000, artificial intelligence compute is 1,000 times the total human level
Tech Singularity + 20 years = for $1,000, artificial intelligence compute is 1 million times the total human level
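A rough extrapolation of that 1000X-per-decade assumption, starting from the all-human figure of 8 X 10^25 calculations per second (a sketch, not a forecast):

```python
# If $1,000 of compute matches all human brains at the singularity and
# then keeps improving 1000x per decade:
BASE_CPS = 8e25  # calc/s per $1,000 at the compute singularity

for decades in (1, 2):
    multiple = 1000 ** decades
    print(f"Singularity + {decades * 10} years: {multiple:,}x the human total "
          f"= {BASE_CPS * multiple:.0e} calc/s per $1,000")
```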

500 Nvidia Xavier chips should deliver 10 PetaOPS (16-bit operations) in 2018. Each Xavier processor will use 20 watts, so 10 PetaOPS would use 10 kilowatts.

Nvidia has been able to sustain ten times chip improvement every two years for a few years.

If this were maintained, then it would take six years and three generations of chips after Xavier to achieve a 1,000 times improvement: a single Xavier+3 generation chip would deliver 20 PetaOPS.

If this were to slow down slightly, so that the six years and three generations of chips after Xavier achieved only a 100 times improvement, then it would take five Xavier+3 generation chips to deliver 10 PetaOPS.
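The two chip-scaling scenarios can be laid out explicitly (a sketch; the ~20 TeraOPS-per-chip and 20-watt figures are implied by 500 Xaviers delivering 10 PetaOPS at 10 kilowatts):

```python
import math

XAVIER_OPS = 20e12    # ops/s per chip (10 PetaOPS / 500 chips = 20 TeraOPS)
XAVIER_WATTS = 20     # watts per chip
TARGET_OPS = 10e15    # 10 PetaOPS, roughly one human brain

# Scenario 1: 10x per 2-year generation holds for 3 generations (6 years).
fast_chip = XAVIER_OPS * 1000   # 20 PetaOPS per chip
# Scenario 2: improvement slows to 100x total over the same 3 generations.
slow_chip = XAVIER_OPS * 100    # 2 PetaOPS per chip

for label, chip_ops in (("1000x in 6 years", fast_chip),
                        ("100x in 6 years", slow_chip)):
    chips = math.ceil(TARGET_OPS / chip_ops)
    print(f"{label}: {chips} chip(s) for 10 PetaOPS")

# Today's baseline for comparison: 500 Xaviers x 20 W = 10 kW.
print(f"Xavier baseline power: {500 * XAVIER_WATTS / 1000:.0f} kW")
```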

So GPUs, FPGAs and new computing devices look set to enable 1 PetaOPS at the affordable $1,000 price point around 2022 to 2029.

NOTE – universal quantum computers could emerge over the next several years to be more powerful than all classical computing, and quantum computers seem likely to have far faster compute-power scaling than even GPUs or FPGAs.

How much will deep learning and other artificial intelligence software be able to leverage this affordable compute power for applications?

Based on this, let us look at the “2019” predictions, which are actually 2011-to-2029 predictions on the Technological Singularity timeline.

Nanotube and nanowire processors and memory are commercializing now

Kurzweil Prediction – Three-dimensional nanotube lattices will be the dominant computer substrate in the 2011 to 2029 timeframe.

Nano-RAM (NRAM), the first non-volatile memory chip to exploit carbon-nanotube technology, appears poised for commercialization in 2018. BCC Research expects this technology to be more disruptive to enterprise storage, enterprise servers and consumer electronics than flash memory, enabling a wave of innovation in those products.

BCC Research anticipates the global NRAM market to achieve a compound annual growth rate (CAGR) of 62.5% from 2018 to 2023, with the embedded systems market anticipated to reach $4.7 million in 2018 and $217.6 million in 2023, growing at a CAGR of 115.3%.
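Those BCC Research growth figures are internally consistent; a quick check of the embedded-systems CAGR with the standard formula (a minimal sketch using the dollar figures above):

```python
# CAGR = (ending_value / starting_value) ** (1 / years) - 1
start, end, years = 4.7, 217.6, 5   # $ millions, 2018 -> 2023
cagr = (end / start) ** (1 / years) - 1
print(f"Embedded NRAM CAGR 2018-2023: {cagr:.1%}")  # ~115.3%, matching the cited figure
```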

Nantero’s NRAM is as fast as and denser than DRAM, nonvolatile like flash, has essentially zero power consumption in standby mode and 160x lower write energy per bit than flash, and is highly resistant to environmental forces (heat even up to 300 degrees C, cold, magnetism, radiation, vibration). NRAM is compatible with existing CMOS fabs without needing any new tools or processes, and it is scalable even to below 5nm.

The computer memory market is about $85 billion per year
Embedded memory is about $10 billion per year
DRAM is $45 billion per year
Flash is about $30 billion per year

Nantero is looking to eventually get to half the price of DRAM.

Intel’s 3D XPoint memory is based upon a lattice of nanowires. The chips are about 1,000 times faster than the flash memory that underpins your iPhone and can store about 10 times more data than the DRAM memory in PCs, laptops, and servers.

HP and other companies are working on nanowire memristors. Memristors are often made of a silicon-oxygen-nitrogen material laced with clumps of silver nanoparticles at the electrical terminals. When current is applied across the memristor, the silver nanoparticles shuffle around within their parent oxynitride matrix to line up within a lightning-bolt-like path of least electrical resistance.

Fujitsu says it has implemented basic optimisation circuits using an FPGA to handle combinatorial problems that can be expressed in 1,024 bits. Using a ‘simulated annealing’ process, these circuits ran 10,000 times faster than conventional processors on such thorny combinatorial optimisation problems.

Fujitsu says it will work on improving the architecture going forward, and by the fiscal year 2018, it expects “to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation”.
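For readers unfamiliar with the technique, below is a generic software illustration of simulated annealing on a toy binary optimisation problem. This is only a conceptual sketch; it does not represent Fujitsu’s FPGA architecture, and the objective function here is hypothetical.

```python
import math
import random

def simulated_annealing(energy, n_bits, steps=20_000, t0=2.0, t_min=1e-3):
    """Minimize `energy` over bit strings via simulated annealing
    (illustrative only; not Fujitsu's hardware design)."""
    state = [random.randint(0, 1) for _ in range(n_bits)]
    e = energy(state)
    best, best_e = state[:], e
    for step in range(steps):
        t = max(t_min, t0 * (1 - step / steps))  # linear cooling schedule
        i = random.randrange(n_bits)
        state[i] ^= 1                            # propose a one-bit flip
        e_new = energy(state)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = state[:], e
        else:
            state[i] ^= 1                        # reject: undo the flip
    return best, best_e

# Toy objective: penalize adjacent equal bits (optimum is an alternating string).
energy_fn = lambda s: sum(a == b for a, b in zip(s, s[1:]))
solution, value = simulated_annealing(energy_fn, n_bits=24)
print(value, "".join(map(str, solution)))
```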

Stanford’s N3XT project is breaking data bottlenecks by integrating processors and memory like floors in a skyscraper and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators. The N3XT high-rise approach will move more data, much faster, using far less energy than would be possible with low-rise circuits. N3XT high-rise chips are based on carbon nanotube (CNT) transistors. Transistors are the fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNT transistors are faster and more energy-efficient than silicon ones. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

Other Kurzweil Predictions

Kurzweil prediction – Automated vehicles dominate roads in the 2011-2029 window

NBF tracking – Improved versions of the Tesla Autopilot system will exist, and more self-driving cars will be used by Uber and other ride-sharing companies. Other carmakers are introducing more driver-assist features, and in the 2020s many companies are targeting full driving automation.

Kurzweil prediction – Language translation machines routinely used in conversation

NBF Tracking – Google Translate and other translation systems are common. These are being combined with voice systems like Alexa.

Kurzweil prediction – Total power of all computers will pass the total computational ability of all humans

NBF tracking – This again refers to the 10^16 operations per second standard for one human. There are 8 X 10^9 (8 billion) humans, so the total compute power of all computers and devices would need to exceed 8 X 10^25 operations per second to fulfill this prediction.

The installed base of all desktop and laptop computers is about 3 billion. Smartphones and tablets have gone past the installed base of PCs. There are large server farms and a lot of gaming consoles.

PCs range from about 10 gigaFLOPS to 200 gigaFLOPS, or a few teraFLOPS if they have GPUs (for reference, about 140 gigaFLOPS from two Xeon E5-2670s at 3.0GHz).

About 2 billion mobile phones are shipping every year.

There are also large numbers of embedded compute devices.

10 billion devices and servers averaging 100 TeraOPS each would be 10^10 X 10^14 = 10^24 operations per second.

So there are a number of ways for total computer compute to reach and exceed total human brain compute.

100 billion devices and servers with an average of 800 TeraOPS would be 8 X 10^25 operations per second.
10 billion devices and servers with an average of 8 PetaOPS would be 8 X 10^25 operations per second.
1 trillion devices and servers with an average of 80 TeraOPS would be 8 X 10^25 operations per second.
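Each of those device-count and throughput combinations multiplies out to the 8 X 10^25 operations per second threshold (a quick arithmetic check):

```python
TARGET = 8e25  # ops/s: roughly 8 billion humans at 10^16 each

scenarios = [
    ("100 billion devices @ 800 TeraOPS", 100e9, 800e12),
    ("10 billion devices @ 8 PetaOPS", 10e9, 8e15),
    ("1 trillion devices @ 80 TeraOPS", 1e12, 80e12),
]
for label, devices, ops_each in scenarios:
    total = devices * ops_each
    status = "meets" if total >= TARGET else "misses"
    print(f"{label}: {total:.0e} ops/s ({status} the target)")
```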