In 2016 supercomputers merged with AI, and by 2018 there will be AI machines with exaflop power

Nextbigfuture will have several articles over the next few weeks reviewing developments in 2016 and looking ahead to developments over the next few years. Here we look at 2016 in computers. Later articles will look at medicine, life extension, energy, space and other areas.

The biggest developments in computing
* The Google DeepMind AI, AlphaGo, beat a world-class champion Go player
* Current and future supercomputers are increasingly being built and targeted for advanced deep learning and artificial intelligence
* Deep learning now beats people at tasks such as image recognition and the game of Go
* China built the 93-petaflop Sunway TaihuLight, a supercomputer nearly three times faster than the previous record holder
* The USA, Japan, Europe and China have funded projects to develop exaflop-class supercomputers

In 2016, Nvidia introduced Xavier, the most ambitious single-chip computer it has ever undertaken — the world’s first AI supercomputer chip. Xavier has 7 billion transistors — more complex than the most advanced server-class CPU. Remarkably, Xavier has the equivalent horsepower of DRIVE PX 2, launched at CES earlier in the year — 20 trillion operations per second of deep learning performance — at just 20 watts.

50 Xavier chips would produce a petaOP (a quadrillion deep learning operations per second) for 1 kilowatt. A conventional petaflop supercomputer costs $2-4 million and uses 100-500 kilowatts of power. In 2008, the first petaflop supercomputer cost about $100 million.
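As a sanity check on those numbers, here is a minimal back-of-the-envelope sketch using only the figures quoted above (note that deep learning OPS and general-purpose FLOPS are not directly comparable):

```python
# Back-of-the-envelope comparison of Xavier chips vs. a conventional
# petaflop supercomputer, using the figures quoted in this article.

XAVIER_TOPS = 20    # trillion deep learning ops/sec per chip
XAVIER_WATTS = 20   # power draw per chip, in watts

chips_for_petaop = 1_000 / XAVIER_TOPS           # 1 petaOP = 1,000 TOPS
power_kw = chips_for_petaop * XAVIER_WATTS / 1_000

print(f"{chips_for_petaop:.0f} Xavier chips -> 1 petaOP at {power_kw:.0f} kW")
# 50 Xavier chips -> 1 petaOP at 1 kW

# A conventional petaflop machine uses 100-500 kW (the article's range),
# so the power advantage is roughly:
print(f"{100 / power_kw:.0f}x to {500 / power_kw:.0f}x")   # 100x to 500x
```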

Nvidia will begin sampling Xavier to customers in the fourth quarter of 2017, starting with “automakers, tier 1 suppliers, start-ups, and research institutions who are developing self-driving cars.”

Several other emerging technologies will provide options for accelerated supercomputing and supercomputer-scale applications.

There will be FPGAs that are 1,000 to 10,000 times faster than regular processors for certain workloads, optical processors that will become faster and cheaper for fast Fourier transforms, and quantum annealing systems that will be faster for optimization problems.

However, a significant general-purpose computing speedup will take longer to become cheap, easy to use and generally available. The best candidates there are new computer memory that will eventually replace hard drives, and optical communication within computers. However, most people have had no need for GPU co-processors, even though GPUs have been generally available for accelerated computing for many years. The vast majority do not even max out the memory on their laptops or devices.

Neuromorphic computing will be limited to niche supercomputing and embedded intelligence applications.

Fujitsu’s view of what could accelerate computing is shown in the chart below.

New non-volatile memory, and possibly approximate computing, could provide a speedup for the laptops, tablets and smartphones the broad population uses. There should also be faster wireless communication, where everyday people will notice the improvements.

Analysts looking at computer memory are not expecting a sudden displacement of existing computer memory with new non-volatile memory.

Optical computing: a cheaper petaflop option for fast Fourier transforms in 2017, then tens of exaflops a few years later

In 2015, Optalysys built an optical computing prototype that achieved a processing speed equivalent to 320 gigaflops; it is incredibly energy efficient because it uses low-powered, cost-effective components.

The company targets one petaflop next year and 17 exaflops in 2022.
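For context on the workload, the Fourier transform is what optical systems compute natively — a lens physically performs one. A minimal NumPy sketch of the same operation on a conventional processor, with an arbitrary example input:

```python
import numpy as np

# The 2D FFT is the core operation optical systems like Optalysys
# accelerate, since a lens performs a Fourier transform natively.
signal = np.random.rand(1024, 1024)      # arbitrary example input

spectrum = np.fft.fft2(signal)           # forward 2D FFT
recovered = np.fft.ifft2(spectrum).real  # inverse transform round-trip

# Verify the round-trip reproduces the input to numerical precision.
assert np.allclose(signal, recovered)

# An n x n FFT costs O(n^2 log n) operations electronically; an optical
# system performs the transform in effectively constant time per frame.
```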

Specialized and general-purpose quantum computing

Quantum computing seems on track to become faster than classical computing for optimization problems. There is the possibility that general-purpose quantum computers could be faster starting in 2018. However, those general-purpose systems will also need special cooling and other dedicated data center accommodations to operate.
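To make the optimization niche concrete, here is a toy sketch: a brute-force classical solver for a tiny QUBO (quadratic unconstrained binary optimization) instance, the problem form that annealing hardware natively minimizes. The matrix values are invented for illustration.

```python
import itertools
import numpy as np

# Quantum annealers natively minimize QUBO problems: find the binary
# vector x that minimizes x^T Q x. Brute-force classical search is
# only feasible for tiny instances; annealers aim at large ones.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])   # invented 3-variable example

best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=len(Q)):
    x = np.array(bits)
    energy = x @ Q @ x                # quadratic objective
    if energy < best_e:
        best_x, best_e = x, energy

print(best_x, best_e)   # [1 0 1] -2.0; classical cost grows as 2^n
```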

The niche systems will all be available via cloud access, where software can submit heavy compute problems. However, offloading is only worth it where the time saved by the speedup exceeds the communication lag.
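That break-even condition is simple to state precisely. A minimal sketch, with all timing values as hypothetical placeholders:

```python
def worth_offloading(local_seconds: float,
                     remote_seconds: float,
                     round_trip_latency: float,
                     transfer_seconds: float) -> bool:
    """Offload to a cloud accelerator only if the total remote time
    (compute + data transfer + network round trip) beats local time."""
    return local_seconds > remote_seconds + round_trip_latency + transfer_seconds

# Hypothetical example: a 60 s local job that a cloud accelerator
# could run in 2 s, with 0.5 s latency and 10 s of data transfer.
print(worth_offloading(60.0, 2.0, 0.5, 10.0))   # True: offloading wins
print(worth_offloading(5.0, 2.0, 0.5, 10.0))    # False: lag dominates
```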

Going beyond exaflops will likely take new types of computing

In 2016, a new molecular mechanical nanocomputer design was proposed that would theoretically be 100 billion to 100 trillion times more energy efficient than today's supercomputers.

At the same ten-megawatt power level as some supercomputers, supercomputers built from molecular nanocomputers could achieve yottaflop (10^24 flops), brontoflop (10^27) or even geoflop (10^30) compute levels.
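A rough scaling check of that claim, assuming a baseline of about 10 gigaflops per watt (roughly the most efficient green supercomputer of late 2016; the multipliers are the design's theoretical figures):

```python
# Rough check of the compute level a 10 MW molecular mechanical
# nanocomputer could reach, using the article's efficiency multipliers.

BASELINE_FLOPS_PER_WATT = 10e9   # ~10 GF/W: assumed 2016 green-supercomputer best
POWER_WATTS = 10e6               # ten-megawatt facility, as in the article

for multiplier, label in [(1e11, "100 billion x"), (1e14, "100 trillion x")]:
    flops = BASELINE_FLOPS_PER_WATT * POWER_WATTS * multiplier
    print(f"{label}: {flops:.1e} flops")

# 100 billion x:  1.0e+28 flops  (~10 brontoflops, informal 10^27 prefix)
# 100 trillion x: 1.0e+31 flops  (~10 geoflops, informal 10^30 prefix)
```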

Ralph Merkle, Robert Freitas and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Removing the need for gears, clutches, switches and springs makes the design easier to build.

Existing designs for mechanical computing can be vastly improved upon in terms of the number of parts required to implement a complete computational system. Only two types of parts are required: links and rotary joints. Links are simply stiff, beam-like structures. Rotary joints are joints that allow rotational movement in a single plane.
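As a toy illustration of that two-part vocabulary (a hypothetical model for exposition, not the actual Merkle/Freitas design):

```python
from dataclasses import dataclass

# Toy model of the two-part vocabulary: rigid links connected by
# rotary joints that each rotate in a single plane. Purely
# illustrative; not the Merkle/Freitas design itself.

@dataclass
class Link:
    """A stiff, beam-like structural element."""
    name: str
    length_nm: float

@dataclass
class RotaryJoint:
    """A joint permitting rotation in one plane between two links."""
    a: Link
    b: Link
    angle_deg: float = 0.0   # one rotational degree of freedom

    def rotate(self, delta_deg: float) -> None:
        self.angle_deg = (self.angle_deg + delta_deg) % 360.0

# A minimal two-link mechanism: rotating the joint transmits motion,
# the primitive from which such mechanical logic would be built.
crank = Link("crank", 2.0)
rod = Link("rod", 5.0)
joint = RotaryJoint(crank, rod)
joint.rotate(90.0)
print(joint.angle_deg)   # 90.0
```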

A molecular model of a diamond-based lock, ¾ view

So a technological brute-force acceleration looks likely in the 15-to-35-year timeframe. We will at least have some improvements in deep learning and reinforcement learning. Substantial general-purpose quantum computers with trillions of qubits, and all-optical computers, will also likely be available.

SOURCES – Optalysys, Fujitsu, TechRadar, Nvidia