Niche supercomputing will have multiple acceleration options, but they will be too troublesome for regular users and their laptops

There are several emerging technologies that will provide accelerated supercomputing and handle supercomputer-scale applications.

There will be FPGAs that are 1,000 to 10,000 times faster than regular processors, optical processing that will become faster and cheaper for fast Fourier transforms, and quantum computing – quantum annealing systems that will be faster for optimization problems.

However, a significant general-purpose computing speedup will take longer to become cheap, widely available, and easy to use. The best candidates there are new computer memory that will eventually replace hard drives and optical communication within computers. Consider that GPUs have been generally available for accelerated computing for many years, yet most people cannot be bothered with GPU co-processors and have had no need for them. The vast majority do not even max out the memory on their laptops or other devices.

Neuromorphic computing will be limited to niche supercomputing or embedded intelligence applications.

Fujitsu has a view of what could accelerate computing in the chart below.

The new non-volatile memory, and possibly approximate computing, could provide a speedup for the laptops, tablets, and smartphones that the broad population uses. There should also be faster wireless communication, where everyday people will notice the improvements.

Analysts following computer memory do not expect existing memory to be suddenly displaced by the new non-volatile memory.

Fujitsu Laboratories is enabling faster solutions to computationally intensive combinatorial optimization problems, such as how to streamline distribution, improve post-disaster recovery plans, formulate economic policy, and optimize investment portfolios. It will also make possible the development of new ICT services that support swift and optimal decision-making in such areas as social policy and business, which involve complex intertwined elements.

Fujitsu says it has implemented basic optimization circuits on an FPGA that handle combinations expressible in 1,024 bits. Using a ‘simulated annealing’ process, the circuits ran 10,000 times faster than conventional processors on the thorny combinatorial optimization problems mentioned above.
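Fujitsu has not published the circuit design, but a minimal software sketch of simulated annealing on a toy binary optimization problem shows the kind of algorithm the FPGA parallelizes. The QUBO matrix, cooling schedule, and step count below are illustrative assumptions, not Fujitsu's parameters.

```python
import math
import random

def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q: x^T Q x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps=20000, t_start=5.0, t_end=0.01):
    """Plain single-threaded simulated annealing; an FPGA can evaluate many
    candidate bit flips in parallel, which is where the speedup comes from."""
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(Q, x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        x[i] ^= 1                                # propose flipping one bit
        new_energy = qubo_energy(Q, x)
        delta = new_energy - energy
        if delta <= 0 or random.random() < math.exp(-delta / t):
            energy = new_energy                  # accept the move
        else:
            x[i] ^= 1                            # reject: undo the flip
    return x, energy

# Toy 4-bit problem with an illustrative QUBO matrix; Fujitsu's prototype
# handles combinations of 1,024 bits.
Q = [[-1,  2,  0,  0],
     [ 0, -1,  2,  0],
     [ 0,  0, -1,  2],
     [ 0,  0,  0, -1]]
print(simulated_annealing(Q))
```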

The company says it will keep improving the architecture, and by fiscal year 2018 it expects “to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation”.

Optical computing will be a cheaper petaflop option for fast Fourier transforms next year, and then tens of exaflops in a few years

In 2015, Optalysys built an optical computing prototype that achieves a processing speed equivalent to 320 gigaflops, and it is extremely energy efficient because it uses low-powered, cost-effective components.

The company is targeting one petaflop next year and 17 exaflops in 2022.
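For reference, the workload behind these throughput figures is the ordinary two-dimensional fast Fourier transform, which the optical hardware evaluates in effectively a single pass through its lens stages. The NumPy sketch below shows the digital equivalent; the 1024x1024 grid size is an arbitrary assumption, not an Optalysys specification.

```python
import numpy as np

# A two-dimensional FFT, the core operation the optical system accelerates.
# The 1024x1024 grid is an arbitrary illustrative size.
grid = np.random.rand(1024, 1024)

spectrum = np.fft.fft2(grid)        # forward 2-D FFT, O(N^2 log N) on a CPU
recovered = np.fft.ifft2(spectrum)  # inverse transform

# The round trip recovers the original data (up to floating-point error).
print(np.allclose(grid, recovered.real))
```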

Specialized and general-purpose quantum computing

Quantum computing seems on track to become faster than classical computing for optimization problems. There is a possibility that general-purpose quantum computing could be faster starting in 2018. However, those general-purpose systems will also need special cooling and other special data center facilities to operate.
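Annealing hardware, whether quantum or FPGA-based, typically takes its input as a quadratic unconstrained binary optimization (QUBO) problem. The sketch below encodes max-cut on a tiny illustrative graph as a QUBO and solves it by brute force; the graph and encoding are assumptions for illustration, not tied to any particular vendor's interface.

```python
import itertools

# Max-cut on a tiny illustrative graph, encoded as a QUBO.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_value(x):
    """Number of edges crossing the partition defined by the 0/1 labels in x."""
    return sum(1 for u, v in edges if x[u] != x[v])

def qubo_energy(x):
    """Per edge, -x_u - x_v + 2*x_u*x_v equals -1 when the edge is cut and 0
    otherwise, so minimizing this energy maximizes the cut."""
    return sum(-x[u] - x[v] + 2 * x[u] * x[v] for u, v in edges)

# Brute force is fine at 4 bits; annealers target hundreds of thousands of bits.
best = min(itertools.product([0, 1], repeat=n), key=qubo_energy)
print(best, cut_value(best))
```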

The niche systems will all be available via cloud access, where software could submit heavy compute problems. However, offloading will only be worth it when the time saved by the faster hardware exceeds the communication lag.
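A back-of-the-envelope check makes the trade-off concrete. All the numbers below are invented for illustration: offloading to a remote accelerator pays off only when the remote runtime plus the round-trip lag is less than running the job locally.

```python
def offload_pays_off(local_seconds, speedup, round_trip_seconds):
    """Offloading is worthwhile only if remote compute time plus communication
    lag beats running the job locally. All inputs are illustrative."""
    remote_seconds = local_seconds / speedup
    return remote_seconds + round_trip_seconds < local_seconds

# A 10-minute local job with a 10,000x accelerator and 2 s of network lag pays off.
print(offload_pays_off(600, 10_000, 2.0))   # True
# A half-second local job with the same accelerator and lag does not.
print(offload_pays_off(0.5, 10_000, 2.0))   # False
```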

SOURCES – Optalysys, Fujitsu, TechRadar