D-Wave Systems' next quantum chip will be 1,000X faster and will revolutionize machine learning

D-Wave's next quantum chip, due in 2017, will handle 2,000 qubits, double the number of usable qubits in the existing D-Wave 2X system chip. It will be capable of solving certain problems 1,000x faster than its predecessor.

The new processor will also support additional features that allow for more efficient calculations.

"From internal tests, that looks like a really good thing to do. We've got some problems we've already sped up by a factor of 1,000 by exploiting that capability," said D-Wave's Colin Williams at the CW TEC conference in Cambridge.

D-Wave machines are designed to tackle a specific task known as quadratic unconstrained binary optimization (QUBO), as well as related sampling problems.
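To make the task concrete, here is a minimal sketch of what a QUBO problem is: find the binary assignment that minimizes a quadratic energy function. This is illustrative only (it is not D-Wave's API), and it solves the problem by brute force, which scales exponentially — which is exactly why hardware that samples low-energy states directly is interesting.

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy of binary assignment x under a QUBO given as a dict (i, j) -> weight."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def brute_force_qubo(Q, n):
    """Exhaustively find a lowest-energy assignment of n binary variables."""
    return min(product([0, 1], repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy problem: reward x0 and x1 individually, penalize choosing both.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
best = brute_force_qubo(Q, 2)  # either (0, 1) or (1, 0), with energy -1
```

An annealer attacks the same objective, but instead of enumerating all 2^n assignments it returns samples concentrated around the low-energy states.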

The specialized work that the D-Wave processor can carry out could be useful in a number of areas, according to D-Wave, particularly training machine learning models.

However, significant engineering obstacles still stand in the way of the larger challenge of building a universal quantum computer. Extrapolating from past trends in chip development, John Morton, professor of nanoelectronics and nanophotonics at UCL, predicts the first universal quantum processor won't be built until the 2030s.

Google ran tests in 2015 showing that D-Wave's quantum annealing system can be 100 million times faster than classical computers on a D-Wave-specific optimization problem. Google's Hartmut Neven acknowledged at the end of his blog post that other classical algorithms can still beat the D-Wave machine, but the Google team thinks this advantage will disappear as quantum computers get larger. The next chip will be 1,000x faster than the 2015 chip.

According to Williams, the tests demonstrated the future viability of D-Wave's chips. The experiments showed that quantum tunneling really was occurring in the D-Wave chip, and that even if the range of that tunneling is finite, it is still a useful computational resource.

As D-Wave evolves the design of its chips to make them more densely connected, the classical algorithms that currently work well on these optimization problems will completely fall apart.

D-Wave processors have been used in the financial sector for trading-trajectory optimization, for protein-folding calculations in bioscience, to create filters for lists that never miss a potential match — useful for security services checking terrorist watchlists — to develop binary classifiers in AI, and for computer vision.

Unsupervised machine learning is where the D-Wave processor is expected to make the greatest impact, perhaps explaining Google’s interest in the technology.

D-Wave has already experimented with machine learning on the chip, setting up a Boltzmann machine, a type of stochastic recurrent neural network, as well as a "Quantum Boltzmann machine", which Williams said is "fundamentally different from previous machine learning models".
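For context, a classical Boltzmann machine assigns each binary state an energy and samples states with probability proportional to exp(-E/T). The tiny sketch below computes that distribution exactly for a two-unit network; the weights, biases, and function names are illustrative, not D-Wave's implementation. Training such a model means matching these probabilities to data statistics, and the appeal of an annealer is that it can draw the required samples natively instead of via slow Markov-chain sampling.

```python
import math
from itertools import product

def energy(s, w, b):
    """Energy of binary state s: negated sum of pairwise couplings and biases."""
    pair = sum(w[i][j] * s[i] * s[j]
               for i in range(len(s)) for j in range(i + 1, len(s)))
    return -pair - sum(bi * si for bi, si in zip(b, s))

def boltzmann_distribution(w, b, T=1.0):
    """Exact Boltzmann distribution over all states of a tiny network."""
    states = list(product([0, 1], repeat=len(b)))
    weights = [math.exp(-energy(s, w, b) / T) for s in states]
    Z = sum(weights)  # partition function
    return {s: wt / Z for s, wt in zip(states, weights)}

# Two units with a positive coupling: the (1, 1) state is lowest-energy,
# so it should carry the most probability mass.
w = [[0.0, 1.5], [0.0, 0.0]]
b = [0.2, 0.2]
dist = boltzmann_distribution(w, b)
```

Exact enumeration like this only works for a handful of units; for realistic model sizes the partition function is intractable, which is where hardware sampling comes in.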

Williams doesn't see the D-Wave chip, or other quantum processors that might follow, as replacing classical computer chips, but as working alongside them.

Beyond the 2000 qubit processor, Williams says that D-Wave has a design for a “next-generation chip” with a “fundamentally new topology, based on all the lessons we’ve learnt”.

On previous D-Wave chips, users could only follow a single annealing trajectory. The new chip exposes more control parameters, allowing more control over that trajectory:

  • They can pause the anneal
  • They can anneal at different speeds
  • They can probe the quantum state in the middle of the anneal, a critical feature for Quantum Boltzmann Machines
  • They can anneal faster: previous generations could only anneal down to 20 microseconds, while the new system can anneal in five microseconds
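The controls above can all be expressed as a piecewise-linear schedule of (time, s) points, where s is the anneal fraction from 0 to 1 — a flat segment is a pause, a steeper segment is a faster anneal. The sketch below validates such a schedule; the representation mirrors how D-Wave's later tooling describes custom anneal schedules, but the function and constraints here are an illustrative assumption, not the vendor API.

```python
def validate_schedule(points):
    """Check a piecewise-linear anneal schedule given as (time_us, s) points:
    time must strictly increase, and s must stay in [0, 1] without decreasing
    (a flat segment in s is a pause; a steeper segment anneals faster)."""
    times = [t for t, _ in points]
    svals = [s for _, s in points]
    assert times == sorted(times) and len(set(times)) == len(times), "time must strictly increase"
    assert all(0.0 <= s <= 1.0 for s in svals), "s must stay in [0, 1]"
    assert all(s2 >= s1 for s1, s2 in zip(svals, svals[1:])), "forward anneal only"
    return True

# A 5-microsecond anneal with a pause at s = 0.5 between 2 us and 3 us,
# combining three of the controls listed above (pause, variable speed, fast anneal).
fast_anneal_with_pause = [(0.0, 0.0), (2.0, 0.5), (3.0, 0.5), (5.0, 1.0)]
validate_schedule(fast_anneal_with_pause)
```

Mid-anneal readout (the third bullet) is a hardware capability rather than a schedule shape, so it is not captured by this sketch.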

SOURCES: TechRepublic, D-Wave Systems
