In tests last September, an independent researcher found that for some types of problems the D-Wave quantum computer was 3,600 times faster than a traditional Intel quad-core workstation (2.4 GHz quad-core chips with 16 GB of memory and about 420 GFlops). According to a D-Wave official, the machine performed even better in Google’s tests, which involved 500 variables with different constraints. “The tougher, more complex ones had better performance,” said Colin Williams, D-Wave’s director of business development. “For most problems, it was 11,000 times faster, but in the more difficult 50 percent, it was 33,000 times faster. In the top 25 percent, it was 50,000 times faster.”

So for 25% of Google’s optimization problems, the D-Wave system was roughly 50,000 times faster than a 420 GFlop system. That is approximately equivalent to 21 petaflops.
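The arithmetic behind that figure can be checked in a few lines. Note that multiplying a speedup factor by the baseline's FLOPS is only a rough heuristic for "effective throughput," not a rigorous measure of quantum performance:

```python
# Back-of-envelope check of the effective-throughput claim.
baseline_gflops = 420        # quad-core workstation, per the tests
speedup = 50_000             # top 25% of Google's problems

effective_gflops = baseline_gflops * speedup
effective_petaflops = effective_gflops / 1_000_000  # 1 PFlop = 1e6 GFlops
print(effective_petaflops)   # → 21.0
```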

So for some optimization problems of commercial value, the D-Wave quantum computer is faster than any existing classical supercomputer.

A summary of the quantum computer versus classical system speed tests done in September 2012 is given here.

Quantum hardware (QA) and Blackbox are compared to three conventional software solvers: CPLEX, METSlib tabu search (TABU), and a branch-and-bound solver called Akmaxsat (AK). The solvers are evaluated using instances of three NP-hard problems: Quadratic Unconstrained Binary Optimization (QUBO), Weighted Maximum 2-Satisfiability (W2SAT), and the Quadratic Assignment Problem (QAP).
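To make the first of those benchmark classes concrete: a QUBO instance asks for the binary vector x minimizing the energy x^T Q x. A minimal brute-force sketch (with a made-up toy Q matrix, not one of the actual benchmark instances) looks like this:

```python
from itertools import product

# Toy upper-triangular QUBO matrix (illustrative, not a benchmark instance).
Q = [
    [-1,  2,  0],
    [ 0, -1,  2],
    [ 0,  0, -1],
]

def qubo_energy(x, Q):
    """Energy x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Exhaustive search over all 2^3 binary assignments.
best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # → (1, 0, 1) -2
```

Brute force scales as 2^n, which is exactly why heuristic solvers like tabu search and annealing hardware are used for instances of nontrivial size.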

439-qubit and 502-qubit systems were compared against a quad-core system.

**Harder optimization problems for classical computers show greater speedup on D-Wave**

The D-Wave system consistently solves problems in a few microseconds, with about one second of wall-clock time to load the problem, solve it, and extract the result. So if a problem takes 100 hours on a fast classical system, the speedup is larger than for a version of the problem that takes 60 minutes: going from 60 minutes to 1 second is a 3,600-times speedup, while going from 100 hours to 1 second is a 360,000-times speedup.
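Those speedup factors follow directly from the assumption of a roughly constant one-second wall-clock time on the D-Wave side:

```python
# Speedup factors under the assumption of a fixed ~1 second
# wall-clock time on the D-Wave side, regardless of difficulty.
dwave_seconds = 1

classical_60_min = 60 * 60           # 3,600 s
classical_100_hours = 100 * 60 * 60  # 360,000 s

print(classical_60_min / dwave_seconds)     # → 3600.0
print(classical_100_hours / dwave_seconds)  # → 360000.0
```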

D-Wave is on track to reach eight thousand qubits by about 2017.

**A chart seems to show that as qubits are added, the solving time stays at about 1 second**

*For Google’s problems the speedup at 512 qubits is 50,000 times for 25% of the problems; it is not ten billion times faster, a figure that applies to a different particular benchmark. The benchmark in the chart had a projected speedup of 10,000 times going from 400 qubits to 500 qubits. Testing showed a 3 to 6 times speedup going from 439 qubits to 502 qubits for one set of optimization problems. The scaling might be about ten to twenty times for the Google machine learning algorithms.*

So it will be best to feed future quantum computers hard problems whose classical solution times scale rapidly, from over one hour to years to effectively impossible on classical computers, even if quantum computers prove able to solve any problem that can be expressed and loaded into roughly 8,000 qubits faster than any classical computational system.

There are adiabatic quantum algorithms that are neuromorphic.

* Quantum systems will be useful for breaking down hard problems and providing proven, solved answers as saved solutions for classical systems

* Once 2,000-10,000 qubit systems prove a massive speedup over any supercomputer for certain classes of useful problems, there will be many sales and much more investment in quantum computers. This could mean, say, a few billion dollars to rapidly scale D-Wave’s superconducting adiabatic system to a full wafer of qubits using more advanced lithography. This would still likely take a few years, and I think it would be a leap up to about a few million qubits.

There are other approaches to quantum computers using quantum dots, which would likely have even greater scaling potential.

The number of qubits will still be a limiting factor in determining where quantum computers are used. Certain algorithms could have theoretical speedup but may not be useful until there are billions or trillions of qubits.

For example, an adiabatic version of Grover’s algorithm could enable database searches in time proportional to the square root of N, where N is both the number of qubits and the number of items being searched.
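The gap between linear and square-root scaling is easy to tabulate. The snippet below just compares the query counts (classical unstructured search needs on the order of N lookups, Grover-style search on the order of sqrt(N) oracle calls); it is not a quantum simulation:

```python
import math

# Classical unstructured search: ~N lookups in the worst case.
# Grover-style search: ~sqrt(N) oracle queries.
for n_items in (1_000, 1_000_000, 1_000_000_000):
    classical = n_items           # worst-case classical lookups
    grover = math.isqrt(n_items)  # ~ sqrt(N) oracle queries
    print(f"N={n_items:>13,}  classical~{classical:,}  grover~{grover:,}")
```

At a billion items the square-root scaling cuts the query count from a billion to about 31,622, which is why the number of qubits needed (N, on the adiabatic formulation described above) becomes the binding constraint long before query count does.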

There are ways to get around limitations in qubits by mathematically breaking a larger problem into sub-problems that need fewer qubits and solving the sub-problems serially.
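One decomposition strategy along these lines is block coordinate descent: optimize a small block of variables at a time while holding the rest fixed, so each sub-problem fits in fewer "qubits." This is a hedged sketch of the idea on a toy QUBO (the brute-force sub-solver here stands in for the quantum hardware; the Q matrix and block schedule are made up for illustration):

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def solve_block(x, Q, block):
    """Brute-force only the variables in `block`; others held fixed.
    A real system would hand this small sub-problem to the solver."""
    best = list(x)
    for bits in product([0, 1], repeat=len(block)):
        trial = list(x)
        for idx, b in zip(block, bits):
            trial[idx] = b
        if qubo_energy(trial, Q) < qubo_energy(best, Q):
            best = trial
    return best

# Toy 4-variable instance; each sub-problem only touches 2 variables.
Q = [[-2, 1, 1, 0],
     [ 0,-2, 1, 1],
     [ 0, 0,-2, 1],
     [ 0, 0, 0,-2]]
x = [0, 0, 0, 0]
for block in ([0, 1], [2, 3], [0, 1]):  # sub-problems solved serially
    x = solve_block(x, Q, block)
print(x, qubo_energy(x, Q))  # → [1, 1, 0, 1] -4
```

For this small instance the serial block passes reach a global optimum, though in general such decompositions only guarantee a local one and are typically iterated until the energy stops improving.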

*If you liked this article, please give it a quick review on ycombinator or StumbleUpon. Thanks*