Adiabatic Quantum Computing and Dwave

After seeing the Dwave demo and hearing more about how it works, a paper by Seth Lloyd now makes more sense to me.

Seth says:
Adiabatic quantum computation is a recently proposed, general approach to solving NP-hard combinatorial minimization problems. It consists of constructing a set of qubits with a time-dependent Hamiltonian Ĥ(t) whose starting point Ĥ_s has a ground state that is quickly reachable simply by cooling and whose final point Ĥ_p has couplings that encode the cost scheme of a desired minimization problem. The name “adiabatic” comes from the fact that if the qubits are initialized in the ground state of Ĥ_s and if Ĥ(t) is varied slowly enough, then the qubits will overwhelmingly be in the ground state of Ĥ(t) at all times t, thus in principle completely bypassing the usual concern about local minima in Ĥ_p confounding the search for the problem’s solution.

Later in the paper (pg 6) he indicates how the AQC is resistant to decoherence and how results are still good so long as the quantum coherence is dominant (i.e., not there all of the time but most of the time).
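To make the Ĥ(t) sweep concrete, here is a toy two-qubit simulation (my own sketch with made-up Hamiltonians, not anything from Dwave's hardware): the state starts in the easy ground state of a transverse-field Ĥ_s and, if the sweep is slow enough, ends up concentrated on the lowest-cost state of the problem Hamiltonian Ĥ_p.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit adiabatic evolution (illustration only).
# H_s: transverse field whose ground state is the uniform superposition.
# H_p: diagonal "problem" Hamiltonian; its lowest entry encodes the answer.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
H_s = -(np.kron(X, I2) + np.kron(I2, X))
H_p = np.diag([3.0, 1.0, 2.0, 0.0])   # costs for states 00, 01, 10, 11

T, steps = 50.0, 500                  # bigger T = slower sweep = more adiabatic
dt = T / steps

# Start in the ground state of H_s (lowest eigenvector).
psi = np.linalg.eigh(H_s)[1][:, 0].astype(complex)

for k in range(steps):
    s = (k + 0.5) / steps             # interpolation parameter runs 0 -> 1
    H = (1 - s) * H_s + s * H_p       # H(t) sweeps from H_s to H_p
    psi = expm(-1j * H * dt) @ psi    # one small step of Schrodinger evolution

# If the sweep was slow enough, probability piles up on state 11 (cost 0).
print(np.abs(psi) ** 2)
```

Shrink T in this toy and the final state smears across the higher-cost states, which is the "varied slowly enough" condition in action.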

Dwave has a four-by-four grid of superconducting loops that have connections/couplings. The loops are the vertices of the grid.
They set the initial state (the variables for the problem) using induction from wires near the superconducting loops and couplings. The electrical feed to those wires needs to be precisely controlled, and the whole system needs to be as clean from interference as possible. Thus it is in a shielded room, and the electrical feeds pass through heavy-duty filtering to keep the electrical and magnetic signals precise and free from corruption.
Cooling causes the system to settle at or near the answer state. The answer is then read out.
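To sketch what "the answer state" means here, the following toy code builds an Ising-style cost function on a 4x4 grid with hypothetical random couplings (in the real machine the couplings are set to encode a chosen problem) and brute-forces the lowest-energy spin assignment that the anneal is supposed to settle into.

```python
import itertools
import numpy as np

# A 4x4 grid of spins s = +/-1 (one per loop) with hypothetical random
# nearest-neighbour couplings J; illustration only, not Dwave's calibration.
n = 4
rng = np.random.default_rng(0)
J = {}
for i, j in itertools.product(range(n), range(n)):
    if j + 1 < n:
        J[(i, j), (i, j + 1)] = rng.choice([-1.0, 1.0])
    if i + 1 < n:
        J[(i, j), (i + 1, j)] = rng.choice([-1.0, 1.0])

sites = list(itertools.product(range(n), range(n)))

def energy(spins):
    # Ising cost: E = sum over coupled pairs of J_ab * s_a * s_b.
    return sum(c * spins[a] * spins[b] for (a, b), c in J.items())

# Brute force over all 2^16 assignments to find the "answer state"
# the cooling is supposed to settle into.
best = min((dict(zip(sites, cfg))
            for cfg in itertools.product([-1, 1], repeat=n * n)),
           key=energy)
print("ground-state energy:", energy(best))
```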

So the Adiabatic Quantum Computer (AQC), as noted by Seth Lloyd, is naturally resistant to decoherence and dephasing.

Dwave’s specialized qubits (it is still an open question how much quantumness there is and whether it stays dominant as the system scales up) avoid the error correction problem by running the same problem 100 times and polling the results. The most popular answer is taken, and in their experience it has been the right answer.
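The repeat-and-poll idea is easy to sketch (hypothetical code; run_once is just a stand-in for one full anneal-and-readout cycle):

```python
from collections import Counter

def most_popular_answer(run_once, trials=100):
    # Run the same problem many times and take the most common readout.
    tally = Counter(run_once() for _ in range(trials))
    answer, count = tally.most_common(1)[0]
    return answer, count / trials
```

If each individual run returns the right answer even, say, 70% of the time, 100 trials make the majority answer almost certainly correct.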

The AQC's need to let the system settle to an answer more "slowly" seems to mean that, instead of the exponential speedup that "better qubits" could give in some cases, they get a quadratic speedup.
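To put a rough number on what a quadratic speedup still buys, using purely illustrative figures of my own:

```python
import math

# Illustrative numbers only: for a search space of 2^40 candidates,
# a quadratic speedup means time ~ sqrt(N) instead of ~ N.
N = 2 ** 40
print(f"classical steps:  {N:.1e}")             # ~1.1e12
print(f"quadratic steps:  {math.sqrt(N):.1e}")  # ~1.0e6
```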

So the system is 100 times slower because of the multiple runs, and 100 times slower again in setup time, compared to an optimal QC system.

However, if the quantum computer is a million or a billion times faster for a particular problem than a classical computer, then Dwave will still have a performance advantage. Plus they could refine their process to reduce the time devoted to setup, and the setup time will be a smaller fraction as they scale up to bigger systems. Later versions might also cut down the redundant runs needed for error polling.
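As a back-of-the-envelope check with made-up numbers:

```python
# Illustrative arithmetic, not a benchmark: a 10^6x per-run speedup
# survives a 100x repeat penalty and a 100x setup penalty.
per_run_speedup = 1e6
overhead = 100 * 100
print(f"net advantage: ~{per_run_speedup / overhead:.0f}x")  # ~100x
```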

2009 is the timeframe for adjustments to the Dwave qubits to allow simulations that will help develop molecular nanotech.

In 2008, when they have 512- and 1024-qubit systems, it will be apparent whether the speedup in problem solving clearly develops.

If someone else were to get one of the other approaches to quantum computers working with universal qubits, then that system would probably be superior to a Dwave system with the same number of qubits. However, the other solutions are still taking far longer to build and seem likely to lag in the number of qubits. Dwave, as they refine their qubits, could close that gap during their multi-year head start. Plus, early Dwave quantum computers could be used to design better quantum computers.
