D-Wave puts out a background summary of their Orion quantum computer

A D-Wave Systems PDF that has questions and answers about their quantum computer and their plans for it

Some more useful answers from the comments and discussion:

“For the algorithms you demonstrated, how many times do you have to run the quantum computer to get the right answer?”

Generally the success rates are around 90% for 4-vertex maximum independent set (MIS) problems and around 85% for 6-vertex MIS problems.
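Those per-run success rates translate directly into how many repetitions are needed. As a quick repeat-until-success calculation (our arithmetic, not a figure from the source):

```latex
% Probability that at least one of n independent runs succeeds, with per-run success p:
P_{\text{success}} = 1 - (1 - p)^n
% Runs needed to reach confidence c:
n \ge \frac{\ln(1 - c)}{\ln(1 - p)}
% Example: p = 0.85, c = 0.999 gives n \approx 3.6, so 4 runs suffice.
```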

“Do you understand why it is scaling in the manner it is scaling?”

The main issue now is probably calibration of the machine-language numbers (the coupler and qubit bias numbers), although we’re going to have to build significantly bigger systems to get enough information to dig into this issue.

“How did you determine that quantum effects are responsible for the operation of your processor?”

I wouldn’t characterize them as “responsible”. They are definitely involved, in the sense that their presence changes the behaviour of the machine versus the case where they are absent. One of the characterization experiments is the macroscopic resonant tunneling (MRT) experiment discussed at the APS meeting earlier this month. There are a few others.

More answers were provided to commenter room408:

“Can you outline the main technical challenges that must be overcome before a system such as Orion is scalable and universal?”

There are several aspects of the current design that aren’t scalable to the levels we want to be able to achieve (thousands to millions of qubits). The next processor generation, which we’ll release Q4/2007, has planned fixes to all of these. Whether or not the redesigned processor elements do the job of course remains to be seen.

Re. universality, the Hamiltonian of the current system isn’t universal, in the sense that arbitrary states can’t be encoded in its ground state. The Hamiltonian is of the form X+Z+ZZ. It’s known that adding another type of coupling device to give something like X+Z+ZZ+XZ (for example) allows for universal state encoding in the ground state.
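To make that shorthand concrete, here is a minimal sketch of the Hamiltonians the X/Z/ZZ notation presumably refers to (the sums and coefficient names are illustrative assumptions, not taken from the source):

```latex
% Current form: transverse field (X), local bias (Z), Ising coupling (ZZ)
H = \sum_i \Delta_i X_i + \sum_i h_i Z_i + \sum_{i<j} J_{ij} Z_i Z_j
% Adding an XZ coupling device is one route to universal ground-state encoding
H' = H + \sum_{i<j} K_{ij} X_i Z_j
```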

Of course this isn’t enough; making the system we’ve got now universal is very hard. The question is whether or not it’s worth trying to do this. Ultimately it comes down to the potential value of applications requiring resources the current type of chip can’t provide. It could be that for certain quantum chemistry applications the value of modifying the chip design to get closer to universal QC would be worth the effort, although I see a big opportunity in just sticking to discrete optimization problems.
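To illustrate the discrete-optimization framing, here is a minimal sketch of encoding a small MIS instance as a quadratic binary minimization, the kind of problem annealing hardware is meant to solve. The graph, penalty weight, and brute-force solver below are hypothetical choices for illustration; the source does not describe D-Wave’s actual programming interface.

```python
from itertools import product

# Hypothetical 4-vertex instance: a square graph, whose MIS has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, penalty = 4, 2.0  # penalty > 1 so violating an edge never pays off

def qubo_energy(x):
    # Minimize: -(vertices selected) + penalty * (edges with both endpoints selected)
    return -sum(x) + penalty * sum(x[i] * x[j] for i, j in edges)

# Brute-force the ground state; the quantum hardware would anneal toward it instead.
best = min(product((0, 1), repeat=n), key=qubo_energy)
print(best, int(-qubo_energy(best)))  # (0, 1, 0, 1): independent set {1, 3}, size 2
```

The ground state of this energy function is exactly a maximum independent set, which is why a machine whose Hamiltonian contains Z and ZZ terms can attack such problems directly.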

“Given the unabated advancement of classical computing technology, and given that Orion doesn’t appear to enable the efficient solution of problems that can’t already be solved efficiently by a classical computer, for what problem size do you expect that a system such as Orion will be able to outperform the fastest classical computer? When do you expect to achieve this?”

This is a really hard question to answer. Our best guess is that the ability to solve 256-variable integer programming problems in hardware will be close to break-even for certain instance classes.

There are two tough problems to solve in developing a predictive model to answer performance questions. The first is that since this type of system is a “hardware heuristic” there will definitely be instance-dependence, and we don’t yet know what the instances best suited to the system look like. The second is that there’s no way to predict the scaling advantage from having the system be quantum mechanical. You can look at general arguments to ascertain bounds on performance, but how the machine will function in practice is an entirely different problem. Our attitude is that it’s at least as tough to develop realistic models and solve them as it is to actually build real hardware, so we focus on building real hardware and having very fast redesign cycles.

I think also that because the approach we’ve taken can be pushed far past the projected cross-over point (up to, say, a million qubits, which should be able to encode tens of thousands of variables), even if we’re wrong about where the cross-over point is, we can continue building bigger and bigger systems.
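For a rough sense of the implied encoding overhead (our back-of-the-envelope reading of “tens of thousands”, not a ratio given in the source):

```latex
\frac{10^6 \text{ qubits}}{2 \times 10^4 \text{ variables}} \approx 50 \text{ physical qubits per variable}
```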