D-Wave Systems closed a $17M financing round at the end of January 2008. From the company's announcement: "These funds will be used primarily to push the level of integration of our chips into the low thousands of qubits by the end of the year. In parallel with this central effort we will be running experiments on smaller systems to map out features of these systems important to their operation as quantum computers."
In November 2007, the latest iteration of D-Wave's chip was 28 qubits (quantum bits). CTO Geordie Rose said they were on track to show a 512 qubit machine and then a 1024 qubit machine by the end of 2008. The die has room for a million qubits. This new announcement seems to imply 2000-4000 qubits by the end of 2008: "low thousands of qubits by the end of the year."
This was the number 1 item on my list of technologies to watch in 2008. It ranks ahead of Bussard fusion because the Bussard fusion prototype could answer many of the questions about that potentially bigger technology, but not all of the issues will be answered until a commercial fusion system is funded, built, and operating. By the end of 2008, the D-Wave quantum computer should be operating (or not) at a commercial level. Thousands of qubits would have applications where it should be clearly superior to any conventional computer system.
2-5 months until the Q2 2008, 512 qubit machine.
9-11 months until a Q4 2008, 2000-4000 qubit machine.
That will be great to see.
Hopefully by 2009-2010 we can fill out that die and get up to 1 million qubits and start transforming business and science.
UPDATE: Is it quantum computing?
Scott Aaronson, a D-Wave critic, has finally met Geordie Rose, CTO of D-Wave. They met at MIT, where Geordie presented four hard problems he wanted MIT's help in solving.
These problems were as follows:
1. Find a practical adiabatic factoring algorithm. Because of the equivalence of adiabatic and standard quantum computing, we know that such an algorithm exists, but the running time you get from applying the reduction is something like O(n^11). Geordie asks for an O(n^3) factoring algorithm in the adiabatic model. It was generally agreed (with one dissent, from Geordie) that reducing factoring to a 3SAT instance, and then throwing a generic adiabatic optimization algorithm at the result, would be a really, really bad approach to this problem.
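To put the gap between O(n^11) and O(n^3) in perspective, here is a back-of-the-envelope sketch. It is purely illustrative: constants and lower-order terms are ignored, and the exponents are taken from the discussion above.

```python
# Idealized operation counts for an n-bit factoring input.
# Purely illustrative: real running times depend on constants and
# machine details that are ignored here.

def ops(n_bits: int, exponent: int) -> int:
    """Operation count n^exponent for an n-bit input, constants dropped."""
    return n_bits ** exponent

for n in (128, 1024, 2048):
    generic = ops(n, 11)  # cost implied by the generic equivalence reduction
    target = ops(n, 3)    # the O(n^3) algorithm Geordie asked for
    print(f"n = {n:4d} bits: generic/target ratio = {generic // target:.2e}")
```

For a 2048-bit input the ratio is n^8, roughly 3 x 10^26, which is why an algorithm obtained from the generic reduction is considered impractical even though it exists.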
2. Find a fault-tolerance threshold for adiabatic quantum computing, similar to the known threshold in the circuit model. Geordie asserted that such a threshold has to exist, because of the equivalence of adiabatic and standard quantum computing. However, others immediately pointed out that this is not so: the equivalence theorem is not known to be “fault-tolerance-preserving.” This is a major open problem that many people have worked on without success.
3. Prove upper and lower bounds on the adiabatic algorithm’s performance in finding exact solutions to hard optimization problems.
4. Prove upper and lower bounds on its performance in finding approximate solutions to such problems. (Ed Farhi described 3 and 4 as “so much harder than anything else we’ve failed to solve.”)
Scott is leaving himself an out in case D-Wave's system works in 2008. Scott says:
Even if D-Wave managed to build (say) a coherent 1,024-qubit machine satisfying all of its design specs, it’s not obvious it would outperform a classical computer on any problem of practical interest. This is true both because of the inherent limitations of the adiabatic algorithm, and because of specific concerns about the Ising spin graph problem. On the other hand, it’s also not obvious that such a machine wouldn’t outperform a classical computer on some practical problems. The experiment would be an interesting one! Of course, this uncertainty — combined with the more immediate uncertainties about whether D-Wave can build such a machine at all, and indeed, about whether they can even produce two-qubit entanglement…
Scott also shows that he still does not understand business. The same quote continues:
…also means that any talk of “lining up customers” is comically premature
Geordie Rose responded with two points. The first is that there are already buyers and sellers of quantum computers for research (Bruker NMR machines) and D-Wave's systems are already much more useful and interesting than these.
The second is that we expect that even for fairly small systems (~1,000 qubits, which we plan to do this year) this type of special purpose hardware can beat the best known classical approaches for instance classes where the class embeds directly onto the hardware graph, even if the “spins” are treated entirely classically, which we assume is a worst-case bound. Often forgotten in this type of conversation is the fact that there is a long history of simple special purpose analog hardware outperforming general purpose machines. If you want an example, look at Condon and Ogielski’s 1985 Rev. Sci. Instrum. article — their Ising model simulator beat the fastest Cray of the time in Monte Carlo steps/second. You can’t draw conclusions about the general utility of this type of approach without looking at details.
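For readers unfamiliar with what an Ising Monte Carlo simulator actually does, here is a minimal textbook Metropolis sketch in Python. This is a generic illustration of the "Monte Carlo steps" being counted, not a model of Condon and Ogielski's hardware or of D-Wave's chip.

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L Ising lattice with periodic
    boundaries and ferromagnetic coupling J = 1; beta = 1/kT."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest-neighbour spins.
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1  # accept the flip

rng = random.Random(0)
L = 16
spins = [[1] * L for _ in range(L)]       # start fully magnetized
for _ in range(200):                      # 200 sweeps = 200 * L * L spin updates
    metropolis_sweep(spins, beta=1.0, rng=rng)
m = abs(sum(map(sum, spins))) / (L * L)   # magnetization per spin
print(f"|m| after 200 sweeps at beta=1.0: {m:.2f}")
```

A general purpose machine spends most of its time on addressing and bookkeeping around this inner loop; special purpose hardware wins by doing exactly this update, and nothing else, massively in parallel.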
I would note that it is standard business practice to pre-sell tickets to things that are not complete and may or may not work.
Examples: the Aptera electric car, the Tesla electric car, Toyota Prius sign-up lists, music concerts, Microsoft software subscriptions sold with the understanding that there will be major software upgrades (yet Vista and other major upgrades were delayed for years), and Virgin Galactic, which has presold hundreds of tickets for its yet-to-be-completed sub-orbital rocket.