D-Wave Quantum Annealing 1000-qubit chip and other improvements

Wired provides a thorough history of the D-Wave quantum annealing computer, along with an update on the 1000-qubit processor and other improvements to the system.

D-Wave’s 512-qubit chip was tested as being 3,600 times faster than some off-the-shelf optimizer software running on a regular workstation, even when compared against a customized optimization algorithm.

What about D-Wave’s speed issues? “Calibration errors,” Geordie Rose (CTO of D-Wave) says. Programming a problem into the D-Wave is a manual process, tuning each qubit to the right level on the problem-solving landscape. If you don’t set those dials precisely right, “you might be specifying the wrong problem on the chip,” Rose says. As for noise, he admits it’s still an issue, but the next chip—the 1,000-qubit version codenamed Washington, coming out this fall—will reduce noise further. D-Wave plans to replace the niobium loops with aluminum to reduce oxide buildup. “I don’t care if you build [a traditional computer] the size of the moon with interconnection at the speed of light, running the best algorithm that Google has ever come up with. It won’t matter, ’cause this thing will still kick your ass,” Rose says. Then he backs off a bit. “OK, everybody wants to get to that point—and Washington’s not gonna get us there. But Washington is a step in that direction.”
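
Rose’s remark about “tuning each qubit to the right level” refers to setting the biases and couplings of an Ising-model energy function, which is how problems are posed to an annealer of this kind. Below is a minimal Python sketch (not D-Wave’s actual programming interface, and with made-up values for the biases and couplings) that defines a tiny Ising instance and finds its lowest-energy spin assignment by brute force, just to show what “specifying the problem on the chip” means.

```python
from itertools import product

# A tiny Ising problem: minimize
#   E(s) = sum_i h[i]*s[i] + sum_(i,j) J[i,j]*s[i]*s[j],  with s[i] in {-1, +1}.
# On an annealer, h and J are the per-qubit biases and coupler strengths that
# get "dialed in"; set them imprecisely and the chip is solving a slightly
# different problem than the one you meant. (Illustrative values only.)
h = {0: 0.5, 1: -0.3, 2: 0.2}       # per-qubit biases
J = {(0, 1): -1.0, (1, 2): 0.8}     # pairwise couplings

def energy(spins):
    e = sum(h[i] * spins[i] for i in h)
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute-force search over all 2^n spin assignments (fine for 3 spins; the
# annealer exists because this search blows up exponentially for large n).
best = min(product([-1, 1], repeat=len(h)), key=energy)
print("ground state:", best, "with energy", energy(best))
```

In this picture, a calibration error simply means the h and J values realized on the hardware drift away from the ones you asked for, so the machine ends up minimizing the wrong energy function.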

Troyer and Lidar say D-Wave doesn’t have enough “coherence time.” For some reason its qubits aren’t qubitting—the quantum state of the niobium loops isn’t sustained.

One way to fix this problem, if indeed it’s a problem, might be to have more qubits running error correction. Lidar suspects D-Wave would need another 100—maybe 1,000—qubits checking its operations (though the physics here are so weird and new, he’s not sure how error correction would work). “I think that almost everybody would agree that without error correction this plane is not going to take off,” Lidar says.
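
One natural approach for an annealer is to trade physical qubits for reliability by encoding each protected qubit redundantly. The exact scheme for this hardware is, as Lidar says, still an open question, but the overhead arithmetic is easy to sketch. The snippet below is a rough illustration with assumed parameters (a few redundant copies plus a penalty qubit per protected qubit) showing how quickly “another 100, maybe 1,000” qubits would get consumed.

```python
# Back-of-the-envelope overhead for a repetition-style encoding on an annealer
# (assumed scheme and numbers; as noted above, nobody is yet sure how error
# correction would actually work on this hardware).
def physical_qubits(logical_qubits, data_copies=3, penalty_qubits=1):
    """Physical qubits needed if each logical qubit is encoded into
    `data_copies` data qubits plus `penalty_qubits` penalty qubit(s)."""
    return logical_qubits * (data_copies + penalty_qubits)

for n_logical in (25, 125, 250):
    print(n_logical, "logical qubits ->", physical_qubits(n_logical), "physical qubits")
# 25 -> 100, 125 -> 500, 250 -> 1000: the extra 100 to 1,000 qubits Lidar
# mentions would protect only a modest number of problem qubits.
```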

Here’s another way to look at it, Rose tells Wired. Maybe the real problem with people trying to assess D-Wave is that they’re asking the wrong questions. Maybe his machine needs harder problems.

On its face, this sounds crazy. If plain old Intels are beating the D-Wave, why would the D-Wave win if the problems got tougher? Because the tests Troyer threw at the machine were random. On a tiny subset of those problems, the D-Wave system did better. Rose thinks the key will be zooming in on those success stories and figuring out what sets them apart—what advantage D-Wave had in those cases over the classical machine. In other words, he needs to figure out what sort of problems his machine is uniquely good at. Helmut Katzgraber, a quantum scientist at Texas A&M, cowrote a paper in April bolstering Rose’s point of view. Katzgraber argued that the optimization problems everyone was tossing at the D-Wave were, indeed, too simple. The Intel machines could easily keep pace. If you think of the problem as a rugged surface and the solvers as trying to find the lowest spot, these problems “look like a bumpy golf course. What I’m proposing is something that looks like the Alps,” he says.
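
Katzgraber’s “golf course versus the Alps” framing is about how rugged the energy landscape of the benchmark instances is: the random problems used so far have shallow, easily escaped local minima, so a straightforward classical heuristic such as simulated annealing finds the bottom quickly, leaving no room for any quantum advantage to show up. The toy Python sketch below (illustrative only, not the benchmark code from any of these studies) runs plain simulated annealing on a small random Ising instance of the easy, “golf course” kind, where this sort of classical solver converges almost instantly.

```python
import random, math

random.seed(1)
n = 30
# Random +/-1 couplings on a ring of spins: a toy instance whose landscape is
# closer to Katzgraber's "bumpy golf course" than to "the Alps".
J = {(i, (i + 1) % n): random.choice([-1.0, 1.0]) for i in range(n)}

def energy(s):
    """Ising energy E(s) = sum over couplings of J_ij * s_i * s_j."""
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def simulated_annealing(steps=20000, T_hot=2.0, T_cold=0.01):
    """Plain single-spin-flip simulated annealing with geometric cooling."""
    s = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(s)
    best = e
    for t in range(steps):
        T = T_hot * (T_cold / T_hot) ** (t / steps)  # cooling schedule
        i = random.randrange(n)
        s[i] = -s[i]                                 # propose one spin flip
        e_new = energy(s)
        if e_new <= e or random.random() < math.exp((e - e_new) / T):
            e = e_new                                # accept the move
        else:
            s[i] = -s[i]                             # reject: undo the flip
        best = min(best, e)
    return best

print("best energy found by classical annealing:", simulated_annealing())
```

Making an instance more “Alps-like” means adding frustration and deep, widely separated minima, which is exactly the class of problems Katzgraber proposes as a fairer test of the D-Wave against classical solvers.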

In one sense, this sounds like a classic case of moving the goalposts. D-Wave will just keep on redefining the problem until it wins. But D-Wave’s customers believe this is, in fact, what they need to do. They’re testing and retesting the machine to figure out what it’s good at. At Lockheed Martin, Greg Tallant has found that some problems run faster on the D-Wave and some don’t. At Google, Neven has run over 500,000 problems on his D-Wave and finds the same. He’s used the D-Wave to train image-recognizing algorithms for mobile phones that are more efficient than any before. He produced a car-recognition algorithm better than anything he could do on a regular silicon machine. He’s also working on a way for Google Glass to detect when you’re winking (on purpose) and snap a picture. “When surgeons go into surgery they have many scalpels, a big one, a small one,” he says. “You have to think of quantum optimization as the sharp scalpel—the specific tool.”

It can be fine to sift through problems to find what this quantum computing machine is especially good at, provided those problems are commercially valuable. Solving specific image-recognition or pattern-recognition problems where better solutions are worth billions of dollars would generate the revenue to build better and bigger quantum computers.

SOURCE – Wired
