The original motivation for building superconducting processors had nothing to do with quantum computation. There were two factors.
1) Superconducting device timescales are shorter than those of semiconductor devices. Clocked superconducting digital circuits have been run at around 700 GHz, and it is entirely feasible to build a complex ASIC-type processor clocked at, say, 100 GHz. For certain types of problems, that kind of advantage is significant.
2) You can operate superconducting chips using a tiny fraction of the power that semiconducting chips consume. Studies of reasonable expectations for total all-in energy cost suggest that the same computational load could have its energy needs cut by a factor of about 100, even after accounting for the cryogenics.
The architecture we’ve built has both of these advantages, even if you set aside scaling advantages.
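A back-of-envelope sketch of the two advantages above. The 100 GHz clock and the roughly 100x energy reduction come from the text; the 3 GHz conventional-ASIC baseline and the workload and energy figures are illustrative assumptions, not measured numbers.

```python
# Illustrative comparison only; the CMOS clock rate, workload size,
# and CMOS energy figure are assumptions made for this sketch.

cmos_clock_hz = 3e9    # assumed conventional ASIC clock
sc_clock_hz = 100e9    # feasible superconducting ASIC clock (from the text)
cycles = 1e15          # a fixed serial workload, in clock cycles

cmos_time_s = cycles / cmos_clock_hz
sc_time_s = cycles / sc_clock_hz
speedup = cmos_time_s / sc_time_s  # ~33x from clock rate alone

energy_factor = 100          # all-in energy reduction cited in the text
cmos_energy_j = 5e5          # assumed energy for the CMOS run
sc_energy_j = cmos_energy_j / energy_factor

print(f"serial speedup from clock alone: {speedup:.1f}x")
print(f"all-in energy: {cmos_energy_j:.0f} J -> {sc_energy_j:.0f} J")
```

The point of the arithmetic is that the two factors are independent: the clock-rate gain shortens each run, while the energy factor applies to the whole installation, cryogenics included.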
As to the choice of instance types for benchmarking — this work is underway! I expect that the groups doing this research will publish some results shortly (within a few months).
(a) there is a LOT of room to improve the ‘prefactor’ in this type of design. Because superconducting technology is so immature, it is probably possible to drop this prefactor by another factor of 10 or so each year for the foreseeable future (at least 3-5 years), even as the device count doubles each year.
(b) the scaling being observed in the current generations of processors IS NOT THE INTRINSIC SCALING OF THE UNDERLYING QUANTUM ALGORITHM. It’s the empirically measured scaling, which is different. There is strong evidence that the scaling behavior is currently dominated by what we call Intrinsic Control Errors (ICE), which are mis-specifications of the problem parameters. As ICE is reduced, the observed scaling curves will flatten out; D-Wave saw this on Rainier, where the scaling curves changed with each generation from R4 to R7.
(c) as I discussed above, the problem type studied in the work you’re thinking of is not the best one to use; other instance classes will see different relative performance.
(d) it is straightforward to modify the processors to increase the number of couplers per qubit (the connectivity), reduce the ICE, add new types of device (such as XZ couplers that would make the Hamiltonian universal for quantum computation), etc.
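Points (a) and (b) can be made concrete with a toy scaling model. A minimal sketch, assuming a hypothetical scaling law of the form T(N) = prefactor * exp((alpha + ice) * sqrt(N)); the function, parameter values, and the way ICE is folded into the exponent are all assumptions made for illustration, not a D-Wave model.

```python
import math

def time_to_solution(n_qubits, prefactor, alpha, ice):
    """Hypothetical scaling law for illustration only.

    alpha stands in for the intrinsic scaling of the underlying
    quantum algorithm; ice models the extra apparent scaling caused
    by Intrinsic Control Errors (problem mis-specification).
    """
    return prefactor * math.exp((alpha + ice) * math.sqrt(n_qubits))

# Current generation: large prefactor, significant ICE (made-up values).
t_now = time_to_solution(512, prefactor=1e-3, alpha=0.1, ice=0.2)

# A later generation: prefactor down 10x (point a), ICE halved (point b).
t_next = time_to_solution(512, prefactor=1e-4, alpha=0.1, ice=0.1)

print(f"combined improvement at 512 qubits: {t_now / t_next:.0f}x")
```

The sketch shows why the empirically measured curve is steeper than the intrinsic one: the measured exponent is (alpha + ice), so shrinking ICE flattens the curve toward the intrinsic scaling, while the prefactor improvement shifts the whole curve down independently.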
There is a misconception that there is “A D-Wave Machine”. This isn’t correct — we build D-Wave to be a process, not a specific system. Our fundamental technology strategy is to evolve processors using high throughput experimentation. Every three months or so we release a new design incorporating what we’ve learned.
Improving connectivity is believed to be an area where speedups can be achieved. Connectivity can be changed (it has changed nearly every generation since we started).
Learning that the original problem type D-Wave picked to look at isn’t the best one for showing quantum/classical scaling differences isn’t a retreat. It’s science. D-Wave didn’t know that when they started. They learned something and they evolved. They don’t stop when there is a setback. Not everything is going to work; in fact, they assume that 99 out of 100 things they (and everyone else in the field) try won’t work.
Many people are working on this, trying new ideas to see whether any classes of problems scale better on the current generation of chip.
What to expect
The current papers describe research done on the 128-qubit and 512-qubit processors.
D-Wave now has a 1024-qubit processor in the lab and will release it commercially later this year.
In 2015, D-Wave should have a 2048-qubit processor.
Every three months they adjust the design of their chip, so there will be multiple versions of the 1024-qubit processor this year.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.