Is talk of Aaronson's BosonSampling approach to quantum computers being faster than classical computers comically premature roast-beef-sandwich hype?

Scott Aaronson tossed around plenty of statements insulting D-Wave for having the temerity to talk prematurely about beating classical computers with an unbuilt quantum computer system.

In 2007 Scott said –

D-Wave’s current machine is said to have sixteen qubits. Even assuming it worked perfectly, with no decoherence or error, a sixteen-qubit quantum computer would be about as useful for industrial optimization problems as a roast-beef sandwich.

That statement was fully qualified, limited to industrial optimization problems and the 16-qubit machine. Still, no other quantum computing system has yet reached the 16 qubits that D-Wave had in 2007.

There were questions about whether it would scale to more qubits and whether it would get a lot faster than the lower-qubit versions.

They scaled from 16 qubits to 128 qubits to 512 qubits, and the systems' performance scaled with the qubit count:

512 qubits >> 128 qubits >> 16 qubits

If we are parsing statements, D-Wave hoped to get to 1,024 qubits by 2008 but did not meet that hoped-for objective.

Was D-Wave right in saying that their approach could scale to a lot more qubits? Yes, they were.

Were they right in saying that each increase in qubits would give a big performance jump? Yes, they were.

Are they crushing the performance of classical computers yet? I am not sure, but it is no longer trivial for classical computers to beat them.

If D-Wave had not been trying to line up customers back in 2007 and 2008, the long sales cycle would not have produced the 2010 sale to Lockheed and then the sale to Google, and they would be out of business now. If Aaronson had been the CEO of D-Wave, they would have gone out of business in 2010 or 2011.

The value that Lockheed placed upon it was $10 million.

Aaronson said lining up customers is comically premature.

The “salient point” is that Aaronson is a business idiot. A clown in business. He cannot sell roast beef sandwiches. He is comically idiotic about business.

I am a blogger; I can toss the same language that Aaronson was throwing around right back at him.

If we are grading the sum total of Geordie being correct against the sum total of Scott being correct, then:

Geordie has built a business with over $20 million in sales to major clients. He has raised over $50 million in funding. He has built a company with a 512-qubit system. The largest non-D-Wave system is about 7 to 14 qubits.

On quantum entanglement, there is also the paper in Nature, and there are the USC papers.

The Nature paper found physical evidence consistent with quantum annealing.

We observe a clear signature of quantum annealing, distinguishable from classical thermal annealing through the temperature dependence of the time at which the system dynamics freezes.

There is a lot of difficult work involved in making a commercially successful quantum computer and quantum computer company. There is currently no other remotely commercially viable option.

If it takes 100,000 qubits or millions of qubits to achieve massive superiority over classical systems, that would be useful to know. And if there is a body of algorithmic work that lets classical systems beat 500 qubits or 20,000 qubits or more [as Sanjeeb Dash of IBM Watson claims for an integer version of CPLEX], and those solutions apply to useful optimization problems, then why have these geniuses been holding back on applying their efforts to optimizations worth billions, on which companies and the military are spending billions?

Aaronson's grade: he insulted D-Wave for not immediately producing (in 2007, 2008, 2009) what was difficult to do but has mostly been done over the last few years, and D-Wave did those academic things while building a business. Then he hypocritically speculated that his own approach could beat classical systems if a 100-photon system could be built.

Aaronson is talking about scaling his own BosonSampling approach to quantum computing. Is it comically premature roast-beef-sandwich hype?

Three identical photons have been used to sample from a probability distribution defined in terms of the permanents of 3×3 matrices, demonstrating the Aaronson-Arkhipov BosonSampling protocol. The results were obtained by no fewer than four independent experimental groups, some of whom have now published in Science.

Q: So, these experiments reported in Science this week have done something that no classical computer could feasibly simulate?

A: No, a classical computer can handle the simulation of 3 photons without much—or really, any—difficulty. This is only a first step: before this, the analogous experiment (called the Hong-Ou-Mandel dip) had only ever been performed with 2 photons, for which there’s not even any difference in complexity between the permanent and the determinant (i.e., between bosons and fermions). However, if you could scale this experiment up to about 30 photons, then it’s likely that the experiment would be solving the BosonSampling problem faster than any existing classical computer (though the latter could eventually solve the problem as well). And if you could scale it up to 100 photons, then you might never even know if your experiment was working correctly, because a classical computer would need such an astronomical amount of time to check the results.
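To see why 3 photons are classically trivial but 30 or 100 would not be, here is a minimal Python sketch (my own illustration, not code from any of these experiments) of the permanent-versus-determinant gap Aaronson is invoking: the determinant of an n×n matrix takes polynomial time, while the best known exact algorithms for the permanent, such as Ryser's formula, take time exponential in n.

```python
import itertools
import numpy as np

def permanent(A):
    """Brute-force permanent: sum over all n! column permutations.
    (Ryser's formula improves this to O(2^n * n), but that is still
    exponential; the determinant takes only about O(n^3) time.)"""
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

A = np.random.rand(3, 3)                 # a 3x3 matrix, as in the 3-photon experiments
print("permanent:  ", permanent(A))      # exponential time as n grows
print("determinant:", np.linalg.det(A))  # polynomial time at any n
```

At n = 3 both calls return instantly; the n! (or 2^n) blowup in the permanent is what makes roughly 30 photons strain a classical computer and 100 photons put verification out of reach.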

So Aaronson is promising faster-than-classical with 100 photons with his approach. Where is the proof of scaling? Why isn't this faster now? This 3-photon work is worth less than a roast beef sandwich. Is this somehow unfair to the Aaronson research? Should we give it a few more years before spouting off crap? Or do we just say: PROOF NOW! Roast Beef Sandwich! Comically premature! Faster than classical NOW.

I actually hope that Aaronson and the other researchers succeed and complete the scaling experiments. But by Aaronson's own standard, he should now be performing experiments with the maximum number of photons that can still be simulated on classical computers.

Aaronson justifies his own work and approach:

Yes, the “pessimistic” scenario here is precisely that building a scalable BosonSampling machine would be no easier than building a scalable universal QC.

However, there are three factors that might make BosonSampling easier to implement than full universal QC, and one factor that makes it interesting for complexity theorists even if it turns out not to be any easier to implement.

(1) Non-interacting photons are actually incredibly good in terms of decoherence. (Indeed, the earth is getting bombarded with cosmic microwave photons that have probably maintained their quantum states since 380,000 years after the Big Bang!) The main difficulties you face with photons are synchronization (different photons might arrive at the detectors at slightly different times), photon losses, and single-photon generation (a standard laser sometimes outputs 0 photons, sometimes 1, etc., in a superposition of different photon numbers called a coherent state). However, the optics folks tell me that there are technologies currently under development, especially “optical multiplexing,” that if they worked, would address these problems and allow scaling to at least 10-20 photons without the need for explicit error-correction. (As they say, time will tell.)

(2) With most quantum algorithms like Shor’s, there’s a substantial overhead in mapping the problem you want to solve onto the QC architecture, which means that you need many more than N qubits to do what would classically take 2^N steps. By contrast, because BosonSampling is so close to the “native physics” of linear-optical networks, N identical photons really do just correspond directly to N×N matrix permanents. For doing little proof-of-principle demonstrations, this is a huge advantage.

(3) Most interestingly, the game we’re now playing (just sample some distribution that’s hard to sample with a classical computer) has many, many possible ways to win! Even if photon losses, “dark counts” and various other errors turn out to be unavoidable when scaling up to 20-30 photons, it remains plausible that you’d still be sampling from a classically hard distribution. Of course, that will require a new argument analogous to the one we gave, and is an obvious topic for further research.

OK, now suppose that none of our hopes come to pass, and building an interestingly-large BosonSampling machine turns out to be not easier at all than building a scalable universal QC. Even then, BosonSampling would still be of complexity-theoretic interest, for the simple reason that it’s the only natural way known to make permanents show up as a quantum computation’s amplitudes, and the permanent is an extremely interesting and special function in theoretical computer science. Indeed, it’s because of special properties of the permanent (like its random self-reducibility) that we conjecture that even the approximate version of BosonSampling is classically hard. In other words: even if you had a full universal QC, if you wanted to sample a distribution that couldn’t be even approximately sampled efficiently classically without surprising complexity consequences, I wouldn’t be able to tell you anything to do except to use your universal QC to simulate BosonSampling.

As a matter of fact, this—and not any of the experimental physics considerations—was my and Alex’s motivation for thinking about BosonSampling in the first place. We simply wanted to construct a quantum algorithm that samples from a probability distribution whose amplitudes are permanents, and then use the known properties of the permanent to argue about the classical hardness of doing the same thing. And that desire led inevitably to BosonSampling. The fact that Nature provides a physical system—namely, identical photons—that “implement BosonSampling for free,” really just came as a happy coincidence for us!
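To make "permanents showing up as a quantum computation's amplitudes" concrete, here is a small numerical sketch of the standard BosonSampling distribution (my own illustration of the textbook definition, not any experimental group's code). With n photons entering the first n modes of an m-mode linear-optical network described by an m×m unitary U, the probability of seeing a collision-free output pattern S is |Perm(U_S)|², where U_S is the n×n submatrix of U given by S's rows and the input columns.

```python
import itertools
import numpy as np
from scipy.stats import unitary_group

def permanent(A):
    """Brute-force permanent over all n! permutations."""
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

n, m = 3, 9                # 3 photons in 9 modes, as in the experiments
U = unitary_group.rvs(m)   # Haar-random unitary = a random optical network

# Photons enter modes 0..n-1. Each collision-free output pattern S
# (one photon per listed mode) has probability |Perm(U_S)|^2.
probs = {}
for S in itertools.combinations(range(m), n):
    U_S = U[np.ix_(list(S), list(range(n)))]
    probs[S] = abs(permanent(U_S)) ** 2

# Collision-free probabilities sum to at most 1; outputs where two
# photons land in the same mode carry the remaining probability.
print(sum(probs.values()))
```

The physical device samples this distribution directly because the photons "implement BosonSampling for free," while the classical sketch above must evaluate one exponential-time permanent per output pattern, which is exactly the gap Aaronson hopes to demonstrate at 30 to 100 photons.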
