Scott Aaronson Comments on the Boson Sampling Quantum Supremacy Paper from China

Scott Aaronson (with Alex Arkhipov) originated the proposal of BosonSampling as a problem on which quantum devices should outperform classical computers. He discusses the new claim from China of quantum supremacy, in which a photonic quantum computing system is reported to be 10 trillion times faster than a classical supercomputer on a particular sampling problem.

A group led by Jianwei Pan and Chao-Yang Lu said they achieved BosonSampling with 40-70 detected photons. This is beyond the limit where a classical supercomputer could feasibly verify the results. They achieved a variant called Gaussian BosonSampling, which is a generalization of what Scott called Scattershot BosonSampling in a 2013 post on his blog.

Many colleagues told Scott they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. Scott always agreed that they might be right.

What is BosonSampling? It is a proposal for achieving quantum supremacy by simply passing identical, non-interacting photons through an array of beamsplitters, and then measuring where they end up.
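The sampling task can be sketched classically for tiny instances: the probability of each output configuration is the squared absolute value of a matrix permanent, and computing permanents is what makes the problem classically hard. Below is a minimal Python sketch of this (the function names and the toy 4-mode, 2-photon instance are my own illustration, not from the paper), using Ryser's formula for the permanent and restricting to collision-free outputs.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent of a square complex matrix via Ryser's formula, ~n * 2**n work."""
    n = A.shape[0]
    total = 0.0 + 0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def output_probability(U, in_modes, out_modes):
    # Submatrix of the interferometer unitary U: rows = input modes,
    # columns = output modes (collision-free case, one photon per mode).
    sub = U[np.ix_(in_modes, out_modes)]
    return abs(permanent(sub)) ** 2

# Toy example: 2 photons entering modes 0 and 1 of a random 4-mode interferometer.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)  # QR of a random complex matrix gives a unitary U

probs = {out: output_probability(U, [0, 1], list(out))
         for out in combinations(range(4), 2)}
print(probs)  # collision-free outcomes only, so the values need not sum to 1
```

Because evaluating even one permanent costs exponential time, this kind of brute-force simulation is only feasible for very small photon numbers, which is exactly the point of the experiment.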

Is BosonSampling at least a step toward universal quantum computing?

In 2000, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.

Scott refereed the Science paper. He asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. Directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, but surely it should be possible with existing computers to go up to n=40 or n=50? A couple of weeks later, the authors responded, saying that they had now verified their results up to n=40, but that doing so burned $400,000 worth of supercomputer time, so they decided to stop there. This was by far the most expensive referee report Scott ever wrote!
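The ~2^n scaling can be made concrete with a back-of-envelope calculation (the cost model below is an assumption for illustration, not the USTC team's actual verification pipeline): Ryser's formula evaluates one n-photon amplitude in roughly n * 2^n arithmetic operations, so each extra photon roughly doubles the work.

```python
# Back-of-envelope: operations per amplitude under the n * 2**n cost model.
for n in (30, 40, 50):
    ops = n * 2 ** n
    print(f"n={n}: ~{ops:.1e} operations per permanent")

# Going from n=30 to n=50 multiplies the cost by roughly a million:
print((50 * 2 ** 50) / (30 * 2 ** 30))
```

This is why verification that is routine at n=30 becomes a six-figure supercomputing job by n=40 and is entirely out of reach well before n=70.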

Responses from Quantum Skeptics on the Aaronson Blog

The Google team, along with Gil Kalai, have raised questions about whether the results of the new BosonSampling experiment might be easier to spoof classically than the USTC team thought they were, because of a crucial difference between BosonSampling and qubit-based random circuit sampling. Namely, with random circuit sampling, the marginal distribution over any k output qubits (for small k) is exponentially close to the uniform distribution. With BosonSampling, by contrast, the marginal distribution over k output modes is distinguishable from uniform, as Aaronson and Arkhipov noted in a 2013 followup paper. On the one hand, these easily-detected nonuniformities provide a quick, useful sanity check for whether BosonSampling is being done correctly. On the other hand, they might also give classical spoofing algorithms more of a toehold. The question is whether, by spoofing the k-mode marginals, a classical algorithm could also achieve scores on the relevant “HOG” (Heavy Output Generation) benchmark that are comparable to what the USTC team reported.
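The nonuniform marginals are easy to see even in a toy classical simulation (the 2-photon, 4-mode instance below is my own illustration, not from either paper): the photon-count distribution on a single output mode differs from what a uniform distribution over outcomes would give.

```python
import numpy as np
from itertools import combinations, combinations_with_replacement
from math import factorial

def permanent(A):
    # Ryser's formula for the matrix permanent.
    n = A.shape[0]
    total = 0.0 + 0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Toy instance: 2 photons into modes 0 and 1 of a random 4-mode interferometer.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)

in_modes = [0, 1]
dist = {}
for out in combinations_with_replacement(range(4), 2):  # all 10 outcomes
    sub = U[np.ix_(in_modes, list(out))]  # repeat columns for bunched photons
    mult = 1
    for m in set(out):
        mult *= factorial(out.count(m))   # normalization for collisions
    dist[out] = abs(permanent(sub)) ** 2 / mult

assert abs(sum(dist.values()) - 1) < 1e-9  # the full distribution normalizes

# Marginal photon count in mode 0, vs. the uniform-over-outcomes marginal:
marginal = [sum(p for out, p in dist.items() if out.count(0) == k) for k in range(3)]
uniform = [sum(1 / len(dist) for out in dist if out.count(0) == k) for k in range(3)]
print("BosonSampling marginal:", np.round(marginal, 3))
print("uniform marginal:      ", np.round(uniform, 3))
```

In a real spoofing discussion the interesting question is quantitative, as the paragraph above notes: whether matching these low-order marginals classically is enough to also score well on the HOG benchmark.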

SOURCES: Science; Shtetl-Optimized (Scott Aaronson's blog)
Written By Brian Wang, Nextbigfuture.com