The quantum singularity – doing the first thing quantumly that classical computers cannot

A beam splitter is a device, like the one depicted here, that bifurcates a beam of light. An experiment proposed by MIT researchers, which relies on beam splitters, would exploit the strange behavior of quantum particles to perform calculations that are hopelessly time consuming on conventional computers.
Graphic: Christine Daniloff

A new experiment would use quantum effects to perform otherwise intractable calculations, but conducting it should be easier than building a quantum computer.

Quantum Computing with Noninteracting Particles

Weak model of QC
* Probably not universal
* Restricted kind of entanglement
* Not qubit-based

Why do we care?
* Gains with less quantum
* Easier to build
* Mathematically pretty

The experiment
* Encode values into mirrors
* Generate single photons
* Have photons hit mirrors at same time
* Detect output photons
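The four steps above can be sketched as a brute-force classical simulation. This is an illustrative toy, not the authors' code: a Haar-random unitary stands in for the beam-splitter network, the photon count and mode count are deliberately tiny, and the output distribution is computed from matrix permanents, which is exactly the part that becomes intractable at scale.

```python
import numpy as np
from itertools import combinations_with_replacement, permutations
from math import factorial

rng = np.random.default_rng(0)

def permanent(M):
    """Naive permanent (sum over permutations); fine for tiny matrices."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def haar_unitary(m, rng):
    """A Haar-random m x m unitary: stand-in for the mirror network."""
    z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q @ np.diag(d / np.abs(d))

m, n = 5, 2          # toy sizes: 5 modes, 2 photons
U = haar_unitary(m, rng)

# Photons enter in modes 0..n-1. For each way the n photons can land in
# the m output modes, the probability is |Per(U_sub)|^2 divided by the
# factorials of the output multiplicities.
probs = {}
for pattern in combinations_with_replacement(range(m), n):
    sub = U[np.ix_(range(n), list(pattern))]
    norm = np.prod([factorial(pattern.count(k)) for k in set(pattern)])
    probs[pattern] = abs(permanent(sub)) ** 2 / norm

# "Run the experiment" a few times by sampling detector patterns
patterns = list(probs)
p = np.array([probs[t] for t in patterns])
samples = [patterns[i] for i in rng.choice(len(patterns), size=5, p=p / p.sum())]
print(sum(probs.values()))   # ~1.0: the patterns exhaust all outcomes
```

The simulation has to enumerate every detector pattern and evaluate a permanent for each, which is why this classical approach collapses long before the photon counts the proposal envisions.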

The Computational Complexity of Linear Optics (96 pages)

We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.

Quantum computers are computers that exploit the weird properties of matter at extremely small scales. Many experts believe that a full-blown quantum computer could perform calculations that would be hopelessly time consuming on classical computers, but so far, quantum computers have proven devilishly hard to build. The few simple prototypes developed in the lab perform such rudimentary calculations that it’s sometimes difficult to tell whether they’re really harnessing quantum effects at all.

At the Association for Computing Machinery’s 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can’t. Although building the experimental apparatus would be difficult, it shouldn’t be as difficult as building a fully functional quantum computer.

If the experiment works, “it has the potential to take us past what I would like to call the ‘quantum singularity,’ where we do the first thing quantumly that we can’t do on a classical computer,” says Terry Rudolph, an advanced research fellow with Imperial College London’s Quantum Optics and Laser Science, who was not involved in the research.

Aaronson and Arkhipov’s proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987 that relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles — photons — reach the beam splitter at exactly the same time, they will both go either right or left; they won’t take different paths. It’s another of the weird quantum behaviors of fundamental particles that defy our physical intuitions.
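That two-photon bunching effect can be checked numerically. In the linear-optics formalism, the amplitude for a given output pattern is a matrix permanent; with one common convention for the 50/50 beam-splitter unitary (conventions vary, so treat the specific matrix as an assumption), the coincidence amplitude vanishes exactly:

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Naive permanent, fine for 2x2."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# One common convention for a 50/50 beam splitter
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# One photon in each input port.
# Amplitude for one photon in each output port = Per(U)
p_coincidence = abs(permanent(U)) ** 2
# Amplitude for both photons in output port 0 = Per(column 0 twice) / sqrt(2!)
both_left = U[np.ix_([0, 1], [0, 0])]
p_both_left = abs(permanent(both_left)) ** 2 / 2

print(p_coincidence)  # ~0: the photons never take different paths
print(p_both_left)    # ~0.5: they bunch, exiting left or right with equal odds
```

The two product terms in the coincidence permanent cancel, which is the quantum interference the Rochester experiment observed.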

More light!

The MIT researchers’ experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons — about 36 detectors for six photons, 100 detectors for 10 photons.

For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there’s an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they’ll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.

Calculating that distribution — the likelihood of photons striking a given combination of detectors — is an incredibly hard problem. The researchers’ experiment doesn’t solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov’s paper is that, not only is calculating the distribution an intractably hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.
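Concretely, each of those probabilities is the squared absolute value of a matrix permanent, and computing permanents is the bottleneck. A sketch of the two standard exact approaches (this is a generic illustration, not taken from the paper's text): the defining n!-term sum, and Ryser's inclusion-exclusion formula, which at roughly O(2^n · n^2) is the best known exact method up to polynomial factors. For 100 photons, even Ryser's formula means on the order of 2^100 terms.

```python
import numpy as np
from itertools import permutations

def permanent_naive(M):
    """Definition: sum over all n! permutations -- hopeless past n ~ 15."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def permanent_ryser(M):
    """Ryser's inclusion-exclusion formula, roughly O(2^n * n^2) work."""
    n = M.shape[0]
    total = 0.0
    for s in range(1, 1 << n):                      # nonempty column subsets
        cols = [j for j in range(n) if (s >> j) & 1]
        total += (-1) ** len(cols) * np.prod(M[:, cols].sum(axis=1))
    return (-1) ** n * total

A = np.random.default_rng(1).standard_normal((6, 6))
print(np.isclose(permanent_naive(A), permanent_ryser(A)))  # True
```

Unlike the determinant, which Gaussian elimination tames in polynomial time, the permanent has resisted every such shortcut, and that resistance is what the sampling-hardness argument leans on.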

The biggest problem, Barry Sanders, director of the University of Calgary’s Institute for Quantum Information Science, believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. “People have been working on it for a decade, making great things,” Sanders says. “But getting a train of single photons is still a challenge.” Rudolph agrees. “At the moment, the hard thing is getting enough single photons into the chip,” he says. But, he adds, “my hope is that within a few years, we’ll manage to build the experiment that crosses the boundary of what we can practically do with classical computers.”

Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem for a conventional computer. Although they were unable to prove that it was, Aaronson says that “most of our paper is devoted to giving evidence that the answer to that is yes.” He’s hopeful that a proof is forthcoming, whether from his research group or others’.

Scott Aaronson’s post
