Quantum Boson Sampling 100 Trillion Times Faster Than Regular Supercomputers

Chinese researchers performed Gaussian boson sampling by sending 50 indistinguishable single-mode squeezed states into a 100-mode ultralow-loss interferometer with full connectivity and a random transformation matrix (the whole optical setup is phase-locked), and sampling the output using 100 high-efficiency single-photon detectors. The obtained samples were validated against plausible hypotheses exploiting thermal states, distinguishable photons, and a uniform distribution. The photonic quantum computer generates up to 76 output photon clicks, which yields an output state-space dimension of about 10^30 and a sampling rate that is about 100 trillion times faster than using the state-of-the-art simulation strategy on supercomputers.
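
As a rough check on the quoted state-space figure, here is a minimal Python sketch. It assumes threshold (click/no-click) detectors, so each of the 100 output modes contributes a binary outcome; that assumption, not anything reported beyond the numbers above, is what makes the count come out near 10^30.

```python
# Back-of-the-envelope check of the ~10^30 state-space dimension quoted above.
# Assumption: threshold detectors, so every one of the 100 output modes
# records either a click or no click.
modes = 100
state_space_dimension = 2 ** modes
print(f"{state_space_dimension:.2e}")  # ~1.27e+30, consistent with the ~10^30 figure
```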

In 2019, Google claimed that their 53-qubit processor generated a million noisy (~0.2% fidelity) samples in 200 seconds, while a supercomputer would need 10,000 years. It was soon argued that the classical algorithm could be improved to take only a few days to compute all 2^53 quantum probability amplitudes and generate ideal samples. If the competition were instead to generate a much larger number of samples, for example 10 billion, the quantum advantage would be reversed, provided sufficient storage were available. This sample-size dependence of the comparison, an analogue of loopholes in Bell tests, suggests that quantum advantage will require long-term competition between faster classical simulations and improved quantum devices.
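
To make the sample-size argument concrete, here is some illustrative arithmetic in Python. The 2.5-day figure for the improved classical simulation and the single-precision storage format are assumed round numbers for the sketch, not figures reported by either team.

```python
# Illustrative arithmetic for the sample-size-dependence argument.
# Assumptions (not source figures): the improved classical simulation pays a
# one-off cost of ~2.5 days to compute all 2^53 amplitudes, each amplitude is
# stored as an 8-byte single-precision complex number, and the quantum
# processor keeps producing one million samples per 200 seconds.
amplitudes = 2 ** 53                       # ~9.0e15 probability amplitudes
storage_pb = amplitudes * 8 / 1e15         # ~72 PB to hold them all
classical_fixed_s = 2.5 * 24 * 3600        # assumed one-off simulation time

def quantum_seconds(n_samples: float) -> float:
    """Quantum runtime at 200 s per 10^6 samples."""
    return 200 * n_samples / 1e6

for n in (1e6, 1e9, 1e10):
    print(f"{n:.0e} samples: quantum {quantum_seconds(n):.1e} s, "
          f"classical ~{classical_fixed_s:.1e} s one-off + {storage_pb:.0f} PB storage")
```

Under these assumed numbers, 10 billion samples would take the quantum device roughly 23 days, while the classical one-off cost stays at a few days, which is the reversal described above, if the tens of petabytes of storage are actually available.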

Boson sampling, proposed by Scott Aaronson and Alex Arkhipov, was the first feasible protocol for quantum computational advantage. In boson sampling and its variants, non-classical light is injected into a linear optical network, and at the output a highly random, photon-number- and path-entangled state is measured by single-photon detectors. The dimension of the entangled state grows exponentially with both the number of photons and the number of modes, which quickly makes storing the quantum probability amplitudes impossible.
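
A short sketch of that growth, assuming photon-number-resolving outcomes so that the number of output basis states for n indistinguishable photons in m modes is the standard "stars and bars" count C(n + m - 1, n):

```python
# Growth of the output state space for n indistinguishable photons in m modes.
# Assumption: photon-number-resolving outcomes, so the number of basis states
# is the binomial coefficient C(n + m - 1, n).
from math import comb

for n, m in [(10, 20), (25, 50), (50, 100)]:
    dim = comb(n + m - 1, n)
    print(f"n={n:3d} photons, m={m:3d} modes -> dimension ~ {dim:.2e}")
```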

The researchers estimated the classical computational cost of simulating an ideal GBS (Gaussian boson sampling) device. They benchmarked the GBS on the Sunway TaihuLight supercomputer using a highly optimized algorithm. The time cost of calculating one Torontonian scales exponentially with the number of output photon clicks. Moreover, to obtain one sample, one usually needs to calculate ~100 Torontonians of candidate samples. The GBS simultaneously generates samples of different photon-number coincidences (Fig. 3C), so it can be seen as a high-throughput sampling machine. For each output channel and the registered counts in Fig. 3C, they calculate the corresponding time cost for the supercomputer. Summing over the data points, they estimate that the time required for TaihuLight (Fugaku) to generate the same number of samples that the GBS device produces in 200 seconds would be about eighty quadrillion seconds (8 × 10^16 s, or 2 × 10^16 s for Fugaku), roughly 2.5 billion years. This should inspire new theoretical efforts to quantitatively characterize large-scale GBS, improve classical simulation strategies optimized for the realistic parameters, and challenge the observed quantum computational advantage of about 10^14 (100 trillion).
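
To see where that exponential scaling comes from, here is a minimal brute-force Torontonian sketch in Python. The sign and normalization follow one common convention from the threshold-detector GBS literature and should be treated as an assumption rather than the authors' benchmarked algorithm (which is heavily optimized); the point is the 2^n-term sum over subsets of clicked modes.

```python
# Brute-force Torontonian of a 2n x 2n matrix O: a sum over all 2^n subsets
# of the n clicked modes, which is why the cost explodes with the number of
# output photon clicks. Convention (assumed, one common form in the
# threshold-detector GBS literature):
#   Tor(O) = sum_{S subset of [n]} (-1)^(n - |S|) / sqrt(det(I - O_S)),
# where O_S keeps the rows/columns of the modes in S in both halves of O.
from itertools import combinations
import numpy as np

def torontonian(O: np.ndarray) -> float:
    n = O.shape[0] // 2
    total = 0.0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if k == 0:
                det = 1.0  # empty submatrix: det over zero modes is 1 by convention
            else:
                idx = list(subset) + [i + n for i in subset]  # both halves of each selected mode
                sub = O[np.ix_(idx, idx)]
                det = np.linalg.det(np.eye(2 * k) - sub)
            total += (-1) ** (n - k) / np.sqrt(det)
    return total

# 76 clicks would mean 2^76 ~ 7.6e22 subset determinants with this naive approach,
# and one GBS sample typically needs ~100 such Torontonians.
```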

SOURCES – Science
Written By Brian Wang, Nextbigfuture.com

2 thoughts on “Quantum Boson Sampling 100 Trillion Times Faster Than Regular Supercomputers”

  1. This is certainly a new meaning to me of the word 'Torontonian'. From context it doesn't seem to mean 'person from Toronto'.
