Sixty graphics processing units (GPUs) have been used to solve Google's quantum supremacy sampling problem. Researchers generated one million correlated bitstrings, with some entries fixed, from the Sycamore circuit with 53 qubits and 20 cycles, achieving a linear cross-entropy benchmark (XEB) fidelity of 0.739, which is much higher than that of Google's quantum supremacy experiment.
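For reference, the linear XEB fidelity of a set of samples is F = 2^n · ⟨P(x_i)⟩ − 1, where P(x_i) is the ideal probability of each sampled bitstring and n is the number of qubits. A minimal sketch on a toy 2-qubit distribution (the distribution and sample sizes here are illustrative, not from the experiment):

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark: F = 2^n * mean(P(x_i)) - 1,
    where P(x_i) is the ideal probability of each sampled bitstring."""
    p = np.array([ideal_probs[s] for s in samples])
    return (2 ** n_qubits) * p.mean() - 1.0

# Toy example: 2 qubits with a slightly non-uniform ideal distribution.
ideal = {"00": 0.4, "01": 0.3, "10": 0.2, "11": 0.1}
rng = np.random.default_rng(0)
bits = list(ideal)

# Sampling from the ideal distribution gives F near its maximum
# (here sum of p_i^2 scaled: 4 * 0.3 - 1 = 0.2); uniform samples give F near 0.
good = rng.choice(bits, size=10_000, p=list(ideal.values()))
uniform = rng.choice(bits, size=10_000)
print(linear_xeb(ideal, good, 2), linear_xeb(ideal, uniform, 2))
```

A higher XEB means the samples track the ideal output distribution more closely, which is why the 0.739 figure is the headline number here.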

The Google Sycamore processor sampled the circuit in 200 seconds, and Google estimated the task would take 10,000 years on the Summit supercomputer. In a paper study, IBM determined that a different approach, using both RAM and hard drive space to store and manipulate the state vector, would allow a high-performance classical computer to solve it in 2.5 days. Two scientists from the Chinese Academy of Sciences have now actually performed the calculation in 5 days.
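A quick back-of-the-envelope comparison of these headline runtimes (using only the numbers quoted above, nothing newly measured):

```python
# Convert the reported runtimes to seconds and compare them.
SECONDS_PER_DAY = 86_400
sycamore = 200                                        # Google Sycamore hardware
ibm_estimate = 2.5 * SECONDS_PER_DAY                  # IBM paper study
gpu_run = 5 * SECONDS_PER_DAY                         # 60-GPU classical run
summit_estimate = 10_000 * 365.25 * SECONDS_PER_DAY   # Google's original claim

print(f"GPU run vs Sycamore: {gpu_run / sycamore:.0f}x slower")      # 2160x
print(f"Summit estimate vs GPU run: {summit_estimate / gpu_run:.0f}x")  # 730500x
```

So the classical run is still about three orders of magnitude slower than the quantum hardware, but roughly six orders of magnitude faster than Google's original 10,000-year estimate.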

This algorithm has several advantages over Google’s hardware sampling of the Sycamore circuits.

1. This method can output the exact amplitude and probability of any bitstring, while Google's samples are not verified and their XEB is estimated using extrapolation;

2. This method is less noisy than Google's experiments in obtaining samples from the Sycamore circuits: the XEB of the post-selected one million bitstrings obtained with the method is higher than in Google's experiments;

3. The researchers are able to compute conditional probabilities and sample from the distribution accordingly, which is hard for Google's quantum circuit hardware.

At the same time, Google’s hardware has several advantages over this algorithm.

The most significant one is that Google's hardware is much faster at sampling quantum circuits of sufficient depth, while the classical algorithm has exponential complexity and therefore does not scale in either depth or qubit number.

Although noisy, Google's samples are not correlated with each other, while this work generated correlated samples, which belong to a subspace of the whole bitstring space.

It would be very interesting to combine these classical computations with NISQ quantum computations to solve challenging real-world problems.

The algorithm is designed to be general. The big-head shape and bottleneck structure are general phenomena that exist in many tensor networks, as well as in many real-world networks, and can be detected using clustering algorithms. The proposed algorithm can be used straightforwardly for simulating and verifying existing and near-future NISQ quantum circuits. The code is implemented in Python and PyTorch.
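The simulator treats the circuit as a tensor network and contracts it to get bitstring amplitudes. As a minimal illustration of that idea only (a toy 2-qubit circuit, not the paper's actual code; NumPy's einsum stands in here for the PyTorch contraction):

```python
import numpy as np

# Toy 2-qubit circuit: H on qubit 0, then CNOT(0 -> 1), written as a tensor
# network. The amplitude of every output bitstring is one full contraction.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# CNOT as a rank-4 tensor with indices (out0, out1, in0, in1):
# the output is (in0, in0 XOR in1).
CNOT = np.zeros((2, 2, 2, 2))
for in0 in range(2):
    for in1 in range(2):
        CNOT[in0, in0 ^ in1, in0, in1] = 1.0

zero = np.array([1.0, 0.0])  # each qubit starts in |0>

# amplitude(a, b) = sum over internal indices of CNOT[a,b,c,d] H[c,e] zero[e] zero[d]
amp = np.einsum("abcd,ce,e,d->ab", CNOT, H, zero, zero)
print(amp)  # Bell state: 1/sqrt(2) on |00> and |11>, 0 elsewhere
```

For a 53-qubit, 20-cycle circuit the network is vastly larger, and finding a good contraction order (exploiting the big-head and bottleneck structure mentioned above) is where the real work lies.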

SOURCES: Arxiv

Written by Brian Wang, Nextbigfuture.com

