If you can’t measure it, you can’t improve it. IBM created the Quantum Volume metric to measure the power of quantum computers.
Quantum computers have the potential to be vastly more powerful than classical computers.
IBM created the Quantum Volume metric to integrate all of the factors that affect the processing capability of quantum computers.
IBM recently updated its Quantum Volume metric from an earlier definition.
The single-number quantum volume metric can be measured using a concrete protocol on near-term quantum computers of modest size (fewer than 50 qubits); IBM measured it on several state-of-the-art transmon devices, finding values as high as 8. The quantum volume is linked to system error rates and is empirically reduced by uncontrolled interactions within the system. It quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit-rewriting toolchains are expected to have higher quantum volumes. Quantum volume is a pragmatic way to measure and compare progress toward improved system-wide gate error rates for near-term quantum computation and error-correction experiments.
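The rule behind the metric can be sketched in a few lines of code: the quantum volume is 2^n for the largest width n at which the device successfully runs a random model circuit of equal width and depth (in IBM's protocol, "success" means the heavy-output probability exceeds 2/3). The per-width probabilities below are illustrative placeholders, not measured data.

```python
# Sketch of the quantum-volume rule: QV = 2**n for the largest n
# where an n-qubit, depth-n random model circuit passes the
# heavy-output test (heavy-output probability > 2/3).
# The pass/fail numbers below are hypothetical, not measured data.

HEAVY_OUTPUT_THRESHOLD = 2 / 3

# Hypothetical heavy-output probabilities per circuit width n
# (width == depth for quantum-volume model circuits).
heavy_output_prob = {2: 0.84, 3: 0.74, 4: 0.61, 5: 0.55}

def quantum_volume(results):
    """Return 2**n for the largest width n whose square circuit passes."""
    passing = [n for n, p in results.items() if p > HEAVY_OUTPUT_THRESHOLD]
    return 2 ** max(passing) if passing else 1

print(quantum_volume(heavy_output_prob))  # prints 8 for this example
```

With these illustrative numbers the largest passing width is 3 qubits, giving a quantum volume of 8, the same value IBM reported for its best devices. (The real protocol also requires statistical confidence in the 2/3 threshold, which this sketch omits.)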
Quantum volume is architecture independent and can be applied to any system that is capable of running quantum circuits. IBM implemented the metric on several IBM Q devices and found a quantum volume as high as 8. The researchers conjecture that systems with higher connectivity will have higher quantum volume, given otherwise similar performance parameters.
From numerical simulations for a given connectivity, IBM found that there are two possible paths for increasing the quantum volume. Although all operations must improve, the first path is to prioritize gate fidelity above other operations, such as measurement and initialization. This sets a roadmap for device performance focused on the errors that limit gate performance, such as coherence and calibration errors. The second path stems from the observation that, for these devices and this metric, circuit optimization is becoming important. IBM implemented various circuit-optimization passes (far from optimal) and showed a measurable change in experimental performance. The company also introduced an approximate method for NISQ devices and used it to show experimental improvements.
IBM has determined that its quantum devices are close to being fundamentally limited by coherence times, which for the IBM Q System One average 73 microseconds.
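A back-of-the-envelope calculation shows why coherence time sets a floor on gate error. Assuming a hypothetical two-qubit gate duration of 300 nanoseconds (an illustrative figure, not an IBM specification), a 73 microsecond coherence time implies a decoherence-driven error of roughly t_gate / T per gate:

```python
# Back-of-the-envelope coherence-limited gate error.
# The gate duration is a hypothetical value for illustration;
# the 73 microsecond coherence time is the figure quoted above.

T_COHERENCE = 73e-6   # seconds, average coherence time, IBM Q System One
T_GATE = 300e-9       # seconds, assumed two-qubit gate duration

# Crude error floor: fraction of the coherence time consumed by one gate.
error_floor = T_GATE / T_COHERENCE
print(f"Decoherence-limited error per gate ~ {error_floor:.2%}")
```

Under these assumptions the floor is about 0.4% error per gate, which is why longer coherence times (or faster gates) are needed before larger random circuits, and hence higher quantum volumes, can pass the heavy-output test.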
SOURCES: IBM Research; arXiv, "Validating quantum computers using randomized model circuits"
Written By Brian Wang
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.