The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. Their strategy is to explore near-term applications using systems that are forward compatible with a large-scale universal error-corrected quantum computer. For a quantum processor to run algorithms beyond the reach of classical simulation, a large number of qubits is not enough. Crucially, the processor must also have low error rates on readout and on logical operations such as single- and two-qubit gates.
Today Google presented Bristlecone, their new quantum processor, at the annual American Physical Society meeting in Los Angeles. The purpose of this gate-based superconducting system is to provide a testbed for research into system error rates and scalability of their qubit technology, as well as applications in quantum simulation, optimization, and machine learning.
The guiding design principle for this device is to preserve the underlying physics of their previous 9-qubit linear array technology, which demonstrated low error rates for readout (1%), single-qubit gates (0.1%), and, most importantly, two-qubit gates (0.6%) as their best result. This device uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. They chose a device of this size to be able to demonstrate quantum supremacy in the future, to investigate first- and second-order error correction using the surface code, and to facilitate quantum algorithm development on actual hardware.
2D conceptual chart showing the relationship between error rate and number of qubits. The intended research direction of the Quantum AI Lab is shown in red, where they hope to access near-term applications on the road to building an error-corrected quantum computer.
Before investigating specific applications, it is important to quantify a quantum processor’s capabilities. Google’s theory team has developed a benchmarking tool for exactly this task. They can assign a single system error rate by applying random quantum circuits to the device and checking the sampled output distribution against a classical simulation. If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits and computational length (depth). Although no one has achieved this goal yet, Google calculates that quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error rate below 0.5%. They believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for the field, and it remains one of their key objectives.
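The idea behind this benchmark can be illustrated in miniature. The sketch below is not Google's actual tool; it is a toy 4-qubit statevector simulator that builds one random circuit, computes its ideal output distribution, and estimates a linear cross-entropy fidelity from samples. All names, the circuit structure, and the sample count are illustrative assumptions.

```python
# Toy sketch of cross-entropy benchmarking with random circuits
# (illustrative only -- not Google's benchmarking tool).
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS, DEPTH = 4, 10
DIM = 2 ** N_QUBITS

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a statevector."""
    state = state.reshape([2] * N_QUBITS)
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(gate, state, axes=1)
    state = np.moveaxis(state, 0, q)
    return state.reshape(DIM)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z between qubits q1 and q2."""
    state = state.reshape([2] * N_QUBITS).copy()
    idx = [slice(None)] * N_QUBITS
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(DIM)

def random_unitary():
    """Haar-random 2x2 unitary via QR decomposition."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Ideal output distribution of one random circuit:
# alternating layers of random 1-qubit gates and CZ couplings.
state = np.zeros(DIM, dtype=complex); state[0] = 1.0
for layer in range(DEPTH):
    for q in range(N_QUBITS):
        state = apply_1q(state, random_unitary(), q)
    for q in range(layer % 2, N_QUBITS - 1, 2):
        state = apply_cz(state, q, q + 1)
probs = np.abs(state) ** 2

# "Device" samples -- here drawn from the ideal distribution itself,
# standing in for bitstrings read out of real hardware.
samples = rng.choice(DIM, size=50_000, p=probs)

# Linear cross-entropy fidelity estimate: D * <P(sample)> - 1.
# A noisy device scores lower; an ideal sampler scores near 1.
fidelity = DIM * probs[samples].mean() - 1
print(f"estimated fidelity: {fidelity:.3f}")
```

On real hardware the samples would come from the chip rather than the simulated distribution, and the fidelity drops toward 0 as errors accumulate with depth, which is why both qubit count and depth matter for the supremacy test described above.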
They are looking to achieve similar performance to the best error rates of the 9-qubit device, but now across all 72 qubits of Bristlecone. They believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers. Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself. Getting this right requires careful systems engineering over several iterations.
They are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge! They look forward to sharing the results and allowing collaborators to run experiments in the future.
A Bristlecone chip being installed by Research Scientist Marissa Giustina at the Quantum AI Lab in Santa Barbara
Competing quantum systems
IBM announced a 50-qubit quantum computer in November 2017.
Intel announced a 49-qubit test chip in January.
D-Wave Systems is selling a 2000 qubit adiabatic quantum annealing system and could be testing a 5000 qubit quantum annealing system.
In a September 2017 presentation, IBM indicated that they are doubling qubit counts every 8 months.
If IBM maintains an 8-month qubit-doubling rate, then they will announce:
a 100-qubit quantum computer in June 2018,
a 200-qubit system in February 2019, and
400 qubits in October 2019.
Google is tracking to a similar rate of qubit improvement. IBM, Google, Intel and Rigetti will likely be at about 150-300 qubits by the end of 2018.
Currently IBM, Google, and Rigetti have created or are creating gate systems of about 50-100 qubits with no error correction. These are not universal quantum computing systems but approximate gate-model systems.
An ideal quantum computer would have at least hundreds of millions of qubits and an error rate lower than 0.01%.
With 8-12 month qubit doubling:
800 qubits in mid-2020
1600 qubits in Q1 2021
3200 qubits at the end of 2021
6400 qubits in 2022
20,000 qubits in 2023
40,000 qubits in 2024
80,000 qubits in 2025
160,000 qubits in 2026
320,000 qubits in 2027
600,000 to 1 million qubits in 2028
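The arithmetic behind projections like these is a simple geometric series. The sketch below generates a timeline under a fixed doubling cadence; the start date, 72-qubit baseline, and 8-month cadence are illustrative assumptions drawn from this article, not vendor roadmaps.

```python
# Sketch of qubit-count projection under a fixed doubling cadence.
# Baseline (72 qubits, March 2018) and cadence are assumptions.
from datetime import date, timedelta

def project_qubits(start: date, start_qubits: int,
                   doubling_months: int, years: int):
    """Yield (date, qubit count) pairs under a fixed doubling cadence."""
    qubits, when = start_qubits, start
    end = date(start.year + years, start.month, start.day)
    while when <= end:
        yield when, qubits
        when += timedelta(days=doubling_months * 30)  # ~1 month = 30 days
        qubits *= 2

for when, qubits in project_qubits(date(2018, 3, 1), 72, 8, 10):
    print(f"{when:%Y-%m}: ~{qubits:,} qubits")
```

Ten years at an 8-month cadence is 15 doublings, or a factor of about 32,000, which shows how sensitive these forecasts are to the assumed cadence: stretching the doubling period from 8 to 12 months cuts the 2028 figure by more than an order of magnitude.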
IBM Research introduced the concept of quantum volume. If we want to use quantum computers to solve real problems, the number of qubits is important, but so is the error rate. In practical devices, the effective error rate depends not only on the accuracy of each operation but also on how many operations it takes to solve a particular problem and on how the processor performs those operations.
The quantum volume measures the useful amount of quantum computing done by a device in space and time.
As we build larger quantum computing devices capable of performing more complicated algorithms, it is important to quantify their power. The origin of a quantum computer’s power is already subtle, and a quantum computer’s performance depends on many factors that can make assessing its power challenging. These factors include:
1. The number of physical qubits;
2. The number of gates that can be applied before errors make the device behave essentially classically;
3. The connectivity of the device;
4. The number of operations that can be run in parallel.
IBM proposed the quantum volume to summarize performance against these factors in a single number.
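One early formulation of the metric (from IBM's 2017 quantum volume note) can be sketched as follows: the volume is the largest square circuit (width equal to depth) the device can run before errors dominate. The function and the error-rate values below are illustrative, not measurements of any real device.

```python
# Sketch of an early quantum-volume heuristic:
#   V = max over n <= N of min(n, 1 / (n * eps)) ** 2
# where eps is an effective two-qubit error rate (values illustrative).

def quantum_volume(n_qubits: int, error_rate: float) -> float:
    """Largest square circuit (width = depth) achievable before errors dominate."""
    best = 0.0
    for n in range(1, n_qubits + 1):
        achievable_depth = 1.0 / (n * error_rate)  # ~gates before failure
        best = max(best, min(n, achievable_depth) ** 2)
    return float(best)

# More qubits only help if the error rate keeps the achievable depth
# above the circuit width: compare 72 qubits at two error rates.
print(quantum_volume(72, 0.006))   # at a Bristlecone-like 0.6% two-qubit error
print(quantum_volume(72, 0.0001))  # at the "ideal" 0.01% error target
```

At a 0.6% error rate, only a small fraction of the 72 qubits contribute to the volume before errors swamp the computation; at 0.01%, the full width can be used, which is the point of the metric: qubit count and error rate must improve together.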
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.