IBM's Progress Toward Practical Fault-Tolerant Quantum Computers

Many experts predict that practical fault-tolerant quantum computing (FTQC) will require millions of physical quantum bits (qubits), but in August 2023 IBM scientists published new error correction codes that work with roughly ten times fewer qubits. Practical error correction is far from a solved problem. However, these new codes and other advances across the field are increasing confidence that fault-tolerant quantum computing isn't just possible, but is possible without having to build an unreasonably large quantum computer.

The quantum computing field needs error correction codes that work on relatively noisy qubits. Physical error rates for most types of quantum computers will be difficult to reduce much below ~0.0001, or roughly one error in ten thousand operations. The code must deliver logical qubit error rates low enough to perform complex calculations.

A rough estimate is that the desired logical error rate should be below the inverse of the total number of logical operations (i.e., if an algorithm requires one million logical operations, one should aim for a logical error rate of one in a million or better). Finally, the code must have acceptable overhead: the additional physical qubits increase the cost and complexity of the computer, and if the overhead is too extreme it can make a QEC code impractical.
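As a back-of-the-envelope illustration of this sizing rule, the sketch below estimates the code distance a hypothetical code would need to hit such a target, assuming the commonly used heuristic scaling p_L ≈ A·(p/p_th)^((d+1)/2). The prefactor, threshold, and physical error rate are illustrative assumptions, not IBM's figures.

```python
# Rough sizing sketch (illustrative assumptions, not IBM's numbers):
# find the code distance d at which the heuristic logical error rate
# p_L ~ A * (p / p_th)^((d+1)/2) beats the 1-per-million-ops target.
logical_ops = 1_000_000               # operations the algorithm needs
target_p_logical = 1 / logical_ops    # aim for <= 1 failure per run

p_physical = 1e-3    # assumed physical error rate (0.1%)
p_threshold = 1e-2   # assumed code threshold (~1%)
A = 0.1              # assumed prefactor

d = 3
while A * (p_physical / p_threshold) ** ((d + 1) / 2) > target_p_logical:
    d += 2  # code distances are conventionally odd

print(f"target logical error rate: {target_p_logical:.0e}")
print(f"estimated distance needed: d = {d}")  # d = 9 with these inputs
```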

There are many QEC codes under development, representing different ways to encode quantum information across physical qubits to protect it from errors. Some of the most promising codes today are quantum low-density parity-check (LDPC) codes. These codes satisfy many of the practical constraints above: each qubit is connected to only a few others, and errors can be detected using simple circuits without spreading to too many other qubits.
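To make the "low-density" property concrete, here is a minimal sketch (an illustration, not IBM's code) that inspects the classical [7,4] Hamming parity-check matrix, which the small Steane quantum code reuses for both its X and Z checks: every check touches only four qubits, and every qubit appears in at most three checks.

```python
# Low-density in practice: each parity check involves few qubits and
# each qubit participates in few checks. The [7,4] Hamming matrix
# below doubles as the X- and Z-check matrix of the Steane code.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

print("check weights:", H.sum(axis=1))   # [4 4 4]: 4 qubits per check
print("qubit degrees:", H.sum(axis=0))   # [1 1 2 1 2 2 3]: <= 3 checks
```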

The surface code is an example of an LDPC code that encodes information into a two-dimensional grid of qubits. Surface codes have been thoroughly studied and have been the default approach for many envisioned QEC architectures. Teams of researchers have already demonstrated small examples of surface codes, including related work by IBM.

Surface codes have drawbacks that make them unlikely to be viable for constructing a useful quantum computer: they require too many physical qubits, possibly 20 million qubits for problems of interest. IBM continues to hunt for QEC codes that reduce this overhead, and it has found promising LDPC codes beyond the surface code.
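To see where numbers of this scale come from, here is a hedged sketch using the standard rotated-surface-code count of roughly 2*d^2 - 1 physical qubits per logical qubit; the distance and logical qubit count below are illustrative assumptions, and published factoring estimates add large magic-state distillation factories on top of this baseline.

```python
# Why surface codes get expensive (illustrative numbers, not IBM's):
# a distance-d rotated surface code patch uses about 2*d^2 - 1
# physical qubits (d^2 data + d^2 - 1 ancilla) per logical qubit.
d = 27                        # plausible distance for hard problems
n_logical = 1_000             # illustrative logical qubit count

per_logical = 2 * d**2 - 1    # 1457 physical qubits per logical qubit
total = per_logical * n_logical

print(f"{per_logical} physical qubits per logical qubit")
print(f"{total:,} physical qubits before magic-state distillation")
```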

New codes for efficient error correction
IBM scientists have now discovered LDPC codes of a few hundred physical qubits that offer a more than tenfold reduction in the number of physical qubits relative to the surface code. This is possible because the codes encode more information into the same number of physical qubits, while still showing excellent performance at error rates below 0.001. Furthermore, each qubit is coupled to only six others, a reasonable hardware requirement for realizing the error correction protocol.
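Using the concrete figures from the arXiv abstract quoted below (288 physical qubits for 12 logical qubits, versus more than 4,000 physical qubits for the surface code at the same level of error suppression), a quick check recovers the headline reduction:

```python
# Overhead comparison using the numbers from the arXiv abstract below.
ldpc_physical, surface_physical, logical = 288, 4000, 12

print(ldpc_physical / logical)            # 24 physical per logical
print(surface_physical / logical)         # ~333 physical per logical
print(surface_physical / ldpc_physical)   # ~13.9x: "more than tenfold"
```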

The surface code uses a 2D lattice of qubits, connected like the edges of the squares on a checkerboard. To reach more efficient codes like this one, scientists need to break the plane, adding edges that curve above the checkerboard and connect distant qubits. IBM is developing non-local “c” couplers that can create these extra edges.

These hardware requirements are beyond present-day superconducting quantum processors. The six-way connectivity and the need for long wires are more challenging than anything IBM has built so far. This is the main technological challenge, but not an insurmountable one: the higher-degree connectivity and the non-local “c” couplers under development act like long wires between distant qubits.

Meeting the requirements of six-way connectivity, longer wires, and non-local couplers is far easier than building a 20-million-qubit system.

arXiv – High-threshold and low-overhead fault-tolerant quantum memory

Quantum error correction becomes a practical possibility only if the physical error rate is below a threshold value that depends on a particular quantum code, syndrome measurement circuit, and decoding algorithm. Here we present an end-to-end quantum error correction protocol that implements fault-tolerant memory based on a family of LDPC codes with a high encoding rate that achieves an error threshold of 0.8% for the standard circuit-based noise model. This is on par with the surface code, which has remained an uncontested leader in terms of its high error threshold for nearly 20 years. The full syndrome measurement cycle for a length-n code in our family requires n ancillary qubits and a depth-7 circuit composed of nearest-neighbor CNOT gates. The required qubit connectivity is a degree-6 graph that consists of two edge-disjoint planar subgraphs. As a concrete example, we show that 12 logical qubits can be preserved for ten million syndrome cycles using 288 physical qubits in total, assuming a physical error rate of 0.1%. We argue that achieving the same level of error suppression on 12 logical qubits with the surface code would require more than 4000 physical qubits. Our findings bring demonstrations of a low-overhead fault-tolerant quantum memory within the reach of near-term quantum processors.
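For readers who want to see the structure, below is a sketch of the two-block "bivariate bicycle" construction behind the 288-qubit example: parity checks are built from sums of cyclic shifts x and y on an l x m torus. The specific polynomials (l = 12, m = 6, A = x^3 + y + y^2, B = y^3 + x + x^2) are my reading of the paper's [[144,12,12]] example; treat the snippet as illustrative rather than authoritative.

```python
# Sketch of the two-block (bivariate bicycle) construction: checks are
# sums of cyclic shifts x, y on an l x m torus. These polynomials give
# the paper's [[144,12,12]] example (144 data + 144 ancillas = 288).
import numpy as np

l, m = 12, 6
x = np.kron(np.roll(np.eye(l, dtype=int), 1, axis=1), np.eye(m, dtype=int))
y = np.kron(np.eye(l, dtype=int), np.roll(np.eye(m, dtype=int), 1, axis=1))
p = np.linalg.matrix_power  # shorthand for matrix powers

A = (p(x, 3) + y + p(y, 2)) % 2
B = (p(y, 3) + x + p(x, 2)) % 2
Hx = np.hstack([A, B])      # X-type checks (weight 6 -> degree-6 graph)
Hz = np.hstack([B.T, A.T])  # Z-type checks

# CSS condition: every X check must commute with every Z check.
assert not ((Hx @ Hz.T) % 2).any()

def gf2_rank(M):
    """Rank over GF(2) via Gaussian elimination."""
    M, rank = M.copy() % 2, 0
    for c in range(M.shape[1]):
        piv = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

n = Hx.shape[1]                       # 144 data qubits
k = n - gf2_rank(Hx) - gf2_rank(Hz)   # 12 logical qubits
print(n, k)
```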

IBM is extending the coupling map to seven connections, which will require significant microwave modeling. However, typical transmons have about 60 fF of capacitance, and each gate needs around 5 fF to get the appropriate coupling strength to the buses, so it is fundamentally possible to develop this coupling map without changing the properties of the transmon qubits, which have been shown to have long coherence times and to be stable.
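A quick back-of-the-envelope check of the capacitance budget implied by those figures (taking the 60 fF and 5 fF values quoted above at face value):

```python
# Capacitance budget implied by the figures above: seven coupling
# gates at ~5 fF each, against a ~60 fF transmon shunt capacitance.
n_connections = 7
c_per_gate_fF = 5.0
c_transmon_fF = 60.0

c_coupling = n_connections * c_per_gate_fF          # 35 fF
print(f"{c_coupling} fF total coupling capacitance, "
      f"{c_coupling / c_transmon_fF:.0%} of the transmon's 60 fF")
```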

The final challenge is the most difficult. For buses that are short enough that the fundamental mode can be used, the standard circuit QED model holds. However, to demonstrate the 144-qubit code, some of the buses will be long enough to require frequency engineering. One way to achieve this is with filtering resonators, and a proof-of-principle experiment has been demonstrated.
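To see why bus length forces frequency engineering, here is an illustrative calculation (assumed numbers, not IBM's) using the half-wave resonator relation f = v / (2L): as the bus grows, its fundamental mode descends toward the 4-6 GHz band where transmon qubits typically operate.

```python
# Why long buses need frequency engineering (illustrative assumptions):
# a half-wave bus resonator has fundamental f = v / (2 * L), so longer
# buses push the fundamental down toward typical qubit frequencies.
v = 1.2e8  # assumed on-chip phase velocity (~c/2.5 for CPW on silicon)

for length_mm in (2, 6, 12, 24):
    f_ghz = v / (2 * length_mm * 1e-3) / 1e9
    print(f"{length_mm:>3} mm bus -> fundamental ~ {f_ghz:4.1f} GHz")
```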

IBM's work leaves several open questions concerning quasi-cyclic LDPC codes and their applications.
1. What are the tradeoffs between the code parameters n, k, and d, and can one achieve a constant non-zero encoding rate together with a growing distance?
2. Are there more general LDPC codes compatible with the syndrome measurement circuit? IBM expects that the same circuit applies to any two-block LDPC code based on an Abelian group; however, the circuit analysis breaks down for non-Abelian groups.

3. The IBM work gives a depth-7 syndrome measurement circuit, as measured by the number of CNOT layers. Is it possible to reduce the circuit depth? Numerical experiments performed for the code indicate that it may have no depth-6 syndrome measurement (SM) circuit.

4. They observed that the depth-7 SM circuit is not unique. A natural next step is identifying the SM circuit that works best for a particular code. In addition, it may be possible to improve the circuit-level distance by using different SM circuits in different syndrome cycles: even though some low-weight fault paths are not detectable by any single circuit, they may be detected if two circuits are used in tandem.

5. How much would the error threshold change for noise biased towards measurement errors? Note that measurements are the dominant source of noise for superconducting qubits. Since the quasi-cyclic codes considered have a highly redundant set of check operators, one may expect them to offer extra protection against measurement errors.

6. The general-purpose BP-OSD decoder used in the paper may not be fast enough to perform error correction in real time. Is there a faster decoder that makes use of the special structure of quasi-cyclic codes?

7. How can logical gates be applied? While the IBM work gives a fault-tolerant implementation of certain logical gates, these gates offer very limited computational power and are primarily useful for implementing memory capabilities.

IBM Research has a goal of scaling quantum systems to a size where they’ll be capable of solving the world’s most challenging problems. IBM wants to deploy a quantum-centric supercomputer powered by 100,000 qubits by 2033.

IBM has laid out a roadmap to reach 1,000-qubit processors this year and 4,000 qubits within two years.

IBM's Plan to Scale Quantum Computers
In 2023, IBM is building Condor, a 1,121-qubit single-chip processor that follows the 433-qubit Osprey chip from 2022. IBM is also working with multiple 133-qubit Heron processors connected by a single control system.

In 2024, IBM will debut Crossbill, the first single processor made from multiple chips. In 2024, IBM will also unveil the Flamingo processor, which will incorporate quantum communication links, allowing IBM to demonstrate a quantum system of three Flamingo processors totaling 1,386 qubits.

Then, in 2025, IBM will combine multi-chip processors and quantum communication technologies to create the Kookaburra processor and demonstrate a quantum system of three Kookaburra processors totaling 4,158 qubits. This leap forward will usher in a new era of scaling, providing a clear path to 100,000 qubits and beyond.

The new error correction codes will be layered into these systems in the 2024-2026 timeframe.