Improved quantum error correction could enable universal quantum computing

Quantum Circuits' long-term goal is to develop, manufacture, and sell the first practical and useful quantum computers based on superconducting devices. Along the way to building a quantum computer, QCI will commercialize the components, devices, and software that accelerate basic research and enable the scaling of quantum computing.

QCI is focusing on the quantum circuit model of quantum computation, which enables universal and fault-tolerant operation using error correction. Its technology relies on quantum circuits: electrical devices using superconducting Josephson junctions that can act as solid-state quantum bits, or qubits.

QCI was founded by three scientists from the Department of Applied Physics at Yale University: Michel Devoret, Luigi Frunzio, and Robert Schoelkopf. They are world-leading experts in quantum devices and quantum information processing with solid-state devices, with a decades-long record of innovation in new devices, techniques, and fundamental concepts. Their group has produced many scientific firsts, including the development of a “quantum bus” for entangling qubits with wires, the first implementation of a quantum algorithm with a solid-state device, and the demonstration of solid-state qubits with scalable levels of coherence.

Sequoia has made an investment in QCI. The QCI researchers have extended the lifetime of a quantum bit by a factor of about 20, as described in the Nature paper below.

Nature – Extending the lifetime of a quantum bit with error correction in superconducting circuits (2016)

Abstract
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0〉f and |1〉f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
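For a sense of scale, the ratios quoted in the abstract imply approximate lifetimes for each constituent of the system. The short calculation below only rearranges the published numbers; the derived values are rounded:

```python
# Back-of-the-envelope check of the lifetimes implied by the abstract.
corrected_lifetime_us = 320.0  # corrected logical qubit lifetime (from the paper)

# Ratios quoted in the abstract, relative to the corrected qubit's lifetime.
ratios = {
    "transmon": 20.0,
    "uncorrected logical encoding": 2.2,
    "best physical qubit (Fock |0>, |1>)": 1.1,
}

for part, ratio in ratios.items():
    print(f"{part}: ~{corrected_lifetime_us / ratio:.0f} us")
# -> transmon: ~16 us
# -> uncorrected logical encoding: ~145 us
# -> best physical qubit (Fock |0>, |1>): ~291 us
```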

What is needed according to other researchers (Jay M. Gambetta, Jerry M. Chow & Matthias Steffen)

Other researchers have envisioned the systems view of a quantum information processor. It consists of a physical layer and a logical layer. The physical layer provides the error correction: a physical quantum processor, with both input and output lines, is controlled by a QEC processor. That processor is in turn controlled by the logical layer, where the encoded qubits are defined and the logical operations for the desired quantum algorithm are performed.
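A minimal sketch of that layering in Python may make the control flow concrete. All class and method names here are hypothetical illustrations; the researchers describe the layers conceptually, not as an API:

```python
# Illustrative sketch of the layered architecture described above. All names
# are hypothetical; the researchers define the layers conceptually, not as code.

class PhysicalProcessor:
    """Physical qubits, with input (control) and output (readout) lines."""
    def apply_pulse(self, qubit: int, pulse: str) -> None: ...
    def read_out(self, qubit: int) -> int: ...

class QECProcessor:
    """Physical layer: drives the processor's lines to run error correction."""
    def __init__(self, hardware: PhysicalProcessor) -> None:
        self.hardware = hardware
    def correction_cycle(self) -> None:
        # Measure parity-type syndromes via the output lines, decode them,
        # and apply corrective pulses via the input lines.
        ...

class LogicalLayer:
    """Defines encoded qubits and performs the algorithm's logical operations."""
    def __init__(self, qec: QECProcessor) -> None:
        self.qec = qec
    def logical_gate(self, gate: str, encoded_qubit: int) -> None:
        # Each logical operation is interleaved with correction cycles.
        self.qec.correction_cycle()
```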

The question now is what is necessary for a demonstration of quantum computing on a modest-sized system, and what such a demonstration might look like. With current experiments scaling into double-digit numbers of qubits, a lattice of O(100) physical qubits that can perform QEC experiments is well within reach in the near term. Such a system would serve as an invaluable learning tool, not just for testing the feasibility of QEC but also for gaining insight into how to scale a system to the next level of 10^4–10^8 physical qubits. With such numbers of physical qubits, some of the canonical quantum algorithms could be tested in a universal fault-tolerant system. Reaching this important intermediate stage of O(100) qubits would be a major stepping stone towards bringing the next level of quantum computing to reality.

There are still a number of important technological challenges to address before an O(100)-qubit system can be successfully demonstrated. Aside from advances in coherence times and in optimal control and calibration routines for high-fidelity quantum gates, the following list covers other critical areas for exploration and advancement:

Breaking the plane
Arrangements of multi-qubit devices to date have been limited to a single physical plane. This has serious limitations for systems beyond an n × 2 square lattice, because qubits on the interior of the grid require a path in and out for addressability. There are many options for breaking into this ‘third dimension’, which include standard silicon-based lithographic techniques such as through-silicon vias (TSVs), a multi-layer flip-chip stack, or the use of waveguide package resonance modes. Ultimately, the solution must also be cryogenically compatible and preserve coherence times and gate fidelities, while not introducing any new loss mechanisms.

Substrate modes
The device substrate will need to increase in size to accommodate a larger number of qubits. Due to the boundary conditions of the substrate die, it will host electromagnetic modes that decrease in frequency as the die size increases. These modes can facilitate both cross-talk between pairs of qubits in the plane and a reduction in coherence times. While at the moment this problem can be circumvented by clever design, in the long term it remains an open question. Metallic vias are a potential route, although they must be cryogenically compatible, and the additional fabrication processing and materials must not negatively impact coherence times.
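To see the scale of the problem, the die can be modeled very crudely as a half-wavelength dielectric resonator, whose lowest mode frequency is roughly c / (2 L √εr) for lateral size L. The silicon permittivity and die sizes below are illustrative assumptions, not figures from the researchers:

```python
import math

C = 2.998e8            # speed of light in vacuum, m/s
EPS_R_SILICON = 11.45  # approximate relative permittivity of silicon

def lowest_chip_mode_ghz(size_m: float) -> float:
    """Half-wavelength estimate of a die's lowest electromagnetic mode:
    f ~ c / (2 * L * sqrt(eps_r)). A deliberately crude model."""
    return C / (2 * size_m * math.sqrt(EPS_R_SILICON)) / 1e9

for size_mm in (5, 10, 20, 40):
    f = lowest_chip_mode_ghz(size_mm * 1e-3)
    print(f"{size_mm:2d} mm die: lowest mode ~ {f:.1f} GHz")
# A ~5 mm die sits near ~9 GHz, but a ~10-20 mm die puts modes squarely
# in the 4-8 GHz band where superconducting qubits typically operate.
```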

On-chip microwave integrity
As the complexity of the network grows, it becomes more difficult to ensure that broken ground planes in the designed network are still properly tied together at the relevant microwave frequencies. Improper grounding can result in undesired slot-line modes and other spurious microwave resonances, which again will lead to cross-talk and reduced coherence. Air-bridge cross-overs, vias, and a flip-chip lid are all potential paths towards improved microwave integrity.

Josephson-junction reproducibility and accuracy
In a large network of qubits, variations in the qubit frequency will lead to undesired frequency collisions of the fundamental and higher levels. Such collisions can lead to strong correlated interactions, leakage effects, and addressability errors. Currently, numerical simulations of qubit device designs allow an accurate prediction of the qubit capacitance and the couplings. However, fluctuations in the critical current of the Josephson junction (JJ) are currently on the order of 10%, which results in a variation of roughly 280 MHz in the designed qubit frequencies. This is a substantial spread, which will only be improved through more reliable JJ fabrication. Another aspect is the long-term critical-current variability of fabricated JJs, and investigating what might perturb them across successive experimental cooldowns.
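Those two numbers are consistent with the standard transmon relation f01 ≈ √(8 EJ EC)/h − EC/h: because EJ is proportional to the junction critical current Ic, a fractional spread in Ic maps to roughly half that fractional spread in frequency. The ~5.6 GHz operating point below is an assumed typical value chosen for illustration:

```python
# Transmon frequency scales as f01 ~ sqrt(8 * EJ * EC) / h, and EJ is
# proportional to the junction critical current Ic, so df/f ~ 0.5 * dIc/Ic.
ic_fractional_spread = 0.10  # ~10% critical-current fluctuation (from the text)
qubit_freq_ghz = 5.6         # assumed typical transmon frequency (illustration)

freq_spread_mhz = 0.5 * ic_fractional_spread * qubit_freq_ghz * 1e3
print(f"expected qubit-frequency spread: ~{freq_spread_mhz:.0f} MHz")  # ~280 MHz
```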

Extensible control and readout hardware
With current qubit network devices, the use of commercial off-the-shelf (COTS) equipment for control and readout is not yet cost prohibitive. However, moving towards O(100) networks will require a substantial lowering of the cost per qubit, which can be achieved by shifting from COTS equipment towards customized and targeted electronics. Nonetheless, an important caveat to consider is the overall noise performance of different hardware (e.g. phase and amplitude noise), and ensuring that it does not limit ever-decreasing gate and readout error rates. The extensibility of readout hardware for QEC also hinges upon having low latency and the ability to perform full qubit-state discrimination, as well as meter and qubit reset, at the desired fast measurement rate. This could involve customized designs for fast feedback on FPGAs, which are also amenable to programming new concepts for discrimination.
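As one illustration of the discrimination step, dispersive readout typically yields one point in the IQ plane per shot, and the state is assigned by which calibrated centroid the point falls nearest. The toy version below uses entirely synthetic numbers and sketches only the idea, not any group's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration: mean IQ points for |0> and |1> (arbitrary units).
mean_iq = {0: np.array([1.0, 0.0]), 1: np.array([-0.6, 0.8])}
sigma = 0.4  # per-quadrature readout noise, also synthetic

def measure(state: int) -> np.ndarray:
    """One synthetic readout shot: the state's IQ centroid plus Gaussian noise."""
    return mean_iq[state] + rng.normal(0.0, sigma, size=2)

def discriminate(iq: np.ndarray) -> int:
    """Assign the state whose calibrated centroid is nearest; for equal,
    isotropic noise this matches simple linear discrimination."""
    return min(mean_iq, key=lambda s: np.linalg.norm(iq - mean_iq[s]))

shots = [(s, discriminate(measure(s))) for s in rng.integers(0, 2, 10_000)]
accuracy = sum(true == assigned for true, assigned in shots) / len(shots)
print(f"assignment fidelity on synthetic data: {accuracy:.3f}")
```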

Cryogenic system integrity
Larger devices also mean more signal-carrying wires and more ancillary microwave equipment, all of which sit inside a dilution refrigerator. The cryogenic load will need to be handled with care, especially in the proper apportioning of filtering, attenuation, isolation, and amplification, so as not to degrade coherence times while still allowing fast, high-fidelity operations. The exact engineering of the cryogenic environment (e.g. thermalization, impedance matching, infrared radiation shielding) and paths towards reducing the size and mass of components such as isolators, amplifiers, and circulators are important open topics of study.

System calibration
State-of-the-art high-fidelity gate experiments have already shown that the accuracy of gates can depend crucially on the ability to calibrate all the necessary microwave pulse parameters. How, then, will the complexity of the calibration set grow with system size? With more connected qubits, the possibility of correlated errors increases, and new sequences need to be developed to ensure that these have no significant effect on the performance of the independent controls. Determining how to make system calibrations robust and extensible will be critical for high-performance experiments.
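As a concrete example of a single entry in such a calibration set, a π-pulse amplitude is commonly set by sweeping the drive amplitude and fitting the resulting oscillation; at O(100) qubits there are hundreds of such fits to keep current. The data below are synthetic and the fit model is a generic illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic single-qubit calibration data: excited-state population versus
# drive amplitude, oscillating with an (unknown) pi-pulse amplitude.
amps = np.linspace(0.0, 1.0, 41)
true_pi_amp = 0.62
pops = 0.5 * (1 - np.cos(np.pi * amps / true_pi_amp)) \
       + rng.normal(0.0, 0.02, amps.size)

def rabi_model(a, pi_amp, offset, scale):
    """Generic Rabi-style oscillation used to locate the pi-pulse amplitude."""
    return offset + scale * (1 - np.cos(np.pi * a / pi_amp))

(pi_amp, _, _), _ = curve_fit(rabi_model, amps, pops, p0=[0.5, 0.0, 0.5])
print(f"calibrated pi-pulse amplitude: {pi_amp:.3f}")  # ~0.62
```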

Verification & validation
Tools currently exist to measure the accuracy of one-qubit and two-qubit gates in a relatively straightforward manner. These can in principle be extended to larger systems, but they typically scale exponentially with the number of qubits or give only partial information. Moving forward, tools that determine how accurately quantum gates operate on a subset of a larger fabric of qubits will be relevant for QEC. One current technique, simultaneous benchmarking, is a starting point, but it is not yet clear how sensitive it can be to adverse errors. Overall, verification and validation methods will likely grow from bootstrapping techniques on smaller subsystems, extended upwards towards larger lattices.
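Randomized benchmarking, the family these techniques belong to, extracts an average gate error from an exponential decay of sequence fidelity, F(m) = A·p^m + B, over sequence length m. A minimal fit against synthetic data (the decay constant and noise level below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic randomized-benchmarking data: average sequence fidelity versus
# the number m of random Clifford gates in the sequence.
m = np.arange(1, 201, 10)
true_p = 0.995  # assumed depolarizing parameter per Clifford
fidelity = 0.5 * true_p**m + 0.5 + rng.normal(0.0, 0.005, m.size)

def rb_decay(m, A, p, B):
    return A * p**m + B

(A, p, B), _ = curve_fit(rb_decay, m, fidelity, p0=[0.5, 0.99, 0.5])
# For single-qubit Clifford benchmarking, average error per gate r = (1 - p) / 2.
print(f"p = {p:.4f}, error per Clifford ~ {(1 - p) / 2:.1e}")
```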

New QEC codes
Even though the surface code and its variants are very attractive for guiding current experiments, a significant overhead is still associated with their proper functioning, especially when moving towards logical qubit operations. The community is actively working to reduce this overhead, either by finding new codes with inherently universal transversal operations or by reducing the requirements for magic-state distillation.
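To see why the overhead matters, the back-of-the-envelope figures below use two standard approximations that are not from the researchers quoted here: a distance-d rotated surface code uses about 2d² − 1 physical qubits per logical qubit, and its logical error rate is often estimated as p_logical ≈ 0.1·(p/p_th)^((d+1)/2) for physical error rate p and threshold p_th (both assumed):

```python
# Standard rough approximations for the rotated surface code (not from the
# quoted researchers): ~2*d**2 - 1 physical qubits per logical qubit, and
# p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2).
p, p_th = 1e-3, 1e-2  # assumed physical error rate and threshold

for d in (3, 5, 11, 25):
    physical_qubits = 2 * d * d - 1
    p_logical = 0.1 * (p / p_th) ** ((d + 1) / 2)
    print(f"d = {d:2d}: {physical_qubits:4d} physical qubits, "
          f"p_logical ~ {p_logical:.0e}")
# Useful logical error rates demand hundreds to thousands of physical
# qubits per logical qubit, hence the interest in lower-overhead codes.
```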

In the near future, systems of O(100) qubits are within reach and are already beyond what can be emulated in full generality on a classical computer. This will usher in a new era in quantum information science, with explicit hardware to match broad ideas in theory, and could culminate in the demonstration of a useful logical memory. A very promising route forward is the rotated surface code (RSC), which offers simplicity in the network and can be implemented with superconducting qubits, a platform that has shown tremendous and rapid progress in coherence times, controls, and readout. The challenges outlined above, while difficult, are not insurmountable, and with clever engineering and new insights the researchers believe the road ahead is lit.