For the first time ever, a logical qubit meets all criteria, demonstrating that Quantum Error Correction is effective.
Here is the link to the research paper.
Many challenges remain. Although Google might in principle achieve low logical error rates by scaling up its current processors, it would be resource-intensive in practice. Extrapolating the projections in Fig. 1d of the paper, achieving a one-in-a-million error rate would require a distance-27 logical qubit built from 1,457 physical qubits. Scaling up will also bring additional challenges in real-time decoding, as the number of syndrome measurements per cycle grows quadratically with code distance. The repetition code experiments also identify a noise floor at an error rate of 1 in 10 billion, caused by correlated bursts of errors. Identifying and mitigating this error mechanism will be integral to running larger quantum algorithms.
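The extrapolation above can be sketched in a few lines. This assumes the standard surface-code qubit count n = 2d² − 1 (consistent with the 1,457 figure for distance 27) and the simple suppression model ε(d) = ε₇ / Λ^((d−7)/2) built from the paper's reported d = 7 numbers; it is an illustration, not the paper's exact fit.

```python
# Sketch: extrapolating surface-code requirements, assuming the standard
# qubit count n = 2*d**2 - 1 and the suppression model
# eps(d) = eps_7 / Lambda**((d - 7) / 2) built from the reported figures.
EPS_7 = 0.00143   # 0.143% logical error per cycle at distance 7 (reported)
LAMBDA = 2.14     # suppression factor per distance-2 increase (reported)

def physical_qubits(d):
    """Physical qubits (data + measure) in a distance-d surface code."""
    return 2 * d * d - 1

def logical_error(d):
    """Extrapolated logical error per cycle at odd distance d >= 7."""
    return EPS_7 / LAMBDA ** ((d - 7) / 2)

# Find the smallest odd distance reaching a one-in-a-million error rate.
d = 7
while logical_error(d) > 1e-6:
    d += 2
print(d, physical_qubits(d))  # 27 1457, matching the figures quoted above
```

Distance 25 falls just short (about 1.5 × 10⁻⁶ per cycle under this model), which is why the jump to 27 and 1,457 qubits is needed.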
However, quantum error correction also provides exponential leverage in reducing logical errors through processor improvements. For example, reducing physical error rates by a factor of two would improve the distance-27 logical performance by four orders of magnitude, well into algorithmically relevant error rates. The researchers expect these overheads to shrink with advances in error correction protocols and decoding. The purpose of quantum error correction is to enable large-scale quantum algorithms. While this work focuses on building a robust memory, additional challenges will arise in logical computation. On the classical side, Google must ensure that software elements, including calibration protocols, real-time decoders, and logical compilers, can scale to the sizes and complexities needed to run multi-surface-code operations. With below-threshold surface codes, the team has demonstrated processor performance that can scale in principle, but which it must now scale in practice.
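The four-orders-of-magnitude claim can be sanity-checked with the textbook below-threshold scaling. This is a sketch under the common assumption ε(d) ∝ (p/p_th)^((d+1)/2), with prefactors ignored, not the paper's exact model:

```python
# Sketch: why halving the physical error rate p gives such large gains.
# Assumes the common below-threshold scaling
#   eps(d) ~ (p / p_th) ** ((d + 1) / 2)
# with prefactors ignored; an illustration, not the paper's exact model.
d = 27
improvement = 2 ** ((d + 1) / 2)  # factor by which eps shrinks if p -> p/2
print(improvement)  # 16384.0, i.e. about four orders of magnitude
```

The exponent (d+1)/2 is the number of physical faults needed to cause a logical error, which is why small hardware improvements compound so dramatically at large distances.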



Arxiv – Quantum error correction below the surface code threshold
Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% ± 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit’s lifetime by a factor of 2.4 ± 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 μs at distance-5 up to a million cycles, with a cycle time of 1.1 μs. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 billion cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.
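One way to read the abstract's numbers together: the suppression factor Λ relates the error rates of successive code distances, so the distance-5 error rate is implied even though it is not quoted. A quick sketch (the resulting ~0.31% figure is an inference from Λ and the distance-7 rate, not a number stated in the abstract):

```python
# Sketch: inferring the distance-5 error rate from the abstract's figures,
# using the suppression relation eps_5 = eps_7 * Lambda.
LAMBDA = 2.14            # suppression per distance-2 increase
EPS_7 = 0.00143          # 0.143% error per cycle at distance 7
eps_5 = EPS_7 * LAMBDA   # implied distance-5 error per cycle
print(f"{eps_5:.2%}")    # 0.31%
```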
There are three key standards for a logical qubit to serve as a viable component in a useful quantum computer:
✅ Below Threshold: Integrating more physical qubits within a logical qubit significantly reduces errors. The more, the merrier.
✅ Breakeven: The performance of the logical qubit surpasses that of the best physical qubit involved.
✅ Repeatable and in real time: Error correction cycles are executed repeatedly on the same chip, using the same qubits, while errors are tracked as they occur.
All benchmarks have been impressively achieved, marking a global first.
This advancement highlights the rapid pace of quantum computing; just two years ago, Google’s researchers had a system that was barely at the threshold and had not yet achieved breakeven.
It also underscores the leadership of superconducting circuits in Quantum Error Correction. 🏆
To the folks at Google Quantum AI: Keep up the fantastic work! Truly inspiring for us all! Just don't rest on your laurels; cats can sprint fast.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Anyone who observes biological systems knows that “fault tolerance” is their hallmark. No, it’s not the fastest way to process information; perhaps that’s the point. The “information chain” is maintained despite “errors,” internal and externally imposed. The key to the “information” in a biological system (IMO) is less the “information chain” than its foundational integrity. Perhaps quantum communication could be expedited by looking less at each bit and more at the total content/context.