QuEra Co-Founder Vladan Vuletic Targets Useful Error-Corrected Quantum Computers in 2025-2028

Vladan Vuletic (MIT professor and QuEra co-founder) gave a scientific and technical presentation at the 2023 Q2B conference today. The key part of the talk came on the last slide, where Vladan laid out his projection for how QuEra and its research partners will advance their breakthrough quantum error correction work.

Vladan expects that sometime in 2025, QuEra will be able to scale to 10,000 to 100,000 physical qubits.
From those, QuEra will be able to get 100 error-corrected qubits with error rates between 1 in a million and 1 in 100 million.
This will mean very robust error correction, achieved by devoting more physical qubits to each logical qubit.
If this target is met, QuEra will be delivering rapid progress, and before 2028 it will be delivering commercial quantum error-corrected computer systems performing commercially valuable work.
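
The talk did not spell out the code parameters behind those numbers, but the arithmetic can be sketched with the textbook surface-code scaling law. A minimal Python sketch, assuming a logical error rate of roughly 0.1 * (p/p_th)^((d+1)/2), a ~1% threshold, a 0.1% physical error rate, and roughly 2d^2 physical qubits per logical qubit; these are standard approximations, not figures from the talk:

    # Rough surface-code resource estimate. The scaling law and all constants
    # here are textbook approximations, not numbers from Vuletic's talk.
    def code_distance(p_phys, p_logical, p_threshold=1e-2):
        # Smallest odd distance d with 0.1 * (p_phys/p_threshold)**((d+1)/2) <= p_logical.
        d = 3
        while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_logical:
            d += 2
        return d

    for p_logical in (1e-6, 1e-8):   # the 1-in-a-million and 1-in-100-million targets
        d = code_distance(p_phys=1e-3, p_logical=p_logical)  # assumed 0.1% physical error rate
        physical_per_logical = 2 * d ** 2                    # rough surface-code overhead
        print(f"p_L = {p_logical:.0e}: distance {d}, "
              f"~{physical_per_logical} physical qubits per logical qubit, "
              f"~{100 * physical_per_logical:,} physical qubits for 100 logical qubits")

With those assumptions, 100 logical qubits at the two error-rate targets come out to roughly 16,000 to 34,000 physical qubits, inside the 10,000 to 100,000 range Vladan quoted.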

Prof. Vuletic did not commit to QuEra having a commercial 100-qubit system with error rates below 1 in a million in two years. A computer with such capabilities would be demonstrated in the labs (e.g., Harvard or MIT) in two years, and then perhaps later turned into a commercial product.

In general, the process is roughly the following:

Year “X” / 2025: lab demo of a capability
Year “X+1” / 2026: early-access customers can use this capability through QuEra
Year “X+2” / 2027: this capability becomes available for on-prem systems

Q-CTRL is a quantum computing company focused on error suppression, error mitigation, quantum control, and software. Q-CTRL has identified use cases where having 30-500 quantum error-corrected qubits, with the ability to handle 6,000-100,000 programming steps, will deliver commercially valuable services. The use cases are in supply chain and logistics. Other use cases with commercial value would be in materials science.
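
To see why both the qubit count and the step count matter, treat every logical qubit at every step as an independent chance for a logical fault; the circuit succeeds only if no fault occurs anywhere. A back-of-envelope sketch checking Q-CTRL's quoted ranges against Vladan's projected error rates (the independence assumption is a crude simplification of mine, not Q-CTRL's model):

    # Crude check: does a given logical error rate support these circuit sizes?
    # Every qubit at every step is treated as an independent fault location.
    for p_logical in (1e-6, 1e-8):                           # Vladan's projected logical error rates
        for qubits, steps in ((30, 6_000), (500, 100_000)):  # Q-CTRL's quoted extremes
            fault_locations = qubits * steps
            p_success = (1 - p_logical) ** fault_locations
            print(f"p_L = {p_logical:.0e}, {qubits} qubits x {steps:,} steps: "
                  f"P(no logical fault) ~ {p_success:.2f}")

Under this rough model, the small end of the range (30 qubits, 6,000 steps) already works at a 1-in-a-million logical error rate (~84% success), while the large end (500 qubits, 100,000 steps) effectively requires the 1-in-100-million rate.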

Scott Aaronson’s Summary of the Harvard-Led Work on QuEra’s Quantum Computer System

Quantum computing professor Scott Aaronson gave this summary of the Harvard-led work with QuEra’s neutral-atom system. There are flaws and further work to be done, but it is a major advance.

They ran experiments with up to 280 physical qubits, which simulated up to 48 logical qubits.
They demonstrated surface codes of varying sizes as well as color codes.
They performed over 200 two-qubit transversal gates on their encoded logical qubits.
They did a couple demonstrations, including the creation and verification of an encoded GHZ state and (more impressively) an encoded IQP circuit, whose outputs were validated using the Linear Cross-Entropy Benchmark (LXEB).
Crucially, they showed that in their system, the use of logically encoded qubits produced a modest “net gain” in success probability compared to not using encoding, consistent with theoretical expectations (though see below for the caveats). With a 48-qubit encoded IQP circuit with a few hundred gates, for example, they achieved an LXEB score of 1.1, compared to a record of ~1.01 for unencoded physical qubits. [A sketch of how the LXEB score is computed follows this list.]
At least with their GHZ demonstration and with a particular decoding strategy (about which more later), they showed that their success probability improves with increasing code size.
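
For readers unfamiliar with the benchmark: the LXEB score is 2^n times the average ideal probability of the bitstrings the device actually outputs, so a device emitting uniform noise scores 1.0 and an ideal device sampling a random (Porter-Thomas-distributed) circuit scores about 2.0. A minimal sketch, with a made-up toy distribution standing in for the real classical simulation of the circuit:

    import numpy as np

    def lxeb_score(samples, ideal_probs, n_qubits):
        # 2^n times the mean ideal probability of the sampled bitstrings:
        # ~1.0 for uniform noise, ~2.0 for a perfect sampler on a random circuit.
        return 2 ** n_qubits * np.mean([ideal_probs[s] for s in samples])

    rng = np.random.default_rng(0)
    n = 10
    ideal_probs = rng.exponential(size=2 ** n)  # toy Porter-Thomas-like distribution
    ideal_probs /= ideal_probs.sum()

    perfect = rng.choice(2 ** n, size=5_000, p=ideal_probs)  # noiseless "device"
    noise = rng.integers(0, 2 ** n, size=5_000)              # device outputting pure noise
    print(lxeb_score(perfect, ideal_probs, n))  # ~2.0
    print(lxeb_score(noise, ideal_probs, n))    # ~1.0

On this scale, the 1.1 versus ~1.01 comparison above means the encoded circuit’s outputs were measurably, though far from perfectly, correlated with the ideal distribution.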

Here are what I [Scott Aaronson] currently understand to be the limitations of the work:

They didn’t directly demonstrate applying a universal set of 2- or 3-qubit gates to their logical qubits. This is because they were limited to transversal gates, and the Eastin-Knill Theorem shows that transversal gates can’t be universal. On the other hand, they were able to simulate up to 48 CCZ gates, which do yield universality, by using magic initial states. [A toy sketch of a transversal logical gate follows this list.]
They didn’t demonstrate the “full error-correction cycle” on encoded qubits, where you’d first correct errors and then proceed to apply more logical gates to the corrected qubits. For now it’s basically just: prepare encoded qubits, then apply transversal gates, then measure, and use the encoding to deal with any errors.
With their GHZ demonstration, they needed to use what they call “correlated decoding,” where the code blocks are decoded in conjunction with each other rather than separately, in order to get good results.
With their IQP demonstration, they needed to postselect on the event that no errors occurred (!!), which happened about 0.1% of the time with their largest circuits. This just further underscores that they haven’t yet demonstrated a full error-correction cycle.
They don’t claim to have demonstrated quantum supremacy with their logical qubits—i.e., nothing that’s too hard to simulate using a classical computer. (On the other hand, if they can really do 48-qubit encoded IQP circuits with hundreds of gates, then a convincing demonstration of encoded quantum supremacy seems like it should follow in short order.)
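
To make the “transversal” restriction in the first limitation concrete: a transversal logical gate touches each physical qubit in a code block only once, pairing qubit i of one block with qubit i of another, so a single physical fault cannot spread within a block. A toy illustration of my own (using the 3-qubit bit-flip repetition code, with |0_L> = |000> and |1_L> = |111>, not QuEra’s actual surface or color codes), showing that pairwise physical CNOTs implement a logical CNOT:

    import numpy as np

    def cnot(state, control, target, n):
        # Apply CNOT(control, target) to an n-qubit statevector (qubit 0 = most significant bit).
        out = np.zeros_like(state)
        for basis in range(2 ** n):
            control_bit = (basis >> (n - 1 - control)) & 1
            flipped = basis ^ (control_bit << (n - 1 - target))
            out[flipped] += state[basis]
        return out

    n = 6                               # two 3-qubit code blocks
    state = np.zeros(2 ** n)
    state[int("111" + "000", 2)] = 1.0  # block A = |1_L>, block B = |0_L>

    for i in range(3):                  # transversal CNOT: qubit i of A controls qubit i of B
        state = cnot(state, control=i, target=i + 3, n=n)

    print(format(int(np.argmax(state)), "06b"))  # prints 111111: block B flipped to |1_L>

This qubit-by-qubit structure is exactly why transversal gates are attractive for fault tolerance, and the Eastin-Knill theorem says the price is that no code’s transversal gate set can be universal, hence the magic-state workaround mentioned above.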

Slides from Vladan Vuletic (MIT professor and QuEra Co-founder)