IBM has a 433-qubit quantum processor called Osprey and a roadmap targeting 4,000 qubits in 2025 and 100,000 qubits by 2030.

These gate-model qubits are more capable than the 5,000+ annealing (adiabatic) qubits from D-Wave Systems. Most other quantum computing companies are at roughly 50-100 qubits.

The number of usable, error-mitigated qubits is steadily improving. Error-mitigated qubits with over 99.99% reliability could enable quantum computers that outperform classical exaflop supercomputers for some applications.

Startup Xanadu has the Borealis chip, which was tested with 216 squeezed-light modes. Xanadu claims to have achieved quantum computational advantage, running one algorithm faster than classical supercomputers.

The IBM Quantum System Two will be released by the end of 2023. This modular system will form the framework of the company’s quantum supercomputers. It will have multiple processors with communication links between them. These are all stepping stones on the path towards IBM’s plans of building a quantum system with over 4,000 qubits by 2025.

Error correction has been improved and users can now more easily choose between increased speed or precision.

IBM's quantum computing speed metric, circuit layer operations per second (CLOPS), has gone from 1,400 on Eagle to 15,000 on Osprey.
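The CLOPS figures above imply roughly an order-of-magnitude speedup between the two generations; a quick sketch of that arithmetic (using only the numbers quoted in the article):

```python
# Rough speedup implied by IBM's published CLOPS figures
# (1,400 for Eagle vs 15,000 for Osprey), per the article.
eagle_clops = 1_400
osprey_clops = 15_000

speedup = osprey_clops / eagle_clops
print(f"Osprey executes roughly {speedup:.1f}x more circuit layers per second")
# roughly 10.7x
```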

IBM's next quantum processor will be the 1,121-qubit Condor. Later in 2023, a modular processor called Heron will link multiple 133-qubit units together to make more powerful quantum processors.

IBM is also preparing to include optional error mitigation techniques within the cloud software for its quantum computers.

By the end of 2024, IBM expects that error mitigation with multiple Heron chips running in parallel in its '100 by 100 initiative' can yield systems 100 qubits wide by 100 gates deep, enabling capabilities well beyond those of classical computers.
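A back-of-the-envelope calculation shows why the 99.99% reliability figure mentioned earlier matters for a circuit 100 qubits wide and 100 gates deep. Assuming (as a simplification) that each of the 100 × 100 = 10,000 gate slots fails independently, the whole-circuit success probability is the per-gate fidelity raised to the 10,000th power:

```python
# Hypothetical independence model: circuit fidelity = per-gate fidelity
# raised to the number of gate slots (100 qubits x 100 layers = 10,000).
width, depth = 100, 100
gate_slots = width * depth

for gate_fidelity in (0.999, 0.9999):
    circuit_fidelity = gate_fidelity ** gate_slots
    print(f"per-gate fidelity {gate_fidelity}: "
          f"circuit success probability ~ {circuit_fidelity:.4f}")
```

At 99.9% per-gate fidelity the circuit almost never succeeds (probability below 0.0001), while at 99.99% it succeeds roughly a third of the time, which is enough for error mitigation to recover a signal.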

In 2021, IBM introduced the 127-qubit quantum processor called Eagle.

**Xanadu Borealis**

Quantum computational advantage was reported using Borealis, a photonic processor offering dynamic programmability on all implemented gates. The system carries out Gaussian boson sampling (GBS) on 216 squeezed modes entangled with three-dimensional connectivity, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines.
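The scale of that reported advantage is easy to verify from the two quoted runtimes (9,000 years classically versus 36 μs on Borealis):

```python
# Rough ratio of the reported runtimes: ~9,000 years classically
# vs 36 microseconds on the Borealis photonic chip.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
classical_seconds = 9_000 * SECONDS_PER_YEAR
borealis_seconds = 36e-6

ratio = classical_seconds / borealis_seconds
print(f"runtime advantage factor ~ {ratio:.1e}")
# on the order of 10^15 (a few quadrillion times faster)
```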

Gaussian boson sampling itself is not a useful application; it is a benchmark task used to demonstrate computational advantage.

Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.

Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.

A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
