Craig Gidney at Google in Santa Barbara and Martin Ekerå at the KTH Royal Institute of Technology in Stockholm, Sweden, have found orders-of-magnitude gains in quantum computers' code-breaking performance. They are also working on further optimizations, including distributing the decryption problem across a network of smaller quantum computers.
In 2015, researchers estimated that a quantum computer would need a billion qubits to break 2048-bit RSA encryption. Current noisy superconducting quantum computers have about 70-100 qubits, and D-Wave's quantum annealing systems will soon have 5,600.
A quantum computer with 20 million qubits would be able to break the RSA encryption used in regular commercial and financial transactions in just eight hours.
They found a more efficient way to perform a mathematical process called modular exponentiation: finding the remainder when a number is raised to a certain power and then divided by another number. This is the most computationally expensive operation in Shor's algorithm.
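Classically, modular exponentiation is done with the square-and-multiply method. A minimal Python sketch of the operation (the quantum circuit must perform the same arithmetic reversibly, over superpositions of exponents, which is what makes it so expensive):

```python
# Classical modular exponentiation via square-and-multiply:
# compute (base ** exponent) mod modulus one exponent bit at a time.
def mod_exp(base: int, exponent: int, modulus: int) -> int:
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # this bit contributes a multiply
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next bit
        exponent >>= 1
    return result

# Matches Python's built-in three-argument pow:
assert mod_exp(7, 256, 2048) == pow(7, 256, 2048)
```

Each squaring and multiply is itself a big-integer multiplication, which is why the cost of the multiplication circuit dominates the whole algorithm.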
The multiplication circuits that they are using have a Toffoli count that scales quadratically (up to polylog factors). There are multiplication circuits with asymptotically better Toffoli counts. For example, the Karatsuba algorithm has a Toffoli count of O(n^(lg 3)) ≈ O(n^1.585) and the Schönhage–Strassen algorithm has a Toffoli count of O(n lg n lg lg n). However, there are difficulties when attempting to use these asymptotically efficient algorithms in the context of Shor's algorithm.
Future work is needed to determine whether multipliers with asymptotically lower Toffoli counts help at practical sizes, such as n = 2048. They believe that they do not, but only a careful investigation can properly answer this question.
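For intuition, here is a minimal classical Karatsuba sketch: it replaces four half-size multiplies with three recursive ones, which is where the O(n^(lg 3)) scaling comes from. The quantum-circuit versions pay additional reversibility and ancilla costs, part of the difficulty noted above.

```python
# Classical Karatsuba multiplication: three recursive half-size
# multiplies instead of four, giving O(n^(lg 3)) ~ O(n^1.585) scaling.
def karatsuba(x: int, y: int) -> int:
    if x < 16 or y < 16:                  # small base case: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask      # split each operand in two
    y_hi, y_lo = y >> half, y & mask
    a = karatsuba(x_hi, y_hi)             # high * high
    b = karatsuba(x_lo, y_lo)             # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(12345678901234567890, 98765432109876543210) == \
    12345678901234567890 * 98765432109876543210
```

The asymptotic win only pays off beyond some crossover size, which is exactly why the authors doubt it helps at n = 2048.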
Optimizing Distillation is Ongoing Work
To produce their magic states, they use slightly modified copies of the CCZ factory. They have many ideas for improving on this approach.
1. The factory they are using is optimized for working in isolation. Error-correcting codes perform better when they operate over many objects instead of one. A factory that uses a block code to produce multiple CCZ states at once would likely outperform the factory they used; a distillation protocol could produce good CCZ states ten at a time.
They have realized there are two obvious-in-hindsight techniques that could reduce the volume of the factories in that paper. They had assumed that topological errors occurring within the factories went undetected, but in fact many of them are heralded as distillation failures. By taking advantage of this heralding, it should be possible to reduce the code distance used in many parts of the factories.
2. When considering the set of S gates applied to correct the T gate teleportations performed by the factories, there are redundancies that they were not previously aware of.
3. Detection events produced by the surface code can be used to estimate how likely it is that an error occurred, and this information can be used to discard "risky factory runs" in a way that increases reliability. The increased reliability could then be traded for a decreased code distance.
Distributed Quantum Computing
They could use 8 machines, each with perhaps 4 million qubits, as long as the machines were connected by quantum channels with bandwidths of 150 qubits per second. However, they have not carefully explored this question, and there may be even better ways to distribute the computation with even lower overheads. Only future work can tell.
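A quick back-of-the-envelope check of those figures (the numbers are simply the ones quoted above, not the result of a detailed layout):

```python
# Rough overhead of distributing the computation across 8 machines,
# using the figures quoted above.
machines = 8
qubits_per_machine = 4_000_000
monolithic_estimate = 20_000_000   # single-machine qubit count

total = machines * qubits_per_machine
overhead = total / monolithic_estimate
print(f"{total:,} qubits total, {overhead:.1f}x the monolithic estimate")
# → 32,000,000 qubits total, 1.6x the monolithic estimate
```

So splitting the machine into eight networked pieces costs roughly 60% more physical qubits under these assumptions, which is why lower-overhead distribution schemes remain worth exploring.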
The 20 million qubit estimate was the worst-case situation. The low end of the estimate should also drop; however, it is highly sensitive to advances in the design of quantum error-correcting codes, the engineering of physical qubits, and the construction of quantum circuits.
They estimate the approximate cost of their construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of 1 in 1,000, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. They account for factors that are normally ignored, such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048-bit RSA integers, their construction's spacetime volume is a hundredfold less than comparable estimates from earlier works.
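As a rough illustration of what "spacetime volume" means here, one can multiply the headline figures of 20 million qubits and eight hours (a simplification; the actual accounting in the paper covers layout and repeated attempts in much more detail):

```python
# Crude spacetime volume: physical qubits held for the run's duration,
# using the headline figures of 20 million qubits and 8 hours.
qubits = 20_000_000
hours = 8
million_qubit_days = qubits * (hours / 24) / 1_000_000
print(f"{million_qubit_days:.2f} million qubit-days")
# → 6.67 million qubit-days
```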
SOURCES: Arxiv, Technology Review
Written By Brian Wang
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.