The findings—described as a “thought experiment” by NIST’s Stephen Jordan—are about a different aspect of quantum computing speed than another group of NIST researchers explored about two years ago. While the previous findings were concerned with how fast information can travel between two switches in a computer’s processor, Jordan’s new paper deals with how quickly those switches can flip from one state to another.
The rate of flipping is equivalent to the “clock speed” of conventional processors. To make computations, the processor sends out mathematical instructions known as logic operations that change the configurations of the switches. Present-day CPUs have clock speeds measured in gigahertz, which means they are capable of performing a few billion elementary logic operations per second.
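The relationship between clock speed and operation count is simple arithmetic. A minimal sketch (the 3.2 GHz figure is an illustrative choice, not from the article):

```python
# A processor's clock speed sets roughly how many elementary
# logic operations it can perform per second.
clock_speed_hz = 3.2e9  # hypothetical example: a 3.2 GHz CPU

# At most one elementary logic operation per clock cycle.
ops_per_second = clock_speed_hz
print(f"{ops_per_second:.1e} elementary operations per second")  # ~3.2 billion
```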
Because they harness the power of quantum mechanics to make their calculations, quantum computers will necessarily have vastly different architectures than today’s machines. Their switches, called quantum bits or “qubits,” will be able to represent more than just a 1 or 0, as conventional processors do; they will be able to represent multiple values simultaneously, giving them powers conventional computers do not possess.
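The contrast between bits and qubits can be made concrete. A minimal sketch in plain Python (not any particular quantum library): a classical bit holds one of two values, while a qubit's state is a pair of complex amplitudes, and describing n qubits takes 2^n amplitudes.

```python
import math

# A classical bit is just 0 or 1.
classical_bit = 1

# A qubit state is a unit vector of two complex amplitudes:
# |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
qubit = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal superposition of 0 and 1
assert abs(sum(abs(a) ** 2 for a in qubit) - 1.0) < 1e-12

# Describing n qubits takes 2**n amplitudes -- the state space grows
# exponentially, one source of quantum computers' distinctive power.
n = 10
amplitudes_needed = 2 ** n
print(amplitudes_needed)  # 1024
```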
Jordan’s paper disputes a longstanding conclusion about what quantum mechanics implies for clock speed. According to quantum mechanics, the rate at which a quantum state can change, and therefore the rate at which a qubit can flip, is limited by how much energy the system has. While Jordan accepts this energy limit as valid, he disputes a further claim made in several subsequent papers over the years: that it also implies a limit on how fast a quantum computer can calculate in general.
“At first glance this seems quite plausible,” Jordan said. “If you’re performing more logic operations, it makes sense that your switches would need to go through more changes. In both conventional and quantum computing designs, each time a logic operation occurs”—making its switches flip—“the computer hops to a new state.”
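The energy limit in question can be illustrated numerically. In its Margolus-Levitin form, the minimum time for a quantum state to evolve to an orthogonal one is h/(4E), so the maximum “flip rate” of a single switch is 4E/h. A sketch (the 1 eV energy is an arbitrary illustrative choice):

```python
PLANCK_H = 6.62607015e-34   # Planck constant, in joule-seconds
EV_TO_JOULES = 1.602176634e-19  # one electron volt, in joules

def max_flip_rate(energy_joules):
    """Margolus-Levitin-type bound: a state needs at least h/(4E) seconds
    to reach an orthogonal state, so it can flip at most 4E/h times per second.
    E is the energy above the ground state."""
    return 4 * energy_joules / PLANCK_H

# Illustrative (arbitrary) choice: a system with 1 eV of energy.
rate = max_flip_rate(1.0 * EV_TO_JOULES)
print(f"{rate:.2e} flips per second at most")  # on the order of 1e15
```

The bound grows with energy, which is why the papers Jordan disputes tied computational speed to how much energy a computer has available.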
Using the mathematics of quantum systems, Jordan shows that it is possible to engineer a quantum computer that does not have this limitation. In fact, with the right design, he said, the computer “could perform an arbitrarily large number of logic operations while only hopping through a constant number of distinct states.”
Counterintuitively, in such a quantum computer, the number of logic operations carried out per second could be vastly larger than the rate at which any qubit can be flipped. This would allow quantum computers that embrace this design to break previously suggested speed limits.
What advantages might this faster clock speed grant? One of the primary applications envisioned for quantum computers is the simulation of other physical systems. The theoretical speed limit on clock speed was thought to place an upper bound on the difficulty of this task. Any physical system, the argument went, could be thought of as a sort of computer—one with a clock speed limited by the system’s energy. The number of clock cycles needed to simulate the system on a quantum computer should be comparable to the number of clock cycles the original system carried out.
However, these newly discovered loopholes in the computational speed limit are a “double-edged sword.” If energy does not limit the speed of a quantum computer, then quantum computers could simulate physical systems of greater complexity than previously thought. But energy doesn’t limit the computational complexity of naturally occurring systems either, and this could make them harder to simulate on quantum computers.
Jordan said his findings do not imply that there are no limits to how fast a quantum computer could conceivably calculate, but rather that these limits derive from aspects of physics other than the mere availability of energy.
“For example, if you take into account geometrical constraints, like how densely you can pack information, and a limit to how fast you can transmit information (namely, the speed of light), then I think you can make more solid arguments,” he said. “That will tell you where the real limits to computational speed lie.”
One version of the energy-time uncertainty principle states that the minimum time T⊥ for a quantum system to evolve from a given state to any orthogonal state is h/(4ΔE), where ΔE is the energy uncertainty. A related bound called the Margolus-Levitin theorem states that T⊥ ≥ h/(4E), where E is the expectation value of energy and the ground energy is taken to be zero. Many subsequent works have interpreted T⊥ as a minimal time for an elementary computational operation and correspondingly a fundamental limit on clock speed determined by a system’s energy. Here we present local time-independent Hamiltonians in which computational clock speed becomes arbitrarily large relative to E and ΔE as the number of computational steps goes to infinity. We argue that energy considerations alone are not sufficient to obtain an upper bound on computational speed, and that additional physical assumptions, such as limits to information density and information transmission speed, are necessary to obtain such a bound.