D-Wave announced the general availability of the latest generation of its quantum computers, the D-Wave 2X.
In addition to scaling beyond 1,000 qubits, the new system incorporates other major technological and scientific advances. These include an operating temperature below 15 millikelvin: near absolute zero, and roughly 180 times colder than interstellar space. With over 128,000 Josephson tunnel junctions, the new processors are believed to be the most complex superconducting integrated circuits ever successfully used in production systems. Increased control-circuitry precision and a 50% reduction in noise also contribute to faster performance and enhanced reliability.
The D-Wave 2X demonstrates gains of up to 15x over highly specialized classical solvers in nearly all classes of problems examined. Measuring only the native computation time of the D-Wave 2X quantum processor shows performance advantages of up to 600x over the same solvers.
A summary of the Time to Target (TTT) benchmark is as follows:
* The D-Wave 2X finds near-optimal solutions up to 600x faster (depending on the input) than the best known, highly tuned classical solvers. This comparison uses the quantum anneal time of the D-Wave processor.
* The D-Wave 2X finds near-optimal solutions up to 15x faster than the solvers using total time measurements.
* The greatest performance advantage for the D-Wave 2X over the software solvers appears on inputs with more challenging structure than the simple random cases that have been the predominant focus of previous benchmarks. In other words, the hardware performs best relative to software on hard problem instances.
* In cases where it could be calculated, the difference between “near-optimal” and “optimal” is quite small, less than one percent of the latter. The D-Wave 2X is up to 100x faster at finding good near-optimal solutions than optimal solutions.
To characterize performance of the new system we performed a set of benchmark tests against the best known, highly tuned software solvers running on classical systems. The benchmark includes a set of synthetic discrete combinatorial optimization problems intended to be representative of real world challenges.
One of the challenges in benchmarking a quantum processor at the 1000+ qubit scale is that computation time for both quantum and classical processors grows exponentially with problem size. This makes finding the optimal solution to a problem computationally prohibitive at the 1000+ qubit scale of the D-Wave 2X.
Instead of solving for optimal solutions, many important optimization applications aim for near-optimal solutions. Typically, large-scale solvers try to find a solution that is considered ‘close enough’, or they are given a fixed time budget and return the best solution achieved in that time.
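This "best answer within a fixed time budget" pattern can be sketched with a toy simulated annealer on a small Ising problem. This is purely illustrative (it is not one of the benchmark's tuned solvers, and the cooling schedule is arbitrary): the point is that the loop runs to a wall-clock deadline and returns the lowest-energy configuration seen so far, i.e. a near-optimal rather than provably optimal answer.

```python
import math
import random
import time

def ising_energy(h, J, s):
    """E(s) = sum_i h[i]*s[i] + sum over couplings J[(i, j)]*s[i]*s[j],
    with each spin s[i] in {-1, +1}."""
    e = sum(hi * si for hi, si in zip(h, s))
    e += sum(c * s[i] * s[j] for (i, j), c in J.items())
    return e

def anneal_with_budget(h, J, budget_s=0.05, seed=0):
    """Toy simulated annealer: runs until the time budget expires and
    returns the best (lowest-energy) spin configuration seen so far."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    cur_e = ising_energy(h, J, s)
    best_s, best_e = s[:], cur_e
    temp = 2.0
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        i = rng.randrange(n)
        s[i] = -s[i]                      # propose a single spin flip
        e = ising_energy(h, J, s)
        if e <= cur_e or rng.random() < math.exp((cur_e - e) / temp):
            cur_e = e                     # accept the move
            if e < best_e:
                best_s, best_e = s[:], e  # track best-so-far
        else:
            s[i] = -s[i]                  # reject: undo the flip
        temp = max(0.05, temp * 0.999)    # slowly cool
    return best_s, best_e
```

For example, on a 4-spin antiferromagnetic chain (`h = [0, 0, 0, 0]`, `J = {(0, 1): 1, (1, 2): 1, (2, 3): 1}`) a 50 ms budget comfortably finds the alternating-spin ground state, but nothing in the loop certifies optimality; it simply reports the best energy reached when time runs out.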
Following this approach, we established the Time to Target (TTT) metric and compared the performance of the D-Wave 2X on a host of native hardware problems against highly optimized and tuned solvers.
In the evaluation of quantum annealers, metrics based on ground-state success rates have two major drawbacks. First, evaluation requires computation time for both quantum and classical processors that grows exponentially with problem size. This makes evaluation itself computationally prohibitive. Second, results are heavily dependent on the effects of analog noise on the quantum processors, which is an engineering issue that complicates the study of the underlying quantum annealing algorithm. We introduce a novel “time-to-target” metric which avoids these two issues by challenging software solvers to match the results obtained by a quantum annealer in a short amount of time. We evaluate D-Wave’s latest quantum annealer, the D-Wave 2X system, on an array of problem classes and find that it performs well on several input classes relative to state-of-the-art software solvers running single-threaded on a CPU.
In this study D-Wave introduces a new metric that avoids the problem of prohibitive runtimes and makes evaluation far less sensitive to analog noise. It does this by having the solvers race to a target energy determined by the D-Wave processor’s energy distribution; this is the “time-to-target” (TTT) metric. Using the D-Wave processor as a reference solver in computing the TTT metric circumvents the difficulties of evaluating performance in finding ground states, and makes it possible to explore this interesting property of fast convergence to near-optimal solutions.
The TTT metric identifies low-cost target solutions found by the D-Wave processor within very short time limits (from 15ms to 352ms in this study), and then asks how much time competing software solvers need to find solution energies of matching or better quality. Our results may be summarized as follows.
We observe that the D-Wave processor performs well both in terms of total computation time (including I/O costs to move data on and off the chip), and pure computation times (omitting I/O costs).
* Considering total time from start to finish (including I/O costs), D-Wave 2X TTT times are 2x to 15x faster than the best competing software (at the largest problem sizes) for all but one input class we tested, in which a solver specific to that input class is faster.
* Considering pure computation time (omitting I/O costs), D-Wave 2X TTT times are 8x to 600x faster than competing software times on all input classes we tested.
In these TTT metrics, with the exception of the RAN1 problem class, the single-threaded software solvers evaluated have not kept up with the D-Wave hardware at the full 1097-qubit problem scale. While it is possible that a new software algorithm could be developed that could beat the DW2X in these metrics on a single thread, and we encourage researchers to continue such efforts, we believe that the real question has now turned to multithreaded, multi-core software solvers.
Isakov et al. evaluated parallelized versions of their simulated annealing code with up to 16 threads, but this would be insufficient to match the anneal-time-only performance of the DW2X in many cases, even with idealized perfect parallelism. For these TTT metrics it remains unclear how many CPU cores it would take to match the performance of the DW2X. It bears investigating this question using actual timing on multi-CPU platforms, where memory access and communication costs are likely to dominate single-core instruction times at least as much as programming and readout times dominate the quantum annealer.
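Under the idealized perfect-parallelism assumption mentioned above, the core count needed to close the gap follows directly from the single-thread TTT ratio. A back-of-the-envelope sketch (the specific times below are illustrative, not figures from the study):

```python
import math

def cores_to_match(software_ttt_s, hardware_ttt_s, efficiency=1.0):
    """Idealized estimate of CPU cores needed for a parallel software
    solver to match the hardware's time to target, assuming speedup
    scales as cores * efficiency.  efficiency < 1.0 models the memory
    and communication overhead that real multi-CPU platforms incur."""
    return math.ceil(software_ttt_s / (hardware_ttt_s * efficiency))
```

For a 600x pure-computation gap, even perfect scaling demands 600 cores (`cores_to_match(0.6, 0.001)`); at 50% parallel efficiency the estimate doubles to 1,200, which is why 16 threads fall short in many cases and why actual multi-CPU timing matters.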
While this study has only used CPU-based software solvers, GPUs are becoming increasingly popular as sources of cheap parallelism and are a viable means to fast, cheap Monte Carlo simulation. We are currently investigating GPU-based algorithms to determine how many GPU cores it would take to match the DW2X in these TTT metrics.
SOURCES – D-Wave, arXiv
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.