DARPA challenges: describe the limits of quantum computing and apply QC to improve machine learning

DARPA's Defense Sciences Office (DSO) is seeking information on new capabilities that could be enabled by current and next-generation quantum computers for understanding complex physical systems, improving artificial intelligence (AI) and machine learning (ML), and enhancing distributed sensing. The field of Quantum Computing (QC) has seen considerable progress in recent years, both in the number of qubits that can be physically realized and in the formulation of new quantum search and optimization algorithms. However, numerous challenges remain before QC can be usefully employed on real-world problems. These include challenges of scale, environmental interactions, input/output, qubit connectivity, quantum memory (or lack thereof), quantum state preparation and readout, and many other practical and architectural issues associated with interfacing to the classical world.

Richard Feynman’s original vision for quantum computing sprang from the insight that there are hard problems, e.g. in quantum physics, quantum chemistry, and materials, that are nearly intractable using classical computation platforms but that might be successfully modeled using a universal quantum computer. DARPA seeks to challenge the community to address the fundamental limits of quantum computing and to identify where quantum computing can relevantly address hard science and technology problems, thus realizing Feynman’s original vision. Both near-term (next few years) and longer-term (next few decades) capabilities and their limitations are of interest.

An issue of particular interest is the potential impact of QC on “second wave” AI/ML optimization. ML has shown significant value in a broad range of real-world problems, but both the training time (due to the size and variety of the data needed for learning) and the network design space (due to a paucity of detailed analysis and theory for ML/deep learning (DL) systems) are large. It has been suggested that QC could significantly decrease the training time of currently standard ML approaches by providing quantum speedup on optimization subroutines.

Another approach that has received some attention is the possibility of new capabilities that may be unleashed by the combination of limited quantum computers with either existing quantum sensors or classical computing resources. Such a combination might bypass the problems of state preparation and interfacing to classical memory. In this case it has been posited that by aggregating quantum data from distributed sensors, a quantum computer may improve performance beyond what is classically achievable.

There is also the possibility of adapting to classical computers some of the techniques being developed for handling quantum data, both at the algorithm level and in protocols for loading, storing, and transferring data. These “quantum inspired” approaches may provide novel capabilities in terms of efficiency and speed.
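As one concrete instance, the sketch below illustrates length-squared sampling, the classical analogue of measuring a quantum state and a core primitive in several quantum-inspired linear-algebra results (e.g. Tang’s classical recommendation-system algorithm). It is a minimal NumPy illustration under assumed inputs; the vector length, noise level, and sample count are arbitrary choices, not drawn from the RFI.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_product_estimate(u, v, samples=10_000):
    """Estimate <u, v> by length-squared sampling.

    Draw index i with probability u_i**2 / ||u||**2 (the classical
    analogue of measuring the state |u>), then average the unbiased
    single-sample estimator ||u||**2 * v_i / u_i.
    """
    norm_sq = np.dot(u, u)
    idx = rng.choice(len(u), size=samples, p=u ** 2 / norm_sq)
    return np.mean(norm_sq * v[idx] / u[idx])

n = 1_000_000
u = rng.standard_normal(n)
v = u + 0.1 * rng.standard_normal(n)   # well-correlated test vectors

# 10,000 samples instead of a million multiply-adds:
print(inner_product_estimate(u, v))    # close to the exact value below
print(np.dot(u, v))
```

The number of samples needed depends on how correlated the inputs are rather than directly on their dimension, which mirrors the resource argument behind the quantum algorithms these methods imitate.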

DARPA quantum computing challenges

Challenge 1: Fundamental limits of quantum computing. In order to establish such limits, respondents should address some of the following relevant questions:
* What are the near-term wins in mapping QC to hard science modeling problems?

We impose no constraints on what is meant by quantum computing; e.g. this could be a collection of physical or logical qubits, a quantum annealing machine, a quantum computational liquid, or some other quantum emulation platform that can serve as a proxy for the system to be modeled.

* Address the questions of scale. How many degrees of freedom in the problem of interest must be mapped to the QC platform to realistically model the system? At what scale do known classical computation platforms and algorithms become inadequate, and what are the potential gains brought by QC? (A memory-scaling sketch follows this list.)

* How should the problem be framed; i.e. what are the questions to be addressed in modeling the physical system with a QC proxy system, and how should the quantum states be initialized and read out? Are there any new algorithms to usefully map the real-world quantum system to the proxy system?

* What are the known fundamental limitations to QC and scaling, including limits due to decoherence, degeneracy, environmental interactions, input-output limitations, and limited connectivity in the qubit-to-qubit interaction Hamiltonian? How will coherence times scale with the size of the QC system? Discuss error correction techniques and their scaling. How will errors scale with the size of the system? How valid are assumptions of uncorrelated noise?

* What is the real speedup for known QC algorithms (e.g. HHL, Grover), taking into account the maximum realizable size N of the system, quantum state preparation and readout, limited connectivity in the Hamiltonian, and interfacing to classical memory and the classical world? (A toy Grover illustration follows this list.)

* Experimentally, can we use existing noisy intermediate-scale quantum (NISQ) systems with N ~ 50 to 100 qubits to test our underlying assumptions? This includes: How do errors empirically scale with system size? Are the assumptions of incoherent noise valid? Do error-correction coding algorithms really work as expected? Is it possible to implement any of the proposed quantum algorithms and (a) calculate the correct answer, and (b) achieve the predicted speedups?

* In the case where the size of the system being modeled exceeds the size of the QC platform, are there algorithms, approaches, or hybrid quantum-classical architectures that efficiently break the problem into smaller pieces that can map to one or more small QC platforms? How large a QC (e.g. in number of qubits N) is required to realize significant gains over a classical system (for the most optimistic real-world use case you can find)? Are we there yet, will we get there in the next decade, or will we hit fundamental scaling limits before we reach useful speedups?
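To anchor the scale question above, here is a back-of-envelope sketch of why brute-force classical simulation of generic quantum circuits fails somewhere around 45-50 qubits. It assumes full state-vector simulation with 16 bytes per complex128 amplitude; structured methods such as tensor networks can do far better on low-entanglement problems, so this bounds the naive approach, not all classical algorithms.

```python
# Memory for brute-force state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (complex128).
for n in (30, 40, 50):
    tib = 16 * 2 ** n / 2 ** 40   # bytes -> TiB
    print(f"{n} qubits: {tib:,.3f} TiB")

# 30 qubits:      0.016 TiB (16 GiB)  -- fits on a workstation
# 40 qubits:     16.000 TiB           -- needs a large distributed cluster
# 50 qubits: 16,384.000 TiB (16 PiB)  -- beyond any current machine
```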
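For the speedup question, the toy below simulates Grover search over N = 1024 items on a classical state vector and confirms the ~(π/4)√N oracle-query count. It deliberately ignores exactly the costs the corresponding bullet flags (oracle construction, state preparation, readout, connectivity), which is where claimed speedups typically erode; the problem size and marked index are arbitrary choices.

```python
import numpy as np

n = 10                        # qubits
N = 2 ** n                    # search-space size
marked = 700                  # index of the marked item (arbitrary)

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1.0 / np.sqrt(N))

# Grover iteration: the oracle phase-flips the marked amplitude,
# then the diffusion operator reflects every amplitude about the mean.
iterations = int(np.pi / 4 * np.sqrt(N))   # 25 for N = 1024
for _ in range(iterations):
    state[marked] *= -1.0                  # oracle
    state = 2.0 * state.mean() - state     # diffusion

print(iterations)             # 25 oracle queries vs ~N/2 = 512 classically
print(state[marked] ** 2)     # success probability ~0.999
```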

Challenge 2: Hybrid approaches to machine learning. DARPA is interested in approaches that dramatically improve the total time taken to construct a high-performing ML/DL solution by leveraging a hybrid quantum/classical computing approach. For example, a hybrid approach may incorporate a small-scale quantum computer to efficiently implement specific subroutines that require limited resources in an ML/DL task that is being handled by a classical computer. The challenge here is to identify the best approaches for achieving significant speedup compared to the capabilities of the best known algorithms that run solely on classical computers; a minimal sketch of such a hybrid loop appears after the questions below. Some of the relevant questions are:

* What approaches can be used to efficiently implement ML/DL tasks using a hybrid quantum/classical system using near-term and future QC devices? Are there specific tasks for which such approaches are more beneficial than others?

* How does the speedup depend on the size of the available quantum resources (e.g. number of qubits N)?

* What are the challenges in implementing this idea? For example, what issues have to be dealt with in order to interface quantum and classical resources? Can we efficiently transfer data between the classical and quantum processors in order to see any gains in performance?

* Is there a need to develop additional auxiliary technology to implement such approaches?
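Below is the minimal hybrid-loop sketch promised above: a classical gradient-descent loop driving a quantum subroutine, here faked with a one-qubit state-vector simulation in NumPy. Gradients come from the parameter-shift rule, so the “device” is only ever asked for expectation values. The circuit (a single Ry rotation), learning rate, and step count are illustrative assumptions, not prescriptions from the RFI.

```python
import numpy as np

def expectation(theta):
    """Simulated quantum subroutine: <Z> of |0> after Ry(theta).

    Stands in for a call to quantum hardware; analytically this is
    cos(theta), but it is computed from the state vector the way a
    device would estimate it from measurement counts.
    """
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2   # P(0) - P(1)

# Classical outer loop: gradient descent via the parameter-shift rule,
# which costs two extra circuit evaluations per parameter and never
# needs direct access to the quantum state.
theta, lr = 0.1, 0.4
for _ in range(60):
    grad = (expectation(theta + np.pi / 2)
            - expectation(theta - np.pi / 2)) / 2.0
    theta -= lr * grad

print(theta, expectation(theta))   # theta -> pi, <Z> -> -1
```

Note the deliberately narrow interface: only parameter values flow from classical to quantum, and only scalar expectation values flow back, which is one way around the classical/quantum data-transfer bottleneck raised in the questions above.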

Challenge 3: Interfacing quantum sensors with quantum computing resources. Some of the relevant questions are:
* What new capabilities can be gained through the combination of a quantum computer and distributed quantum sensors? How large does the quantum computer need to be, and how well does it need to operate (e.g. how large a two-qubit gate error can the system tolerate)? How many distributed sensors are needed to see a benefit, and what level of performance do they need to have (e.g. operate at the standard quantum limit or near the Heisenberg limit, etc.)? (A back-of-envelope comparison of these two limits follows this list.)

* What quantum computer platform (e.g. trapped ion qubits, superconducting qubits, etc.) and sensors (atomic clocks, magnetometers, etc.) could potentially be leveraged in this approach?
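The back-of-envelope comparison promised above: required probe counts under the two scaling limits mentioned, with all constant factors deliberately dropped. Real sensor networks land somewhere between these limits, and the precision targets below are arbitrary.

```python
import math

def probes_needed(eps, heisenberg=False):
    """Probes/sensors needed for phase precision eps, constants dropped.

    Standard quantum limit (independent probes): dphi ~ 1/sqrt(M) -> M ~ 1/eps**2
    Heisenberg limit (entangled probes):         dphi ~ 1/M       -> M ~ 1/eps
    """
    return math.ceil(1 / eps) if heisenberg else math.ceil(1 / eps ** 2)

for eps in (1e-2, 1e-3):
    print(eps, probes_needed(eps), probes_needed(eps, heisenberg=True))

# eps = 1e-3: 1,000,000 independent probes vs 1,000 entangled ones --
# the kind of gap a quantum computer plus sensor network might capture.
```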

Challenge 4: QC-inspired algorithms and processes that are applicable to classical computers. Some of the relevant questions are:

* What systematic processes can be learned from the QC-inspired algorithms to date? Are there recurring themes and structures that have arisen in these new solutions?

* Are there approaches for identifying classical algorithm improvements once a quantum supremacy result has been shown for a problem? In other words, can we predict these kinds of inspirations?

* As we learn about interfacing data and computation from Challenges 1, 2, and 3, do we learn better classical architectures for mixing data input/output, memory, and computing together?