The D-Wave quantum computer has a grid of connected qubits, and a problem must be mapped onto that grid before it can be solved.
What can we say about whether or not a given problem graph can be embedded into the hardware graph? Here are some things that we know:
1. You can always embed a fully connected graph on N variables into a Chimera graph with on the order of N^2 qubits.
2. You can never embed a problem graph that has more vertices or more edges than the hardware graph.
3. Any instance in between must be decided by solving what is, in general, a hard problem in its own right: does G_p embed in G? In practice, if you wanted to operate in this regime, you'd probably run a fast heuristic embedder to check whether a problem natively embeds.
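Observation 2 gives a cheap necessary (but not sufficient) test that can be run before handing a graph to an expensive heuristic embedder. A minimal sketch, with the graphs represented simply as edge lists (the function name and representation are my own illustration, not part of any D-Wave API):

```python
def may_embed(problem_edges, hardware_edges):
    """Necessary (not sufficient) conditions from observation 2:
    a problem graph with more vertices or more edges than the
    hardware graph can never be embedded into it."""
    p_vertices = {v for e in problem_edges for v in e}
    h_vertices = {v for e in hardware_edges for v in e}
    if len(p_vertices) > len(h_vertices):
        return False
    if len(problem_edges) > len(hardware_edges):
        return False
    # "In the middle": only the hard check (e.g. a heuristic
    # embedder) can settle whether G_p really embeds in G.
    return True

# Example: K_3 (a triangle) against a 4-cycle of qubits.
k3 = [(0, 1), (1, 2), (0, 2)]
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(may_embed(k3, c4))  # True: the counts pass, but embeddability
                          # itself still needs the hard check
```

A `True` here only means the cheap test didn't rule the problem out; the actual embedding question remains for the heuristic embedder.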
Based on these observations, there are three obvious ways these chips can be operated.
1. Use the chip as a solver for complete graphs, where the algorithm calling the chip has a hard-coded embedding scheme for any problem edge set. This option has the advantage that it is the most flexible for inclusion in the master algorithm (no restrictions on the problem edge set), and there is no runtime cost for embedding. Its key disadvantage is that the number of variables in the problems you can solve in this mode is upper bounded by roughly the square root of the number of qubits in hardware, which is a significant cost with the current chips and their low qubit count.
[Noted in comments: 2000 qubits seems to be where operating in mode 1 starts to take off. You would have K_256 and K_192.]
2. Use the chip as a solver for problem graphs that exactly match the hardware graph by making this a hard constraint in the master algorithm. In other words, as an algorithm designer you are constrained to only pose problems to the hardware whose problem graphs exactly match the hardware graph. This has the advantage of a trivial hard-coded embedding and maximal use of the hardware's resources. Its key disadvantage is a severe loss of flexibility in algorithm design.
3. Use the chip for arbitrary problem sizes by running an embedding heuristic each time the master algorithm generates a possibly embeddable graph. This has the advantage of a significant gain in flexibility for algorithm designers. The main disadvantage is the increased runtime burden of computing embeddings on the fly.
My own strong preference is for #2 above. The loss of flexibility in algorithm design is a big problem, but we've already found ways to build algorithms that respect fixed hardware graphs, so I think the advantages of #2 carry the day, at least in the short term.