Analog Solver Could Find the Best Solutions to NP-hard Problems

Zoltan Toroczkai and collaborators have been developing a novel analog mathematical approach intended to advance computation beyond the digital framework.

They have a mathematical, analog “solver” that can potentially find the best solutions to NP-hard problems. NP-hard problems form a class in computational complexity theory that is famous for its difficulty: when the number of variables is large, problems associated with scheduling, protein folding, bioinformatics, medical imaging, and many other areas become nearly unsolvable with known methods. After testing their new method on a variety of NP-hard problems, the researchers concluded that their solver has the potential to lead to better, and possibly faster, solutions than can be computed digitally.

Analog computers were used to predict tides from the early to mid-20th century, guide weapons on battleships, and launch NASA’s first rockets into space. They first used gears and vacuum tubes, and later transistors, which could be configured to solve problems with a range of variables. They perform mathematical operations directly: for instance, to add 5 and 9, an analog computer adds voltages that correspond to those numbers and instantly obtains the correct answer. However, analog computers were cumbersome and prone to “noise” — disturbances in the signals — and were difficult to reconfigure to solve different problems, so they fell out of favor.

Limitations of digital computers may prevent them from solving NP-hard problems with many variables. One such problem is the “Traveling Salesman” problem, in which a salesperson must start in one city and return to it at the end of a trip, but in between must travel to every other city on the list. What is the most efficient route among all the points? The problem becomes exponentially more challenging with the addition of more cities. The difficulty with such optimization problems, Toroczkai noted, is that “while you can always come up with some answer, you cannot determine if it’s optimal. Determining that there isn’t a better solution is just as hard as the problem itself.”
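To see why brute force alone cannot cope, here is a rough sketch (plain digital enumeration, not the researchers’ analog method) that checks every possible tour of a small, made-up distance matrix. It is exact, but the number of candidate tours grows factorially with the number of cities.

```python
# A rough sketch, not the researchers' analog method: exhaustive Traveling
# Salesman search over a tiny, made-up distance matrix.
from itertools import permutations
import math

def tour_length(order, dist):
    """Total length of a round trip visiting cities in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

def brute_force_tsp(dist):
    """Fix city 0 as the start and try every ordering of the rest: (N-1)! tours."""
    n = len(dist)
    best = None
    for rest in permutations(range(1, n)):
        order = (0,) + rest
        length = tour_length(order, dist)
        if best is None or length < best[0]:
            best = (length, order)
    return best

# Hypothetical symmetric distances between 4 cities, for illustration only.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))   # (18, (0, 1, 3, 2)) for this toy instance
print(math.factorial(19))      # tours to check for just 20 cities: ~1.2e17
```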

A challenge for analog computing rests with the design of continuous algorithms. Unlike digital computing, which has a long history of algorithm development, analog computing lacks a comparable knowledge base, so continuous algorithms are very difficult to design. Toroczkai’s approach differs from digital algorithms in every respect.

The next step is to design and build devices based on this approach, a process that will be tackled within Notre Dame’s College of Engineering. The analog computers would be built for specific tasks, and not for everyday computing needs. This work is part of a larger scale, multi-institutional effort, Extremely Energy Efficient Collective Electronics (EXCEL), led by Notre Dame’s Suman Datta, Freimann Chair of Engineering and professor of electrical engineering, in collaboration with Sharon Hu, professor of computer science and engineering.

Nature Communications – A Continuous-time MaxSAT Solver with High Analog Performance

Abstract
Many real-life optimization problems can be formulated in Boolean logic as MaxSAT, a class of problems where the task is finding Boolean assignments to variables satisfying the maximum number of logical constraints. Since MaxSAT is NP-hard, no algorithm is known to efficiently solve these problems. Here we present a continuous-time analog solver for MaxSAT and show that the scaling of the escape rate, an invariant of the solver’s dynamics, can predict the maximum number of satisfiable constraints, often well before finding the optimal assignment. Simulating the solver, we illustrate its performance on MaxSAT competition problems, then apply it to two-color Ramsey number R(m, m) problems. Although it finds colorings without monochromatic 5-cliques of complete graphs on N ≤ 42 vertices, the best coloring for N = 43 has two monochromatic 5-cliques, supporting the conjecture that R(5, 5) = 43. This approach shows the potential of continuous-time analog dynamical systems as algorithms for discrete optimization.
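As a point of reference only, the following sketch spells out the MaxSAT task itself by brute force; it is not the paper’s continuous-time solver, and the clause list is a made-up toy instance.

```python
# A sketch of the MaxSAT task itself (brute force, NOT the paper's
# continuous-time solver): maximize the number of satisfied clauses.
from itertools import product

# Made-up instance: literals are +i (variable i True) or -i (variable i False).
# The first four clauses over x1, x2 cannot all hold at once, so the optimum
# here is 4 of 5 clauses.
clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2], [3]]
num_vars = 3

def satisfied(clause, assignment):
    """A clause holds if at least one of its literals is true."""
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

best = None
for bits in product([False, True], repeat=num_vars):   # all 2^n assignments
    assignment = {i + 1: b for i, b in enumerate(bits)}
    score = sum(satisfied(c, assignment) for c in clauses)
    if best is None or score > best[0]:
        best = (score, assignment)

print(best)   # (4, {...}): the maximum number of satisfiable clauses
```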

Nextbigfuture Commenter Goatguy Provided his Favorable Analysis of the Analog Innovation

Basically, digital-domain computing is vexed by whole ‘constellations’ of problems that are ‘NP-hard’ and ‘NP-complete’. Their perfect solutions tend to require exponential time.

A perfect solution is one that provably finds the best possible result, regardless of the input, no matter how contrived to be tricky.

Moreover, computer science has long known that an efficient, time-fast “solver” for ANY NP-complete problem maps onto solving ALL NP-complete problems in similarly revolutionary time frames. So, the hunt for solvers for ANY of the NP-complete problems is high-stakes poaching.

One of the things you find while doing computationally intense math analysis of continuous systems is how irritatingly error-prone it is to chop the problems up into little discrete bits and efficiently ‘integrate’ the relationships between the bits in such a way as to accurately model what the real-world, actual-analog system(s) will do. The discrete integration and differentiation algorithms are woefully inclined to “go crazy” and not actually deliver useful results, no matter how many damned steps (bits) are used. Vexing is a mild description of it.
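A toy illustration of that “go crazy” behavior (a generic numerical example, not tied to the article): the simplest discrete integrator, explicit Euler, applied to an equation whose exact solution just decays smoothly, blows up as soon as the step size crosses its stability limit.

```python
# Toy example of a discrete integrator "going crazy": explicit Euler applied to
# dy/dt = -20*y, whose exact solution simply decays smoothly to zero.
def euler(step, n_steps, y0=1.0, rate=-20.0):
    y = y0
    for _ in range(n_steps):
        y += step * rate * y        # y_{k+1} = y_k + h * f(y_k)
    return y

print(euler(0.01, 100))   # h = 0.01: behaves, decays toward zero
print(euler(0.2, 100))    # h = 0.2: each step multiplies y by -3, so it explodes
```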

Yet, and perhaps even MORE vexing, is that when “analog computers” were first set up back in the 1950s and all through the 1960s to solve many a complex problem, they unerringly came up with really, really good solutions, even though their accuracy was rarely better than about 1% overall. Three significant figures out of an analog computing solution were rare! But these darn things sent Men to the Moon and got them back again. Digital couldn’t do that (at the time) at all.

So it is a good article.
Good thoughts.
Compelling and fine research.

37 thoughts on “Analog Solver Could Find the Best Solutions to NP-hard Problems”

  1. Yes, I think programmers are uncomfortable with “randomness” as a solution. It is a fine thing to introduce, but it is best to make sure it is at least deterministic in terms of how you establish the randomness.

    Greedy randomization as an input to a traditional heuristic is called GRASP, and it dovetails wonderfully with traditional heuristics (simulated annealing, tabu search, local search, etc.) as a way of providing thoughtful starting states to optimize. It is a simple, well-established metaheuristic and is probably what the D-Wave team talks about when they mention traditional computers helping out their quantum optimizers. (A rough sketch follows the link below.)

    https://en.wikipedia.org/wiki/Greedy_randomized_adaptive_search_procedure
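    For concreteness, here is a rough GRASP-style sketch for the Traveling Salesman problem. The overall structure (randomized greedy construction from a restricted candidate list, followed by local search, repeated) is standard GRASP; the parameter values and helper names are just illustrative.

    ```python
    # Rough GRASP sketch for TSP; parameters and names are illustrative only.
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def grasp_tsp(dist, iterations=50, rcl_size=3, seed=0):
        rng = random.Random(seed)        # deterministic seeding, per the point above
        n, best = len(dist), None
        for _ in range(iterations):
            # Greedy-randomized construction: pick the next city at random from
            # the few nearest unvisited ones (the "restricted candidate list").
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                nearest = sorted(unvisited, key=lambda c: dist[tour[-1]][c])
                pick = rng.choice(nearest[:rcl_size])
                tour.append(pick)
                unvisited.remove(pick)
            # Local search: keep applying improving 2-opt segment reversals.
            improved = True
            while improved:
                improved = False
                for i in range(1, n - 1):
                    for j in range(i + 1, n):
                        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                        if tour_length(cand, dist) < tour_length(tour, dist):
                            tour, improved = cand, True
            if best is None or tour_length(tour, dist) < best[0]:
                best = (tour_length(tour, dist), tour)
        return best
    ```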

  2. I do remember some super vague idea about bubble movement being dependent on Brownian motion to get out of local maxima sticking points.
    But don’t ask me for a reference.

  3. In retrospect I can see how my comment could come off as being aggressive. Sorry for any misunderstanding.

  4. My physical issues are (according to my wife) due more to immaturity than maturity.

    I was riding on my bike on Saturday… and I regained consciousness in the ambulance on the way to hospital.

    That’s it. That’s what I remember.

    Evidence at the crash scene indicates going too fast down a hill littered with too many broken branches from a recent storm.

    Anyway, holding my left arm high enough to type causes pain and cramping after about 2 paragraphs.

  5. I’m so sorry to hear, Doc. Nothing about aging seems very graceful. I, with heart artery stents. My brother with an aneurism (caught in time). Neither of us terribly ancient. 

    I had a right nice invite to a New Year’s Eve party, with my guitar-and-conga bunch of graybeards; as I’m reminded every year, somewhat more bittersweet each time, grace and aging don’t mix. 

    In any case, my best wishes to you. 
    Keep on quipping like a quirky quaint quetzal. 

    GoatGuy

  6. That, good sir, is brilliant. Love it. I could regale you with similar coding stories, but they’re longish and somewhat opaque. The one top-shelf example though is a Sudoku solver that had the annoying propensity to get ‘stuck’ in LONG, deep try-trees, and fail. I found, however, that by introducing a randomize-list-before-evaluating-it ‘enhancement’, in nearly all cases the annoying try-tree digging ‘went away’. Full solutions to really devilishly difficult Sudoku boards in a few seconds, instead of dying on the vine for over a half hour (to find the same solution). Indeed, the most reliable quick solver algorithm had a separate thread that monitored how things were going, and simply re-seeded the random number generator if the current solver had taken more than 3 or 4 seconds. Reseed, restart, and sure enough… a solution in less than 2 seconds, 90% of the time.

    Which then brought an interesting thought: if 90% of the time randomizing the walk-tree resulted in markedly near-optimal runtime, and if restarting it got “another 90%” chance, then after, say, 5 or so reseedings the odds of still being stuck become vanishingly small (a generic sketch of the pattern follows below).

    Another fairly similar problem (that is almost ridiculously fiendish) is the minimum-seeking edge-and-membrane area solver for bubbles arranging themselves on a complex wire frame. In real life, bubbles tend to find almost ridiculously outstanding solutions in parallel, and in the blink of an eye. In practice, algorithmically, the code labors.
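    A generic sketch of that reseed-and-restart pattern (not the actual Sudoku code described above; `solve_once` is a stand-in for any randomized solver with its own internal time budget):

    ```python
    # Generic reseed-and-restart wrapper; solve_once(rng) is any randomized
    # solver that returns a solution, or None if it blows its own time budget.
    import random

    def solve_with_restarts(solve_once, max_restarts=5):
        for attempt in range(max_restarts):
            rng = random.Random(attempt)   # fresh, reproducible seed per restart
            result = solve_once(rng)
            if result is not None:
                return result
        return None                        # every restart blew its budget

    # If one attempt succeeds ~90% of the time, the chance that k restarts all
    # fail is about 0.1**k: roughly one in a hundred thousand for k = 5.
    print([round(0.1 ** k, 10) for k in range(1, 6)])   # 0.1, 0.01, ..., 1e-05
    ```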

  7. Story time! Fun true life story of how “noise” can “help”.

    See, heuristic/metaheuristic code isn’t like normal code. Normal code with a bug will, for the most part, crash or produce some obviously awful result.

    Years ago (when I picked my name) I was working on writing some multi-stage metaheuristic code to solve a relatively common NP problem in computer science.

    This code would work well and produced good results in a timely fashion. Passed all its test cases. Everybody is happy, I go on to other code that needs care and feeding.

    Years later I have some free time and am looking through the code wanting to speed things up. So I do a code review of the code I wrote years ago but with fresh eyes. I find an epic bug in the third heuristic stage.

    But how can this be? Good tests were made, the code had at this point been working without issue for years, everybody was happy. What gives?

    See, metaheuristic code isn’t like normal code. Stage 3 of the metaheuristic code is producing a bad output that stages 1 and 2 deal with and work around.

    So I fix stage 3. All better, right? I rerun the tests and find out that some are better and some are worse. The bug had introduced noise that was cleaned up by the other parts of the code. The noise was working around some local optima and could result in better solutions.

    Metaheuristic code can be helped by noise.
    Metaheuristic code can be fault tolerant.
    Metaheuristic code is quite hard to thoroughly test.
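    A toy illustration of the general point, not the poster’s metaheuristic: plain hill-climbing on a small made-up landscape stalls on the nearer, lower peak, while the same climber with occasional random moves usually crosses the valley and finds the higher one.

    ```python
    # Toy example: greedy hill-climbing stalls on the small peak (value 3 at
    # index 2); adding random "noise" moves lets it reach the big peak (value 7).
    import random

    landscape = [1, 2, 3, 2, 1, 0, 5, 6, 7, 6, 5]

    def climb(start, noisy, steps=2000, seed=1):
        rng, x = random.Random(seed), start
        best = (landscape[x], x)
        for _ in range(steps):
            if noisy and rng.random() < 0.2:
                # Noise: hop 1 or 2 positions in a random direction.
                x = max(0, min(len(landscape) - 1, x + rng.choice([-2, -1, 1, 2])))
            else:
                # Plain greedy step: move to the better neighbor, if any.
                neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
                up = max(neighbors, key=lambda n: landscape[n])
                if landscape[up] > landscape[x]:
                    x = up
            best = max(best, (landscape[x], x))
        return best   # (best value seen, where it was seen)

    print(climb(0, noisy=False))   # (3, 2): stuck on the small peak
    print(climb(0, noisy=True))    # almost always (7, 8): noise crossed the valley
    ```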

  8. In the real world, if you have a bound on optimality, you terminate when you are within some percentage of optimal. In many real-world problems there is a certain degree of GIGO (garbage in, garbage out) in the inputs, and it doesn’t make sense to spend a week getting a provably optimal solution when the inputs are off by 1-5%.
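    A small sketch of that stopping rule (the function name and the 2% default gap are illustrative): terminate once the incumbent solution is provably within a chosen percentage of the best possible value.

    ```python
    # Illustrative gap-based stopping rule for a minimization problem.
    def within_gap(incumbent_cost, lower_bound, gap=0.02):
        """True if the incumbent is provably within `gap` (e.g. 2%) of optimal."""
        return lower_bound > 0 and (incumbent_cost - lower_bound) / lower_bound <= gap

    # A branch-and-bound style loop would check something like:
    #     if within_gap(best_cost_so_far, current_lower_bound): stop searching
    print(within_gap(101.5, 100.0))   # True: within 1.5%, good enough
    print(within_gap(110.0, 100.0))   # False: still 10% above the bound
    ```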

  9. The really interesting questions for me:

    1. Can you build a general purpose analog optimization engine computer?
    2. Can you provide a nice front end that translates a discrete or continuous multi-variable optimization problem to use the general purpose optimization engine?
    3. How will this compare to dedicated quantum optimizers?

    So for 1 I think the answer is probably a yes.
    For 2 the answer seems to be a definite yes.
    For 3, the answer is that the analog machine will be slower but will probably cost one one-hundredth of a D-Wave. This is OK, as we do not need real-time protein folding – you can wait an hour if it saves you money.

  10. The most ancient known analog computer is the Antikythera mechanism, which solved astronomical differential equations (as a calendar) centuries before calculus was invented by Newton and Leibniz. This device, dating from approximately the time of Christ’s birth, shows that sometimes technology arrives well before theory.

  11. Ah, sorry. As I said, English is not my first language, and it sounded that way to me. But if that was not your intent, then it is OK.

  12. Yep. tho’ sometimes that noise actually helps solve the hard problems more accurately than noise-free solvers. There definitely are tradeoffs. 

    Just saying,
    GoatGuy

  13. …continued…

    Moreover, as you now can imagine, it can take a LOT of time, nearly exponential, to decide what the actual absolute best solution is. Hence … NP problems. 

    In ANALOG, the issue becomes getting all those really-close cases to compare accurately. The D→D thing. GoatGuy

  14. The problem, “in a nutshell”, is that for most real-world datasets (upon which one might desire to use an algorithm for an NP-hard or NP-complete problem), the “Devil is in the Details”: namely precision and accumulated error (or, for approximations, accumulated approximation limits).

    For instance: there is a particularly hairy problem called “The Travelling Salesman Problem”, which sets out the conditions of many cities (vertices, abstract positions), many roads (edges, well-defined distances), and sometimes the wrinkle of variable maximum speeds (dv/dt rates). In the case of Google Maps: usage fees, existence of commuter lanes, roadway efficiency (elevation changes, etc.).

    But mostly “vertices and edges” at constant velocity. And the usual goal is “how should the salesman traverse the map (graph) given the need to minimize the time of her transit?” With the caveat being “must visit all cities”. But not “must take all roads”!

    If you think about the problem “like a human”, your eye naturally starts seeing clusters of close vertices, and ones much further away. You naturally try to find good solutions to the clusters, then decent solutions between the small clusters to get them to all ‘stick together’. 

    However, there are quite a few graphs where it just isn’t obvious whether to include ‘this node’ with ‘this cluster’, or the other one, or maybe have it be a nucleus of a new cluster (stealing vertices from neighbors). Takes a lot of trial and error to get a good solution.
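    The “start with what’s nearby” intuition, in its simplest greedy form, is the nearest-neighbor heuristic sketched below (illustrative only): fast and often decent, but with no guarantee of optimality, which is exactly where the trial and error comes in.

    ```python
    # Greedy nearest-neighbor tour construction (illustrative only).
    def nearest_neighbor_tour(dist, start=0):
        tour, unvisited = [start], set(range(len(dist))) - {start}
        while unvisited:
            # Always hop to the closest city not yet visited.
            nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour   # close the loop by returning to `start` at the end
    ```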

  15. Now I see it. I meant that for those problems it is not necessary to have efficient algorithms FOR THEM TO BE IN THE CLASS OF NP-COMPLETE PROBLEMS.

    And I think that it can be deduced from my post, so it would be nice if you at first tried to clarify misunderstanding before bashing people in discussions.

  16. That is the mathematical definition of those problems. English is not my first language, so I am sorry if that sounded wrong, so I will try once again:

    The class of those problems is defined as the class of problems for which any claimed solution can be easily checked for correctness, but the solving itself is not necessarily fast/easy.
    So from the mathematical point of view the problems in this class are defined just as being easily verifiable.

    But of course if we had an efficient algorithm for solving NP-complete problems, it would change the world as we know it – it would allow for better understanding of almost everything, and so it would enable a rapid development of technology in all areas humans are capable of imagining.

  17. “These are problems for which it is not necessary to have efficient algorithms,…”

    Speak for yourself. I mean how can we judge what is or is not necessary merely from the mathematics?

    For all we know, solving NP problems might give us full understanding of protein folding and lead to the solution to aging. Now tell me that stopping millions from dying every year “is not necessary”.

  18. Just a slight misconception – NP actually stands for “nondeterministic polynomial”. These are problems for which it is not necessary to have efficient algorithms, but for which there exist so-called witnesses – proofs of correctness of a solution – which can be verified in polynomial time (for example factorization: while it is hard to factor integers, if someone were to give you a factorization, you can easily verify that it is correct).

    NP-complete problems are the hardest problems in this class. But while it is widely believed that there are no polynomial algorithms for solving them, it is not as obvious as one would think. Indeed, it is one of the most famous open problems in computer science – the P vs NP problem.
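    A tiny illustration of the witness idea from the factoring example (the numbers are arbitrary): verifying a claimed factorization is a single multiplication, while finding the factors by naive trial division means searching through many candidate divisors.

    ```python
    # Verifying a factoring "witness" is cheap; finding the factors is not.
    def verify_factorization(n, factors):
        """Polynomial-time check: do the claimed factors really multiply to n?"""
        product = 1
        for f in factors:
            product *= f
        return product == n

    def factor_by_trial_division(n):
        """The hard direction: search for a divisor (slow as n's digits grow)."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1   # n is prime

    print(verify_factorization(104723 * 104729, [104723, 104729]))  # True, instantly
    print(factor_by_trial_division(104723 * 104729))  # found after ~100,000 divisions
    ```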

  19. I have a sliderule app on my phone.
    Doesn’t solve the out-of-battery problem, but does produce good reactions when someone asks to borrow my calculator.

  20. Always useful to remind the audience what the question actually means, but I think that if this article gets popular we’ll need a primer that’s even more basic than this.

    I’d write it myself, but I have physical issues right now. 🙁

  21. An AM/FM radio is really an analog computer… it’s just doing Fourier math with electronic components.

  22. In my opinion, the modern digital computer can simulate the analog computer. The issue was always to zero out all error each step like the real world does with a Physics Hamiltonian; “Thou shalt not create nor destroy matter nor equivalent energy”!

  23. There is a research trend to use analog computation in some neural network chips. They tend to bundle them with analog storage elements so they can operate with very low power compared with digital circuitry.

  24. Reminding people: “NP” means “non-polynomial“. Which is a wee bit difficult to translate into a “mental image” description. 

    In a nutshell tho’, is the idea of computational complexity: algorithms that “do computations”, usually are given input data that has “N” elements.  You know, like 1000 data points from NOAA remote sensing telemetry, for a certain quadrangle of someplace in Switzerland over a 10 year period.  

    Algorithms that are simple and LINEAR execute in proportion to N. We call that O(N), or “on the order of N”, which means N times some factor k that is related to the processor speed, memory speed, generation, and all that. But definitely a factor of N*k in time to solve.

    A lot of more complicated algorithms are O(n²) or O(n³)… which is to say, that the time it takes depends on kN³ + jN² + dN … (for O(n³)).  At large N, clearly the cubic (N³) term will dominate the runtime.  

    NP problems tho’ are known NOT to be related to any polynomial. Usually they are simple exponentials or power-of-N players. Even when for small N the complete runtime is quick, as N grows, suddenly even the best algorithms can come to their knees.  

    Just saying,
    GoatGuy
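    A quick illustration of those growth rates, with toy numbers purely for scale: the step counts for linear, cubic, and exponential algorithms at the same input sizes N.

    ```python
    # Toy numbers, purely for scale: step counts at the same N for linear,
    # cubic, and exponential algorithms.
    for n in (10, 20, 40, 80):
        print(f"N={n:3d}   O(N)={n:4d}   O(N^3)={n**3:8d}   O(2^N)={2**n:.2e}")
    ```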

  25. Hey… I used one. In fact, in my backpack / rucksack, to this day, lodged in there somewhere is a slide-rule. Which invariably proves useful every once in a while when I don’t have access to my phone (usual reason: batteries exhausted, OR, hands covered with something sticky, grimy, bad).

    On the idea tho’ that it is an analog computer, it is not. It is an ingenious mathematical estimator, and no more. Unlike a true ‘computer’, it hasn’t a single facility to store a multistep program, and repeatably re-execute it.  

    However — in light of the sentiment — the invention of the Operational Amplifier (“OpAmp”) was directly the result of needing high-reliability, good-accuracy “analog computing” elements that were independent of aging and manufacturing variation. We’ve long given up using OpAmps for this purpose (mostly), but it is good to see that analog computing is coming back. Could be quite the breakthrough.

    Just saying,
    GoatGuy
