Analog Solver Could Find the Best Solutions to NP-hard Problems

Zoltan Toroczkai and collaborators have been developing a novel analog mathematical approach that could help advance computation beyond the digital framework.

They have a mathematical, analog “solver” that can potentially find the best solution to NP-hard problems. NP-hard problems belong to a class from computational complexity theory that is famous for its difficulty. When the number of variables is large, problems in scheduling, protein folding, bioinformatics, medical imaging, and many other areas become practically unsolvable with known methods. After testing their new method on a variety of NP-hard problems, the researchers concluded their solver has the potential to lead to better, and possibly faster, solutions than can be computed digitally.

Analog computers were used from the early to mid-20th century to predict tides, guide weapons on battleships, and launch NASA’s first rockets into space. They were built first from gears and vacuum tubes, and later from transistors, which could be configured to solve problems with a range of variables. They perform mathematical operations directly: to add 5 and 9, for instance, an analog computer adds voltages corresponding to those numbers and instantly obtains the correct answer. However, analog computers were cumbersome, prone to “noise” (disturbances in the signals), and difficult to reconfigure for different problems, so they fell out of favor.

Such restrictions may prevent digital computers from solving NP-hard problems with many variables. One such problem is the “Traveling Salesman” problem, in which a salesperson must start in one city, travel to every other city on the list, and return to the starting city at the end of the trip. What is the most efficient route among all the points? The problem becomes exponentially more challenging with each additional city. The difficulty with such optimization problems, Toroczkai noted, is that “while you can always come up with some answer, you cannot determine if it’s optimal. Determining that there isn’t a better solution is just as hard as the problem itself.”
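For a sense of the scale involved: a brute-force search must examine on the order of (N−1)! tours for N cities, which is hopeless long before N reaches 100. The Python sketch below is purely illustrative (the four-city distance matrix is made up) and has nothing to do with the analog solver itself; it simply shows what exhaustive enumeration looks like.

```python
# Hypothetical brute-force TSP sketch (not the analog solver): exhaustively
# checks every tour starting from city 0, which takes O((N-1)!) time.
from itertools import permutations
import math

def tour_length(tour, dist):
    """Total length of a closed tour that returns to its starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Try every tour; infeasible beyond roughly 12-15 cities."""
    n = len(dist)
    best_tour, best_len = None, math.inf
    for rest in permutations(range(1, n)):
        tour = (0,) + rest
        length = tour_length(tour, dist)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Made-up symmetric distances between 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(brute_force_tsp(dist))  # -> ((0, 1, 3, 2), 23)
```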

A challenge for analog computing rests with the design of continuous algorithms. Unlike digital computing, which has a long history of algorithm development, analog computing lacks a comparable knowledge base, making continuous algorithms very difficult to design. Toroczkai’s approach differs from digital algorithms in every respect.
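To give a flavor of what a continuous algorithm looks like, here is a minimal simulation sketch in the spirit of the continuous-time SAT dynamics Toroczkai’s group has published: each Boolean variable becomes a continuous “spin” s_i in [−1, 1], and each clause gets a violation function K_m and an auxiliary weight a_m that keeps growing while the clause stays unsatisfied. The tiny instance, step size, tolerance, and the use of explicit Euler integration are all simplifying assumptions for illustration, not the paper’s actual formulation or parameters.

```python
# Illustrative sketch (assumptions: made-up 3-SAT instance, Euler integration,
# arbitrary step size and tolerance) of continuous-time SAT dynamics in the
# style of Toroczkai's group. Not the paper's actual solver or parameters.
import numpy as np

# Clause matrix C[m, i]: +1 if clause m contains x_i, -1 if it contains
# NOT x_i, 0 if x_i does not appear. This instance is satisfiable.
C = np.array([
    [+1, +1, -1],
    [-1, +1, +1],
    [+1, -1, +1],
    [-1, -1, -1],
], dtype=float)
M, N = C.shape

def K(s):
    """Clause violation functions K_m in [0, 1]; K_m = 0 iff clause m is satisfied."""
    k = (C != 0).sum(axis=1)          # number of literals per clause
    prod = np.ones(M)
    for m in range(M):
        for i in range(N):
            if C[m, i] != 0:
                prod[m] *= 1.0 - C[m, i] * s[i]
    return prod / 2.0 ** k

rng = np.random.default_rng(0)
s = rng.uniform(-0.5, 0.5, N)         # continuous "spin" per Boolean variable
a = np.ones(M)                        # auxiliary weight per clause
dt = 0.01

for step in range(20000):
    Km = K(s)
    if np.all(Km < 1e-4):             # every clause essentially satisfied
        break
    # Gradient descent on V(s, a) = sum_m a_m * K_m^2 with respect to s.
    grad = np.zeros(N)
    for m in range(M):
        for i in range(N):
            if C[m, i] != 0:
                Kmi = Km[m] / (1.0 - C[m, i] * s[i])   # product over the other literals
                grad[i] += 2.0 * a[m] * C[m, i] * Km[m] * Kmi
    # A small margin keeps the spins strictly inside (-1, 1), so the
    # denominators above never vanish.
    s = np.clip(s + dt * grad, -0.999, 0.999)
    a = a * np.exp(dt * Km)           # unsatisfied clauses gain weight over time

print("assignment:", s > 0)           # s_i > 0 -> x_i = True
print("clause violations:", K(s).round(4))
```

The auxiliary weights are the key design choice: clauses that stay unsatisfied exert ever-stronger pull on the trajectory, which is what lets the dynamics escape local traps that would stall plain gradient descent.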

The next step is to design and build devices based on this approach, a process that will be tackled within Notre Dame’s College of Engineering. The analog computers would be built for specific tasks, and not for everyday computing needs. This work is part of a larger scale, multi-institutional effort, Extremely Energy Efficient Collective Electronics (EXCEL), led by Notre Dame’s Suman Datta, Freimann Chair of Engineering and professor of electrical engineering, in collaboration with Sharon Hu, professor of computer science and engineering.

Nature Communications – A Continuous-time MaxSAT Solver with High Analog Performance

Abstract
Many real-life optimization problems can be formulated in Boolean logic as MaxSAT, a class of problems where the task is finding Boolean assignments to variables satisfying the maximum number of logical constraints. Since MaxSAT is NP-hard, no algorithm is known to efficiently solve these problems. Here we present a continuous-time analog solver for MaxSAT and show that the scaling of the escape rate, an invariant of the solver’s dynamics, can predict the maximum number of satisfiable constraints, often well before finding the optimal assignment. Simulating the solver, we illustrate its performance on MaxSAT competition problems, then apply it to two-color Ramsey number R(m, m) problems. Although it finds colorings without monochromatic 5-cliques of complete graphs on N ≤ 42 vertices, the best coloring for N = 43 has two monochromatic 5-cliques, supporting the conjecture that R(5, 5) = 43. This approach shows the potential of continuous-time analog dynamical systems as algorithms for discrete optimization.
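To make the MaxSAT objective in the abstract concrete: given a set of clauses over Boolean variables, the task is to find the assignment that satisfies as many clauses as possible. The hypothetical brute-force scorer below (a made-up five-clause instance, not the paper’s solver) does this by enumerating all 2^n assignments, which is exactly the exponential cost that makes MaxSAT hard.

```python
# Hypothetical brute-force MaxSAT scorer (made-up instance, not the paper's
# solver): enumerates all 2^n assignments, which is exactly what fails at scale.
from itertools import product

# A clause is a tuple of literals: +i stands for x_i, -i for NOT x_i.
clauses = [(1, 2), (-1, 3), (-2, -3), (2, 3), (-1, -2)]

def satisfied(clause, assignment):
    """True if at least one literal of the clause holds under the assignment."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def max_sat(clauses, n_vars):
    """Return the assignment that satisfies the most clauses, by exhaustion."""
    best, best_count = None, -1
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        count = sum(satisfied(c, assignment) for c in clauses)
        if count > best_count:
            best, best_count = assignment, count
    return best, best_count

print(max_sat(clauses, 3))  # -> ({1: False, 2: True, 3: False}, 5)
```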

Nextbigfuture Commenter Goatguy Provided His Favorable Analysis of the Analog Innovation

Basically, digital-domain computing is vexed by whole “constellations” of problems that are “NP-hard” and “NP-complete”. Perfect solutions to them tend toward exponential complexity.

A perfect solution is one that provably finds the “best result possible”, regardless of the input, no matter how contrived or tricky.

Moreover, computer science has long known that an efficient, time-fast “solver” for ANY NP-complete problem maps onto solving ALL NP-complete problems in similarly revolutionary time frames. So the hunt for solvers for ANY of the NP-complete problems is high-stakes poaching.

One of the things you find while doing computationally intense math analysis of continuous systems, is how irritatingly error-prone it is to chop up the problems into little discrete bits, and efficiently ‘integrate’ the relationships between the bits in such a way as to accurately model what the real-world, actual-analog system(s) will do. The discrete integration and differentiation algorithms are woefully inclined to “go crazy” and not actually deliver useful results. No matter how many damned steps (bits) are used. Vexing is a mild description of it.
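A standard textbook example (not from this paper) makes the point: explicit Euler integration of a stiff differential equation delivers garbage, no matter how reasonable the setup looks, once the step size crosses a stability threshold. The step sizes and step count below are arbitrary choices.

```python
# Textbook illustration (not from this paper) of a discrete integrator "going
# crazy": forward Euler on the stiff ODE dy/dt = -50*y, y(0) = 1. The exact
# solution decays smoothly to zero, but forward Euler is stable only for step
# sizes h < 2/50 = 0.04; beyond that the iterates oscillate and blow up.

def euler(h, steps, lam=-50.0, y0=1.0):
    """Forward Euler for dy/dt = lam * y."""
    y = y0
    for _ in range(steps):
        y += h * lam * y      # y_{n+1} = y_n * (1 + h*lam)
    return y

print(euler(h=0.01, steps=500))  # ~0.0: stable, since |1 + h*lam| = 0.5 < 1
print(euler(h=0.05, steps=500))  # astronomically large: |1 + h*lam| = 1.5 > 1
```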

Yet, and perhaps even MORE vexing, is that when “analog computers” were first set up back in the 1950s and all through the 1960s to solve many a complex problem, they unerringly came up with really, really good solutions, even though their accuracy was rarely better than about 1% overall. Three significant figures out of an analog computing solution were rare! But these darn things sent men to the Moon and got them back again. Digital couldn’t do that (at the time) at all.

So it is a good article.
Good thoughts.
Compelling and fine research.
