Inexact computing can improve quality of supercomputing answers by 1,000 times

Computer scientists from Rice University, Argonne National Laboratory and the University of Illinois at Urbana-Champaign have used one of Isaac Newton’s numerical methods to demonstrate how “inexact computing” can dramatically improve the quality of simulations run on supercomputers.

The research is summarized in a paper on the preprint server arXiv and is part of an ongoing effort by scientists at Rice University’s Center for Computing at the Margins (RUCCAM) to dramatically improve the resolution of weather and climate models with new ultra-efficient approaches to supercomputing.


arXiv – Doing Moore with Less – Leapfrogging Moore’s Law with Inexactness for Supercomputing

Accuracy and energy are exchangeable in computation, and sacrificing a small amount of accuracy can yield tremendous energy savings.

“In many situations, having an answer that is accurate to seven or eight decimal places is of no greater value than having an answer that is accurate to three or four decimal places, and it is important to realize that there are very real costs, in terms of energy expended, to arrive at the more accurate answer,” said Rice computer scientist Krishna Palem. “The discipline of inexact computing centers on saving energy wherever possible by paying only for the accuracy that is required in a given situation.”

Palem, who won a Guggenheim Fellowship in 2015 to adapt these approaches to climate and weather modeling, collaborated with Oxford University physicist and climate scientist Tim Palmer to show that inexact computing could potentially reduce by a factor of three the amount of energy needed to run weather models without compromising the quality of the forecast.

In the new research, Palem, working with colleagues at Rice, with a team at Argonne National Laboratory headed by Sven Leyffer and Stefan Wild, and with Marc Snir of the University of Illinois at Urbana-Champaign (UIUC), showed that it is possible to leapfrog from one part of a computation to the next and reinvest the energy saved from inexact computations at each new leap to increase the quality of the final answer while retaining the same energy budget.

Palem likened the new approach to calculating answers in a relay of sprints rather than in a marathon.

“By cutting precision and handing off the saved energy, we achieve significant quality improvements,” said Palem, Rice’s Kenneth and Audrey Kennedy Professor of Computer Science. “This model allows us to change the way computational energy resources are utilized in supercomputers to dramatically improve solutions within a fixed energy budget.”

The research team took advantage of one of the most commonly used tools of numerical analysis, the Newton-Raphson method, created in the 1600s by Isaac Newton and Joseph Raphson. In supercomputing, the method allows high-performance computers to find successively better approximations to the solutions of complex mathematical equations.
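For readers unfamiliar with the method, here is a minimal Python sketch of the classic Newton-Raphson iteration; the test function (finding the square root of 2), the starting guess and the tolerance are illustrative choices, not values taken from the paper.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Classic Newton-Raphson: repeat x <- x - f(x)/f'(x) until |f(x)| < tol.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Example: successively better approximations to sqrt(2), the root of x^2 - 2.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))  # about 1.41421356

Near the solution, each iteration roughly doubles the number of correct digits, which is why the working precision chosen at each step has such a direct effect on the energy spent.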

The researchers demonstrated that the solution’s quality could be improved by more than three orders of magnitude for a fixed energy cost when an inexact approach to calculation was used rather than a traditional high-precision approach.
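As a rough illustration of the reinvestment idea (a sketch under stated assumptions, not the authors’ experimental code), the example below runs the early Newton steps in 32-bit arithmetic and then spends the notionally saved budget on 64-bit refinement steps; the switch threshold and the test function are assumptions made for this example.

import numpy as np

def reinvested_newton(f, df, x0, switch_tol=1e-4, tol=1e-12, max_iter=50):
    # Phase 1: cheap, inexact iterations in single (32-bit) precision.
    x = np.float32(x0)
    for _ in range(max_iter):
        fx = np.float32(f(x))
        if abs(fx) < switch_tol:
            break
        x = x - fx / np.float32(df(x))
    # Phase 2: "reinvest" the saved effort in double (64-bit) refinement steps.
    x = np.float64(x)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Same example as above: approximating sqrt(2).
print(reinvested_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0))

Conceptually, the low-precision phase plays the role of the inexact computation and the high-precision finish is where the saved budget is reinvested; the study itself accounts for this tradeoff in energy rather than iteration counts, with the savings on real hardware coming from the lower cost of reduced-precision arithmetic.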

“In simple terms, it is analogous to rebalancing an investment portfolio,” said Snir, the Michael Faiman Professor in the Department of Computer Science at UIUC. “If you have one investment that’s done well but has maxed out its potential, you might want to reinvest some or all of those funds in a new source with more potential for a much better return on investment.”

Palem said, “A specific goal is to encourage the application of this approach as a way to advance the quality of weather and climate modeling by improving model resolution.”

Conclusions and Outlook

We have developed in this paper a somewhat paradoxical procedure for reducing the errors in numerical computations by reducing the precision of the floating-point operations used. Our focus has been on making the best possible use of a given energy budget. The paper illustrated one possible tradeoff between computation budget and quality, namely the one achieved by changing the numerical precision of floating-point numbers. As mentioned in the introduction, there are many other potential “knobs” to trade off quality against computation budget: one can use different approximations in the mathematical model, different computation methods, different discretizations of the continuous model, different levels of asynchrony, etc. Each of these knobs has been studied in isolation, but they are not independent; we are missing a methodology for finding the combination of choices for these knobs that achieves the best tradeoff between quality and computation effort.

The “dual” problem, that of reducing energy consumption for a given result quality, is also important. Our results essentially show that one can reduce energy consumption by a factor of 2.x, without affecting the quality of the result, through smarter use of single precision. For decades, increased supercomputer performance has meant more double-precision floating-point operations per second. This brute-force approach is going to hit a brick wall pretty soon. Smarter use of available computer resources is going to be the main way of increasing the effective performance of supercomputers in the future.

Among the many scientific domains where effective supercomputing has come to play a central role, none is perhaps more important than weather prediction and climate modeling. Inexactness, or phase I of our approach, has been shown in earlier work to benefit weather prediction models by lowering energy consumption while preserving the quality of the prediction. This has spurred interest among climate scientists who view inexactness through precision reduction as a way of achieving speedups in the traditional sense, and also of coping with energy barriers. However, it is well understood that for serious advances in model quality, weather and climate models need to be resolved at much higher resolutions than is possible today with current computational budgets, including energy. We hope that the new direction demonstrated by the results in this paper, namely the novel approach of reinvestment to raise application quality significantly, will be a harbinger of broader adoption of our two-phased approach by the weather and climate modeling community. In particular, building on our work here, selectively reducing precision to save energy and then reinvesting that energy to increase the resolution of weather and climate models provides a path of considerable societal value.

Abstract

Energy and power consumption are major limitations to continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as an approach to overcoming those limitations. In the past, however, inexactness necessitated highly customized or specialized hardware. The current evolution of commercial off-the-shelf (COTS) processors facilitates the use of lower-precision arithmetic in ways that reduce energy consumption. We study these new opportunities in this paper, using the example of an inexact Newton algorithm for solving nonlinear equations. Moreover, we have begun developing a set of techniques we call reinvestment that, paradoxically, use reduced precision to improve the quality of the computed result: they do so by reinvesting the energy saved by reduced precision.