Computational chemistry advance: 100 times more accurate

James Sims of NIST and Stanley Hagstrom of Indiana University announced a new high-precision calculation of the energy required to pull apart the two atoms in a hydrogen molecule (H2). Accurate to 1 part in 100 billion, these are the most accurate energy values ever obtained for a molecule of that size, 100 times better than the best previous calculated or experimental values. This advance could be useful for creating better computer simulations for molecular nanotechnology. The algorithmic improvements that make faster and more accurate solutions possible were adapted to use parallel processing; the final runs used a 147-processor cluster over a weekend.
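To get a feel for what 1 part in 100 billion means in absolute terms, here is a minimal Python sketch. The ~36,118 cm^-1 figure is an approximate literature value for the H2 dissociation energy, used purely for illustration and not the paper's reported number.

# Convert a relative accuracy of 1 part in 100 billion into an absolute
# uncertainty, using an approximate H2 dissociation energy for scale.
dissociation_energy_cm1 = 36_118.0   # approximate D0 of H2, in cm^-1 (illustrative)
relative_accuracy = 1e-11            # 1 part in 100 billion
print(dissociation_energy_cm1 * relative_accuracy)  # ~3.6e-07 cm^-1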

More computer systems are being developed that will help take advantage of this kind of algorithmic advance. Systems offering 1,000 processors via FPGAs for about $100,000 are expected later this year and next.

Intel is promising hundreds of processor cores within ten years.

Background on supercomputer architectures
1. Vector processors that can execute particular types of mathematical operations very quickly (traditional Cray-type machines).
2. Large numbers of conventional processors, typically spread across many networked computers (IBM Blue Gene-type supercomputers).
3. Field-programmable gate arrays (FPGAs), chips that can be reconfigured on the fly to run specific programs very quickly.
4. Multithreaded chips.

Details on the algorithmic advance:
The calculation requires solving an approximation of the Schrödinger equation, one of the central equations of quantum mechanics. The solution can be approximated as the sum of an infinite number of terms, each additional term contributing a bit more to the accuracy of the result. For all but the simplest systems, or beyond a relative handful of terms, the calculation rapidly becomes impossibly complex. Precise calculations had previously been done only for systems of three particles; the hydrogen molecule has four (two nuclei and two electrons). Sims and Hagstrom carried their calculations out to 7,034 terms, merging two earlier algorithms. They also developed improved computer code for a key computational bottleneck, the high-precision solution of the large-scale generalized matrix eigenvalue problem, using parallel processing. The final calculations were run on a 147-processor parallel cluster at NIST over the course of a weekend.
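For a concrete picture of that bottleneck, here is a minimal sketch, assuming the standard variational setup in which an N-term expansion leads to a generalized symmetric eigenvalue problem H c = E S c, with H the Hamiltonian matrix, S the overlap matrix, and the lowest eigenvalue approximating the ground-state energy. It uses a tiny random matrix pair in ordinary double precision via SciPy; the actual calculation involved 7,034 terms and much higher-precision arithmetic distributed over the cluster.

# Toy generalized symmetric eigenvalue problem H c = E S c, the kind of
# linear algebra that forms the bottleneck described above. Matrix sizes,
# values, and precision here are illustrative only.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 50                                  # toy number of basis terms (real work: 7,034)
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                       # symmetric "Hamiltonian" matrix
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)             # symmetric positive-definite "overlap" matrix
energies, coeffs = eigh(H, S)           # solve H c = E S c
print("lowest eigenvalue (toy ground-state energy):", energies[0])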