Designing HPC Systems: OPS Versus FLOPS

HPCWire, by Steve Wallach, Chief Scientist, Co-Founder, and Director, Convey Computer

Building computer systems of any sort, but especially very large ones, is somewhat akin to the process an apartment real-estate developer goes through. The developer has to have an idea of what the final product will look like, its compelling features, and the go-to-market strategy.

Do they build each unit the same, or provide some level of heterogeneity with different floor plans? Do they construct one monolithic building or a village with walkways? What level of customization, if any, should be permitted?

In contemporary HPC design we face similar decision-making. Do we build tightly coupled systems that emphasize floating-point performance and internode bandwidth, or do we build nodes with extensive multi-threading that can randomly reference large data sets? In either case, we need to scale out as far as possible. The two kernels sketched below illustrate the contrast.
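As a rough illustration (my own sketch, not from the article): a STREAM-style triad rewards peak FLOPS and sequential memory bandwidth, while a pointer chase rewards latency tolerance and the ability to keep many outstanding references in flight.

    #include <stddef.h>

    /* FLOPS-bound: STREAM-style triad, sequential and prefetchable. */
    void triad(double *a, const double *b, const double *c,
               double s, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    }

    /* OPS-bound: random pointer chase; each load depends on the last,
       so it is latency-bound and defeats hardware prefetching. */
    size_t chase(const size_t *next, size_t start, size_t hops)
    {
        size_t p = start;
        for (size_t h = 0; h < hops; h++)
            p = next[p];
        return p;
    }

A tightly coupled FLOPS machine runs the first kernel at close to peak; a heavily multi-threaded OPS machine hides the second kernel's latency by running many chases at once.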

Examining these attributes initially suggests substantive differences between the two. Looking at the hardware logic design, however, reveals a somewhat different perspective. Both systems need as much physical memory as can be directly supported, subject to cooling and power constraints. Both would also like as much sustained memory bandwidth as possible.
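To put a number on the bandwidth pressure (figures assumed for illustration, not taken from the article): the triad sketched above moves roughly 24 bytes per 2 floating-point operations, or 12 bytes per flop. A node that sustains 120 GB/s of memory bandwidth can therefore feed only about 10 GFLOP/s on that kernel, no matter how high its peak floating-point rate is.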

For both systems, the logic devoted to the ALUs tends to be minimal, so the die area consumed by a custom-designed floating-point ALU is relatively small. This is especially true when one considers that 64×64-bit integer multiplication is an often-used primitive for address calculation in big-data and HPC applications, and that an integer multiplier is, in many cases, already part of the design of an IEEE floating-point ALU (the significand multiplier in a double-precision unit is essentially a 53×53-bit integer multiplier).
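A minimal sketch of why that multiplier earns its keep on the integer side (the function and names are illustrative, not from the article): indexing a row-major 3-D array costs two 64-bit integer multiplies per element address.

    #include <stdint.h>

    /* Row-major 3-D indexing: two 64x64-bit multiplies per address. */
    static inline double get3d(const double *a,
                               uint64_t nj, uint64_t nk,
                               uint64_t i, uint64_t j, uint64_t k)
    {
        uint64_t idx = (i * nj + j) * nk + k;  /* address arithmetic */
        return a[idx];
    }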

If we dig a little deeper, we come to the conclusion that the major gating item is sustained memory bandwidth and latency. We have to determine how long it takes to access an operand and how many operands can be accessed at once. Given a specific memory architecture, we then need to figure out the best machine-state model for computation: do we use compiler-managed registers built from the RAM that would normally form an L3 cache, or do we keep scaling the conventional cached floor plan? The sketch below illustrates the first option.
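A hedged sketch of the compiler-managed alternative (sizes and names are my assumptions, not from the article): software explicitly stages data through a block of on-chip RAM instead of relying on a transparent cache.

    #include <stddef.h>
    #include <string.h>

    #define SCRATCH_WORDS 4096            /* stand-in for L3-sized on-chip RAM */
    static double scratch[SCRATCH_WORDS];

    /* Scale an array, staging each block through the scratchpad explicitly. */
    void scale_blocked(double *a, double s, size_t n)
    {
        for (size_t base = 0; base < n; base += SCRATCH_WORDS) {
            size_t len = n - base < SCRATCH_WORDS ? n - base : SCRATCH_WORDS;
            memcpy(scratch, a + base, len * sizeof(double));  /* explicit fill   */
            for (size_t i = 0; i < len; i++)
                scratch[i] *= s;                              /* compute locally */
            memcpy(a + base, scratch, len * sizeof(double));  /* explicit drain  */
        }
    }

The design choice is who manages the movement: a cache does the fill and drain transparently on misses, while the compiler-managed model makes both transfers explicit and therefore schedulable.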

In summary, this article has discussed single-node processor architecture for data-centric and conventional high-performance computing. There are many similarities and many differences; the major divergence is in the main-memory reference model and interface. Data caches were created decades ago, and it is no longer clear that this architecture is still optimal. Will Hybrid Memory Cube (HMC) and Processor-in-Memory (PIM) architectures make tradeoffs for newer designs that move away from traditional memory designs? Time will tell.

The next article will discuss the design approaches for global interconnects.