Power has become the primary design constraint for chip designers today. While Moore's law continues to provide additional transistors, power budgets have begun to prohibit those devices from actually being used. To reduce energy consumption, voltage scaling has proved a popular approach, with subthreshold design representing its endpoint. Although extremely energy efficient, subthreshold design has been relegated to niche markets because of its major performance penalties. This paper defines and explores near-threshold computing (NTC), a design space where the supply voltage is approximately equal to the threshold voltage of the transistors. This region retains much of the energy savings of subthreshold operation while offering more favorable performance and variability characteristics, making it applicable to a broad range of power-constrained computing segments, from sensors to high-performance servers. The paper also explores the barriers to the widespread adoption of NTC and describes current work aimed at overcoming these obstacles.
Near-threshold computing could enable future computer systems to reduce energy requirements by 10 to 100 times or more by optimizing them for low-voltage operation.
Near-threshold computing could be the key to decreasing power requirements without overturning the entire CMOS framework, the researchers say. Low-voltage computing is already popular as an energy-efficient technique in ultralow-energy niche markets such as wristwatches and hearing aids, but its long circuit delays, and the leakage energy that accumulates over those delays, have made it impractical for most computing segments. So far, these ultralow-energy devices have operated at extremely low "subthreshold" voltages, from around 200 millivolts down to the theoretical lower limit of 36 millivolts, whereas conventional operation uses a supply of about 1.0 volt. Near-threshold operation sits in between, at around 400-500 millivolts, near a device's threshold voltage.
Near-threshold computing challenges
1. a 10 times loss in performance
2. a five times increase in performance variation
3. a five-order-of-magnitude increase in functional failure rate
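The energy-versus-performance trade-off behind these numbers can be sketched with a simple first-order model. The sketch below is an illustration only, not the paper's methodology: it assumes dynamic switching energy scales as CV², uses the alpha-power-law delay model, and picks an illustrative threshold voltage (0.35 V) and velocity-saturation exponent (alpha = 1.3) that are not from the article.

```python
# Toy first-order model of voltage scaling trade-offs.
# Assumptions (illustrative, not from the paper):
#   - dynamic switching energy E ~ C * V^2 (C normalized to 1)
#   - alpha-power-law gate delay: delay ~ V / (V - Vth)^alpha
#   - Vth = 0.35 V, alpha = 1.3

def dynamic_energy(v, c=1.0):
    """Dynamic switching energy, proportional to C * V^2."""
    return c * v * v

def gate_delay(v, vth=0.35, alpha=1.3):
    """Alpha-power-law gate delay, proportional to V / (V - Vth)^alpha."""
    if v <= vth:
        raise ValueError("this simple model is only valid above threshold")
    return v / (v - vth) ** alpha

v_nominal = 1.0   # conventional supply, ~1.0 V per the article
v_ntc = 0.45      # near-threshold supply, in the 400-500 mV region

energy_savings = dynamic_energy(v_nominal) / dynamic_energy(v_ntc)
slowdown = gate_delay(v_ntc) / gate_delay(v_nominal)

print(f"dynamic energy savings at {v_ntc} V: {energy_savings:.1f}x")
print(f"gate-delay slowdown at {v_ntc} V:   {slowdown:.1f}x")
```

Even this crude model shows the shape of the trade-off: quadratic energy savings from lowering the supply, but delay that blows up as the supply approaches the threshold voltage, which is why near-threshold operation trades a large energy win for a significant (though not subthreshold-scale) performance loss.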
With power becoming a key design constraint, particularly in server machines, emerging architectures need to leverage reconfigurable techniques to provide an energy-optimal system. Designers value a single-chip solution that fits all needs in a warehouse-sized server: it allows for simpler design, easier programmability, and part reuse across all segments of the server. A reconfigurable design would let a single chip operate efficiently in every aspect of a server, providing single-thread performance for tasks that require it and efficient parallel processing that helps reduce power consumption. In this paper we explore the possibility of a reconfigurable server part and discuss the benefits and open questions still surrounding these techniques.
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.