If engineers can use new technology to create an exascale system that consumes only 20 MW of power, the same technology can also be used to dramatically lower the power consumption of lower-performance systems. At that point, giga-scale systems consuming only 20 milliwatts of power could be used in small toys, and mega-scale systems consuming only 20 microwatts could be used in heart monitors.
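The figures above follow from holding energy per operation constant across scales. A back-of-envelope sketch (an idealization, assuming the same energy per operation at every performance level) shows the arithmetic:

```python
# Back-of-envelope check: 20 MW at exascale implies 20 pJ per operation,
# and that same energy per operation scaled down gives the smaller figures.
EXA_OPS = 1e18      # operations per second for an exascale system
POWER_W = 20e6      # the 20 MW exascale power target

energy_per_op = POWER_W / EXA_OPS  # joules per operation: 2e-11 J = 20 pJ

# Scale the same 20 pJ/op down to smaller systems:
giga_power = energy_per_op * 1e9   # giga-scale: 0.02 W  = 20 mW
mega_power = energy_per_op * 1e6   # mega-scale: 2e-5 W  = 20 µW
```

In practice the scaling is not perfectly linear, but the exercise shows why an energy-per-operation target set for exascale translates directly into milliwatt and microwatt budgets at smaller scales.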
Borkar said the way forward is to improve both energy per transistor and energy per compute operation. Conventional CMOS scaling improves both, he said, but not to a large enough degree. And indications are that energy per transistor at the circuit level will not decline as much as it has in the past, he said.
“Clearly we need to do something more than just scaling of technology,” Borkar said.
Borkar said near threshold voltage circuit design both reduces total power consumption and improves energy efficiency. “Clearly this is very promising technology,” Borkar said. “But as you start solving the problem of energy efficiency, leakage power dominates.”
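The trade-off Borkar describes can be illustrated with a toy model (all constants below are assumed for illustration, not device data): dynamic energy per operation falls as the square of supply voltage, but circuits slow down near threshold, so leakage current integrates over a longer cycle time and eventually dominates.

```python
# Toy model of near-threshold operation. Dynamic energy per op falls as
# C * V^2, but delay blows up as V approaches the threshold voltage, so
# leakage energy per op (current * voltage * cycle time) grows again.
C = 1e-15      # switched capacitance per op, farads (assumed)
VTH = 0.3      # threshold voltage, volts (assumed)
I_LEAK = 1e-7  # leakage current, amps (assumed)

def delay(v):
    # Alpha-power-law-style delay model: diverges as v -> VTH.
    return v / (v - VTH) ** 2

def energy_per_op(v):
    dynamic = C * v * v
    leakage = I_LEAK * v * delay(v) * 1e-9  # delay scaled to seconds (assumed)
    return dynamic + leakage

# Sweep supply voltage from just above threshold up to ~1 V: total energy
# is minimized at an intermediate point, then rises as leakage dominates
# closer to threshold.
volts = [0.35 + 0.01 * i for i in range(70)]
energies = [energy_per_op(v) for v in volts]
v_opt = volts[energies.index(min(energies))]
```

With these toy constants the minimum-energy point falls around 0.6 V, well above threshold, which matches Borkar's point: pushing supply voltage toward threshold saves dynamic energy only until leakage takes over.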
During Borkar’s 40-minute address Tuesday, he made several other observations. One was to emphasize the importance of “local computing” at a time when everyone is talking up the virtues of cloud computing. He noted that communications technologies used for moving data, including Bluetooth, Ethernet and Wi-Fi, use far more power than those used for local computing within a chip or system. “Clearly, data movement energy will dominate the future,” Borkar said.
ElectroIQ – Borkar also discussed the use of systems on chip (SoC) for targeted efficiency and flexibility — using single-purpose blocks that are extremely energy efficient alongside flexible blocks, such as microprocessor cores, to make a chip accommodate various operations. Borkar calls this “valued performance.”
Other energy-saving device architectures include stacking DRAM with a logic buffer that directs accesses to a specific page. Intel is developing this concept with Micron under the name Hybrid Memory Cube.
Finally, Borkar shared some unconventional interconnect strategies for package-to-system energy savings, such as top-of-package interconnect. He stressed that circuits and interconnects should be co-optimized to maximize energy efficiency.
At CEA-Leti’s research meeting, Hughes Metras, VP of strategic partnerships in North America, also projected that the next step in supercomputing, exascale, would be unsustainably energy intensive. Leti’s solutions to the energy and bandwidth demands of future computing include a planar fully depleted silicon on insulator (FDSOI) transistor architecture, silicon photonics for light-based rather than electrical data communication, and 3D integration for shorter, lower-loss interconnects.
Maud Vinet, Leti assignee at IBM, focused on planar FDSOI transistors. The benefit of planar technology is that most of the wafer-processing technologies carry over from bulk CMOS. The biggest change is that planar FDSOI uses extremely thin (a few nanometers) silicon films, so extra attention must be paid at any step where silicon could be lost. The smaller gate lengths of planar FDSOI reduce parasitics, enabling faster operation. Back bias allows the device’s threshold voltage to be tuned, a concept discussed during Intel’s keynote as well. Other elements — strain on the NFET, silicon germanium (SiGe) for the PFET — combine in planar FDSOI to enable 30% less power dissipation, or wasted energy, than bulk transistors.
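One way to read a 30% power-dissipation figure (this is illustrative arithmetic, not Leti's data): dynamic power scales as C·V²·f, so at the same frequency and switched capacitance, a 30% power reduction corresponds to running at roughly √0.7 ≈ 84% of the original supply voltage.

```python
# Illustrative arithmetic only: dynamic CMOS power scales as C * V^2 * f,
# so a 30% power cut at fixed C and f maps to a ~16% supply-voltage cut.
def dynamic_power(c, v, f):
    return c * v * v * f

P0 = dynamic_power(1.0, 1.0, 1.0)   # normalized baseline
v_scaled = 0.7 ** 0.5               # ~0.837 of the original supply
P1 = dynamic_power(1.0, v_scaled, 1.0)
ratio = P1 / P0                     # 0.7, i.e. 30% less dynamic power
```

In a real device the savings come from a mix of voltage headroom, reduced leakage, and reduced parasitics rather than voltage scaling alone, but the quadratic voltage dependence is why even modest supply reductions pay off.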
2. Technology Review – As chip makers aggressively scale down integrated circuits to provide ever more computing power, much of the focus has been on improving transistors. But performance has also been limited by the copper wiring that shuttles information around the chips.
Today, at the Semicon West conference in San Francisco, semiconductor equipment maker Applied Materials announced a tool that it says solves a part of this problem by making chip wires that have fewer errors. Industry watchers say the new technology may stave off expensive manufacturing problems in the short term.
The company says its new copper-deposition machine, called Endura Amber, can make copper interconnects smaller than 10 nanometers without impacting yield. Like previous machines, it uses a process called ionized physical vapor deposition to coat the chip with a layer of copper. What’s new is that the machine then heats up the chip so that the copper flows into the holes it must fill, reducing the likelihood of defects. Carrying out the deposition and heating steps in the same chamber is not trivial and was something engineers at the company originally considered a “cockamamie idea,” says Kevin Moraes, who manages Applied Materials’ metal deposition products.
This cockamamie idea could help manufacturers use existing chip-making infrastructure for the next generation of chips. But it won’t solve the bigger problem: the fact that smaller copper wires cause major performance problems.
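The performance problem is geometric before it is anything else. A rough sketch (idealized, using bulk copper resistivity only) shows how fast resistance climbs as wires shrink:

```python
# Rough illustration of why thinner copper wires hurt: resistance per
# unit length grows as the inverse of cross-sectional area. Real
# sub-20 nm wires are worse still, because surface and grain-boundary
# scattering raise the effective resistivity above the bulk value.
RHO_CU = 1.68e-8  # bulk copper resistivity, ohm·m

def resistance_per_um(width_nm, height_nm):
    area = (width_nm * 1e-9) * (height_nm * 1e-9)  # cross-section, m^2
    return RHO_CU * 1e-6 / area                    # ohms per micrometer

r_wide = resistance_per_um(20, 40)   # example dimensions (assumed)
r_narrow = resistance_per_um(10, 20)
# Halving both dimensions quadruples resistance per unit length,
# before any scattering effects are counted.
```

Higher wire resistance means more resistive heating and longer RC delays, which is why shrinking the wires, even with defect-free deposition, degrades performance.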
The solution that would cause the least disruption to chip-making infrastructure would be to find another metal that remains conductive even when made into very thin wires, and which doesn’t heat up as much as copper, says Jonathan Candelaria, director of interconnect sciences at the Semiconductor Research Corporation. Researchers are looking at various alloys, tungsten, or the possibility of returning to aluminum, the interconnect material of choice until about 20 years ago.
For a while, researchers put great hope in new carbon nanomaterials, including graphene. Part of the problem with copper is that electrons scatter off imperfections in the material. Nanotubes and graphene, by contrast, provide smooth sailing for electrons. But researchers are still learning how to work with these materials. So researchers such as Geer are trying to develop new ways of structuring conventional metals so that, like nanotubes and graphene, they conduct without scattering.