Diane Bryant, executive vice president and general manager of the Data Center Group at Intel, says the Intel Nervana platforms will “produce breakthrough performance and dramatic reductions in the time to train complex neural networks.” She predicted that Intel would deliver a 100x increase in the performance of deep learning training.
Last week at SC16, Intel revealed a product roadmap for equipping its processors with the capabilities needed to take artificial intelligence (AI) to the next level.
Intel will test the first AI-specific hardware, code-named “Lake Crest,” in the first half of 2017, with limited availability later in the year. Lake Crest will be optimized for running neural network workloads, and will feature “unprecedented compute density with a high-bandwidth interconnect.”
Intel has been moving strongly into the deep learning area with several key acquisitions, including Nervana; Movidius, a developer of low-power machine vision technology that it bought in September; and Saffron Technology, a developer of “natural learning” solutions profiled in Datanami.
Intel revealed details about other hardware initiatives, including the next generation of Intel Xeon Phi co-processors, code-named “Knights Mill,” which will deliver up to 4x better performance than the previous generation for deep learning. Those Knights Mill chips will be available in 2017.
“If Intel can execute on and deliver what it said it would do today, Intel will be a future player in AI.”
While GPUs are doing most of the “heavy lifting” for deep neural net training today, there’s no reason why Intel — which acquired FPGA manufacturer Altera in 2015 — can’t influence the technological direction that AI follows.