Last week at SC16, Intel revealed its product roadmap for embedding into its processors the key capabilities needed to take artificial intelligence (AI) to the next level.
Intel will test the first AI-specific hardware, code-named “Lake Crest,” in the first half of 2017, with limited availability later in the year. Lake Crest will be optimized for running neural network workloads, and will feature “unprecedented compute density with a high-bandwidth interconnect.”
Intel has been moving strongly into the deep learning area with several key acquisitions, including Nervana; Movidius, a developer of low-power machine vision technology that it bought in September; and Saffron Technology, a developer of “natural learning” solutions profiled in Datanami.
Intel revealed details about other hardware initiatives, including the next generation of Intel Xeon Phi co-processors, code-named “Knights Mill,” which will deliver up to 4x better performance than the previous generation for deep learning. Those Knights Mill chips will be available in 2017.
If Intel can execute and deliver on what it announced, it will be a serious player in the future of AI.
While GPUs are doing most of the “heavy lifting” for deep neural net training today, there’s no reason why Intel — which acquired FPGA manufacturer Altera in 2015 — can’t influence the technological direction that AI follows.
Altera has offered 10-teraflop FPGA chips for two years. Its Stratix® 10 FPGAs and SoCs, built on Intel’s 14 nm Tri-Gate process and featuring the HyperFlex™ core fabric architecture, deliver 2X the core performance of previous-generation high-performance FPGAs at up to 70% lower power, along with gains in density and system integration.