July 10, 2019 |
Brian Wang |
Nvidia’s AI platform set eight records in training performance, including three in overall performance at scale and five on a per-accelerator basis. The AI platform now slashes through models that once took a whole workday to train in less than …
June 19, 2019 |
Brian Wang |
Using HPL-AI, a new approach to benchmarking AI supercomputers, Oak Ridge National Laboratory’s Summit supercomputer system reached 445 petaflops, or nearly half an exaflop. The system’s official Linpack performance is 148 petaflops, announced in the new TOP500 list of the …
May 7, 2018 |
Brian Wang |
Teal is already selling its dumbed-down but ultra-fast Sport version (at a suggested retail price of $799). And in 2018, the company says its Teal 2 will hit the market, powered by Nvidia’s Jetson GPU and Neurala’s artificial brain. The …
March 27, 2018 |
Brian Wang |
NVIDIA launched NVIDIA DGX-2, the first single server capable of delivering two petaflops of computational power. A DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power …
March 21, 2018 |
Brian Wang |
The Piz Daint supercomputer is a Cray XC50. It was upgraded for about $42 million with almost 5,000 Nvidia processors late in 2016. With a peak performance of seven petaflops, “Piz Daint” has been Europe’s fastest supercomputer since its debut …
January 9, 2018 |
Brian Wang |
With more than 9 billion transistors, Nvidia’s Xavier is the most complex system on a chip ever created, representing the work of more than 2,000 NVIDIA engineers over a four-year period, and an investment of $2 billion in research and …
December 8, 2017 |
Brian Wang |
NVIDIA TITAN V is the most powerful graphics card ever created for the PC, driven by the world’s most advanced architecture—NVIDIA Volta. NVIDIA’s supercomputing GPU architecture is now here for your PC, and fueling breakthroughs in every industry. * Titan …
November 23, 2017 |
Brian Wang |
There are many established and startup companies developing deep learning chips. Google and Wave Computing have working silicon and are conducting customer trials. * Wave Computing says its 3U deep learning server can train AlexNet in 40 minutes, three times …