New Cerebras AI Wafer Chip Has 1,000X the On-Chip Memory of Regular Chips

Cerebras has launched its second-generation wafer-scale chip, the WSE-2. It is built on a 7-nanometer process instead of the 16-nanometer process used two years ago for the WSE-1.

The WSE-2 covers 46,255 square millimeters, 56 times larger than the largest graphics processing unit. With 850,000 cores, 40 gigabytes of on-chip SRAM, 20 petabytes/sec of memory bandwidth, and 220 petabits/sec of interconnect bandwidth, the WSE-2 has 123 times more compute cores, 1,000 times more high-speed on-chip memory, 12,862 times more memory bandwidth, and 45,833 times more fabric bandwidth than its graphics processing competitor. In effect, it provides the compute capacity of an entire cluster in a single chip, without the cost, complexity, and bandwidth bottlenecks involved in lashing together hundreds of smaller devices.
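The article's comparison ratios can be reproduced from the published WSE-2 specs. A minimal sketch, assuming the unnamed "graphics processing competitor" is NVIDIA's A100 (the A100 figures below are assumptions, not stated in the article):

```python
# WSE-2 figures as given in the article.
wse2 = {
    "cores": 850_000,
    "sram_bytes": 40e9,         # 40 GB on-chip SRAM
    "mem_bw_bytes_s": 20e15,    # 20 PB/s memory bandwidth
    "fabric_bw_bits_s": 220e15, # 220 Pbit/s interconnect bandwidth
}

# Assumed comparison GPU: NVIDIA A100 (figures are my assumptions).
a100 = {
    "cores": 6_912,             # CUDA cores
    "sram_bytes": 40e6,         # 40 MB L2 cache
    "mem_bw_bytes_s": 1_555e9,  # 1,555 GB/s HBM2
    "fabric_bw_bits_s": 4.8e12, # 600 GB/s NVLink = 4.8 Tbit/s
}

# Each ratio matches the multiplier quoted in the article.
for key in wse2:
    print(f"{key}: {wse2[key] / a100[key]:,.0f}x")
```

Under these assumptions the printed ratios come out to roughly 123x, 1,000x, 12,862x, and 45,833x, matching the article's figures.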

The Cerebras CS-2 removes the primary impediment to the advancement of artificial intelligence, reducing the time it takes to train models from months to minutes and from weeks to seconds, allowing researchers to be vastly more productive. In so doing, the CS-2 reduces the cost of curiosity, accelerating the arrival of the new ideas and techniques that will usher in tomorrow's AI.

Cerebras has about 300 staff across Toronto, San Diego, Tokyo, and San Francisco, and is already profitable, with dozens of customers. Beyond AI, Cerebras is gaining traction in commercial high-performance computing markets such as oil and gas and genomics. CS-2 deployments will begin in Q3 of this year, and the price has risen from roughly $2 million to $3 million to "several" million.

SOURCES Cerebras, Anandtech
Written By Brian Wang, Nextbigfuture.com