Cerebras' wafer-size AI chips play nice with PyTorch, TensorFlow

Better support for top ML frameworks means stronger chip competition

Good news for those who like their AI chips big: Cerebras Systems has expanded support for the popular open-source PyTorch and TensorFlow machine-learning frameworks on the Wafer-Scale Engine 2 processors that power its CS-2 system.

The chip designer says the expanded support, announced today, is an important milestone because it will make running mainstream AI/ML models on Cerebras' machines easier, which will, in turn, help the six-year-old startup compete with AI systems and processor makers that have broad language and model support.

"From the start, our goal was to seamlessly support whichever machine learning framework our customers wanted to write in," said Emad Barsoum, senior director of AI framework at Cerebras.

The expanded framework support is now baked into Cerebras' CSoft software stack, which lets ML researchers write models for the CS-2 in TensorFlow or PyTorch without modification. Just as crucial, Cerebras has improved its integrations so that models previously written for GPUs or CPUs can run on the CS-2 without any changes.

"Our customers write in TensorFlow and in PyTorch, and our software stack, CSoft, makes it quick and easy to express your models in the framework of your choice," Barsoum said.

Cerebras claims this is a big deal because its WSE-2 chip is much faster and better equipped to handle models of various sizes than GPUs, including Nvidia's two-year-old flagship A100.

In the case of PyTorch models, Barsoum wrote in a blog post that Cerebras' hardware avoids pitfalls experienced by conventional processors because of the WSE-2's massive number of cores, large amount of memory, and high memory bandwidth.

This means that small and medium-sized models don't have to be split up between multiple processors, which can slow down data movement.
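For the unfamiliar, that splitting looks something like the following in stock PyTorch. The two-GPU layout is our own invention, shown only to illustrate where the inter-device copies creep in; it assumes a machine with two CUDA devices.

```python
import torch
import torch.nn as nn

# Naive model parallelism: the network is carved into two stages,
# each pinned to a different GPU.
class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage0 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.stage1 = nn.Sequential(nn.Linear(4096, 1024), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        # The costly part: activations are copied between GPU memories on
        # every step -- the data movement a single big chip sidesteps.
        x = x.to("cuda:1")
        return self.stage1(x)
```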

As for large PyTorch models that don't fit on the WSE-2, Barsoum said the wafer-sized processor can still perform well by keeping the model's activations on the chip and streaming the parameters, known as weights, to and from the chip layer by layer.
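We'd sketch that layer-by-layer idea in plain PyTorch like so. This is a conceptual analogy only, not Cerebras code: host RAM stands in for off-chip weight storage, and the function name is our own.

```python
import torch
import torch.nn as nn

def stream_forward(layers, x, device):
    """Forward pass for a model whose weights don't all fit on the device:
    activations stay resident; each layer's weights are staged in only
    while that layer runs, then evicted again."""
    x = x.to(device)
    for layer in layers:
        layer.to(device)   # stream this layer's weights onto the device
        x = layer(x)       # activations never leave the device
        layer.to("cpu")    # evict the weights to make room for the next layer
    return x

# Stand-in for a model too big to hold all at once.
layers = [nn.Linear(512, 512) for _ in range(8)]
device = "cuda:0" if torch.cuda.is_available() else "cpu"
out = stream_forward(layers, torch.randn(4, 512), device)
print(out.shape)  # torch.Size([4, 512])
```

The trade is extra weight traffic per layer in exchange for never having to shard the activations, which is roughly the bargain Barsoum describes.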

This all sounds good, but Cerebras still has a way to go before it can take even a small slice of market share from Nvidia, and that's before accounting for the AI acceleration efforts of Intel, AMD and a crop of startups at various stages of deployment.

Even though the wafer-scale processor maker isn't poised to mount a massive challenge to the largest semiconductor companies pushing processors and accelerators, Cerebras has racked up customers across North America, Asia, Europe and the Middle East. The list includes companies like GlaxoSmithKline, AstraZeneca and TotalEnergies as well as national and regional labs like Argonne National Laboratory and the Pittsburgh Supercomputing Center.

Just today, the Pittsburgh Supercomputing Center said it has upgraded its Neocortex high-performance AI computer with two new CS-2 systems, which the lab said creates "new potential for rapidly training AI systems capable of learning from vast data sources."

With a growing cadre of end users to point to and a more approachable software stack for AI developers, we might expect more big names to back the unique architectural approach. If nothing else, the company has shown that the once far-out concept of wafer-scale chips is within reach, and that the software stack needed to tame on-chip complexity can be worked out, even if it takes years. ®
