- Cerebras Systems has launched a new chip with 1.2 trillion transistors, 400,000 processing cores, and 18GB of on-chip memory.
- The California-based startup fit all of that on a single wafer measuring 46,225 square millimeters.
- The main purpose of the chip is to remove the bottleneck of limited data processing in the training of artificial intelligence (AI) algorithms.
The biggest and baddest of them all has finally made it to market after three years. It took 173 engineers and $112 million in venture capital funding from Benchmark, Foundation Capital, Eclipse, and other firms to build it.
Cerebras Systems’ ‘Wafer Scale Engine’, as it is called, has more cores, more memory, and more power than any chip to date. The California-based startup is trying to match the pace of chip evolution to the growing needs of the tech industry.
Over a trillion transistors
The Wafer Scale Engine has 1.2 trillion transistors, the highest number of transistors on any chip so far.
To put that in perspective, a laptop with an Intel Core i3, i5, or i7 processor has about 1.5 billion transistors.
Even Advanced Micro Devices’ (AMD) 7-nanometer Epyc “Rome” CPU, announced earlier this month with 32 billion transistors, doesn’t come close to matching this.
In addition to its massive transistor count, the Wafer Scale Engine has 400,000 processing cores.
Total recall
At 18GB, the Wafer Scale Engine also has more memory than any other chip on the market, and memory is one of the key components of a computer’s architecture.
More memory on-chip means faster calculation, lower latency, and better power efficiency as data moves from one place to the other.
More impressively, all of this sits on a single wafer that functions as one chip, instead of the wafer being diced into individual chips, as has been standard for every chip until now.
The reason it hasn’t been done till now is that making it happen raises multiple issues.
One, the software to handle these chips has to be written from scratch.
And two, since all of it is one wafer, there has to be a fail-safe in case even a single core goes down, which would otherwise render the entire chip inoperative.
Cerebras overcame these hurdles by building in redundant cores that can take over in the event that one of the cores stops working. It’s not a cheap fix, but it works, according to the company.
Size does matter
The Wafer Scale Engine has been specifically designed to process AI applications. And, when it comes to AI, chip size is important.
Bigger chips can handle more information and produce answers in less time. The ‘training time’ that machine-learning algorithms require is reduced substantially.
Researchers, in turn, can test many more ideas and feed in bigger chunks of data to solve their problems.
The Wafer Scale Engine, in this regard, measures 46,225 square millimeters.
Nvidia — known as the market leader in graphics processing units (GPUs) — only offers 815 square millimeters and 21.1 billion transistors on its largest processor. That means the Wafer Scale Engine is 56.7 times larger, 3,000 times faster and has 10,000 times more memory bandwidth.
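The 56.7× figure follows directly from the die areas quoted above, and the transistor counts give a nearly identical ratio. A quick sanity check of the arithmetic, using only the numbers reported in this article:

```python
# Size comparison between Cerebras's Wafer Scale Engine and Nvidia's
# largest GPU die, using the figures quoted in the article.
wse_area_mm2 = 46_225        # Wafer Scale Engine area, square millimeters
gpu_area_mm2 = 815           # Nvidia's largest processor, square millimeters
wse_transistors = 1.2e12     # 1.2 trillion transistors
gpu_transistors = 21.1e9     # 21.1 billion transistors

area_ratio = wse_area_mm2 / gpu_area_mm2
transistor_ratio = wse_transistors / gpu_transistors

print(f"Area ratio: {area_ratio:.1f}x")              # ~56.7x, as the article states
print(f"Transistor ratio: {transistor_ratio:.1f}x")  # ~56.9x
```

The two ratios matching so closely simply reflects that both chips are built at broadly comparable transistor densities; the speed and memory-bandwidth multiples quoted above come from Cerebras's own claims, not from this arithmetic.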