
OpenAI is completing work on its first custom AI training chip design, which is reportedly approaching tape-out, the final step before a design is handed off for semiconductor fabrication. According to a Reuters report, the chip will be manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) on its 3nm process node.
The new chip is expected to feature a systolic array architecture paired with high-bandwidth memory, similar to what Nvidia's latest AI accelerators carry. Systolic arrays are prized for their efficiency and throughput on the dense matrix computations that dominate large-scale AI model training.
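Reuters' report does not go into implementation detail, but the systolic array concept itself is straightforward: a grid of multiply-accumulate cells through which operands flow in lockstep, each cell reusing values passed along by its neighbors rather than fetching them from memory. The Python sketch below is purely illustrative, simulating the dataflow of a generic output-stationary systolic array rather than anything specific to OpenAI's design:

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-level sketch of an output-stationary systolic array.

    PE (i, j) in the grid accumulates C[i, j]. Rows of A stream in
    from the left and columns of B from the top, skewed by one cycle
    per row/column so matching operands meet at the right PE on the
    right cycle.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N))
    # Run enough cycles for the last skewed operands to reach the
    # far corner of the grid.
    for t in range(M + N + K - 2):
        for i in range(M):
            for j in range(N):
                # Index of the operand pair arriving at PE (i, j)
                # on cycle t, after the input skew of i and j cycles.
                k = t - i - j
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]  # one MAC per PE per cycle
    return C

# Sanity check against a conventional matrix multiply.
A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Because each operand is fetched once and then reused as it marches across the grid, the design trades off-chip bandwidth for local data movement, which is why such arrays pair naturally with high-bandwidth memory feeding the grid's edges.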
The chip is slated to enter mass production in 2026, a key milestone for the company. OpenAI has been hiring aggressively for its hardware team, which is led by Richard Ho, a former Lightmatter chip engineering lead who previously helped lead Google's TPU program.
Even with such a massive investment in the project, there is no guarantee that the first tape-out will yield a fully functional chip; a failed tape-out means diagnosing the problem and repeating the step, which can set the project back by months. OpenAI's custom silicon team currently numbers around 40 people and is working with Broadcom, which was deeply involved in developing Google's TPU AI chips.
The Reuters report says the chip will initially be deployed on a limited scale within OpenAI's infrastructure, playing a modest role at first even though it will be capable of both training and running AI models.
The move to develop a custom AI training chip is not entirely unexpected, as OpenAI has long been rumored to be considering building its own hardware. Indeed, CEO Sam Altman had reportedly been seeking investment to build an AI chip venture codenamed Tigris.
The main motivation for developing a custom chip is likely OpenAI's desire to reduce its dependence on Nvidia GPUs. The company has also launched the Stargate data center project, a $500 billion venture backed by SoftBank, Oracle, and investment firm MGX.
Source: Reuters