OpenAI’s first AI chip enters final design stage, mass production planned for 2026: Report


OpenAI is finalising the design for its first in-house chip in the next few months and plans to send it to Taiwan Semiconductor Manufacturing Co (TSMC) for fabrication, Reuters reported. The ChatGPT maker’s efforts to produce its first-generation AI chip aim to reduce its reliance on Nvidia.

The process of sending initial designs to a chip fab for manufacturing is called ‘taping out’. A tape-out can take around six months and cost tens of millions of dollars. There is also no guarantee that the chip will work as expected on the first tape-out; if a problem emerges, OpenAI will have to diagnose it and repeat the whole process.

What will OpenAI do with its first-generation AI chip?

If the initial tape-out goes smoothly, OpenAI will reportedly be able to mass-produce its AI chips by 2026. The chip will be built on TSMC’s 3-nanometer process technology and will use a common systolic array architecture paired with high-bandwidth memory, a combination Nvidia also uses in its chips, according to the report.

The AI chip will be capable of both training and running AI models, but it will initially be deployed only to a limited extent, mainly for running models rather than training them, and is likely to play a small role within OpenAI’s infrastructure at first.

Why is OpenAI building its own AI chip? 

Generative AI chatbots such as ChatGPT, Gemini and Meta AI have traditionally required large numbers of chips to train their foundation models. The powerful chips required for these operations are mostly supplied by Nvidia, which holds around 80% of the market.

OpenAI’s efforts to build its own chip will likely allow it to increase its bargaining power with chip suppliers, including Nvidia. If the first chip is a success, OpenAI is reportedly planning to develop more advanced processors, with capabilities increasing with each generation.

Is OpenAI ahead of other big tech? 

OpenAI’s plan to send its first chip design to TSMC later this year would put it ahead of the curve, as other chip designers have taken years longer. Notably, other big tech companies such as Satya Nadella-led Microsoft and Mark Zuckerberg-led Meta have failed to produce satisfactory chips even after years of effort.

Meanwhile, the recent rise of DeepSeek, whose models were built at a fraction of the cost and computing power of ChatGPT and other Western AI chatbots, suggests that fewer chips may be needed to develop powerful large language models in the future.


