OpenAI Clarifies Position on Google TPUs

OpenAI has clarified that it has no immediate plans to adopt Google’s in-house tensor processing units (TPUs) at scale, despite reports suggesting otherwise. A spokesperson confirmed that while early testing with Google’s TPUs is underway, there is no commitment to deploying them broadly at this time.

Google Cloud partnership sparked speculation

The clarification follows recent Reuters reporting that OpenAI had signed up for Google Cloud services to meet surging compute requirements. That deal prompted speculation that OpenAI might be shifting toward Google's AI chip ecosystem. However, the company reaffirmed its reliance on Nvidia GPUs and AMD AI chips, which continue to serve as the backbone of its infrastructure.

Custom chip efforts remain on track

In parallel, OpenAI is developing its own AI chip. The project is on track to reach the "tape-out" milestone this year, the point at which a chip's design is finalized and sent to a fabrication plant for manufacturing. This in-house effort reflects OpenAI's long-term ambition to diversify its hardware ecosystem while maintaining performance and efficiency at scale.

Hardware diversification remains common, but scaling is complex

Testing multiple chip options is standard practice across the AI industry, but deploying a new chip at scale requires changes to system architecture and software integration, shifts that are resource-intensive and can take considerable time to implement. For now, most of OpenAI's compute runs on GPU servers from CoreWeave, a so-called neocloud provider.

Meanwhile, Google has started offering its TPUs to more external partners, winning clients like Apple, Anthropic, and Safe Superintelligence. Despite this expansion, OpenAI remains focused on its current hardware stack.

(Credit: Reuters)
