OpenAI has clarified that, despite early trials, it has no current plans to deploy Google's Tensor Processing Units (TPUs) at scale for its AI services. The clarification follows reports suggesting the company was exploring TPUs to support its growing compute needs.
A company spokesperson said that OpenAI is in early testing with some of Google's chips but has no active plans to deploy them broadly. For now, the company continues to rely primarily on Nvidia GPUs, supplemented by AMD's AI chips, to power its models.
OpenAI is also progressing with development of its own custom AI chip, targeting tape-out by the end of 2025 — the milestone at which a chip design is finalized and sent to the fab, signaling readiness for eventual production.
OpenAI had previously begun renting Google Cloud TPUs to meet its computing demands, a strategic test of non-Nvidia hardware. It appears, however, that the company is not ready to scale that architecture widely.
Why it matters:
By continuing with established technology from Nvidia and AMD while developing its own chip, OpenAI retains independence and cost control. The TPU tests signal a preference for flexibility and supply diversification as AI compute demands continue to rise.