
OpenAI Confirms It Won’t Scale Use of Google’s In-House Chips

OpenAI says no plans to deploy Google’s TPUs at scale, sticking with Nvidia/AMD and its own custom chip.


OpenAI has clarified that, despite initial trials, it does not currently plan to deploy Google's Tensor Processing Units (TPUs) at scale for its AI services. The announcement follows reports suggesting the company was exploring TPUs to support growing compute needs.

A company spokesperson said that OpenAI is in early testing with some of Google’s chips but has no active plans to roll them out broadly. For now, the company continues to rely primarily on Nvidia GPUs, supplemented by AMD’s AI chips, to power its models.

OpenAI is also progressing with development of its own custom AI chip, targeting a tape-out milestone by the end of 2025 as a step toward eventual production.

Previously, OpenAI began renting Google Cloud’s TPUs to meet computing demands, a notable trial of non‑Nvidia hardware. However, it appears the company is not ready to adopt that architecture at scale.

Why it matters:
By continuing with established technology from Nvidia and AMD while simultaneously developing its own chip, OpenAI maintains independence and cost control. The TPU tests signal a preference for flexibility and supply diversification as AI compute demands continue to rise.


© 2019-2024 IHYPERG.COM - All Rights Reserved.
