OpenAI’s Bold Move: Building Custom AI Chips with Broadcom and What It Means for the Future
Oct 14, 2025
The AI landscape is evolving rapidly, and OpenAI has made one of its most strategic moves yet: partnering with Broadcom to build its own custom AI chips. This bold step marks a significant shift toward greater control over AI hardware infrastructure and reduced reliance on existing chip giants like NVIDIA.
OpenAI’s Collaboration with Broadcom: An Industry Game-Changer
OpenAI and Broadcom recently announced a strategic collaboration to deploy 10 gigawatts of custom AI accelerators and networking systems. This partnership, quietly in development for 18 months, aims to tailor hardware specifically for OpenAI’s workloads, with deployment starting in late 2026 and expected completion by 2029. OpenAI’s own AI models contributed to designing these chips, optimizing components far faster than human engineers alone could have achieved — a testament to AI’s accelerating role in its own development.
By investing in its own chip architecture, OpenAI is not just designing processors but building an entire system optimized at scale. This move helps OpenAI mitigate its dependence on NVIDIA, which currently holds over 90% of the AI chip market. Introducing competitors like Broadcom and AMD fosters industry resilience and healthier competition.
Scaling AI Compute — From Megawatts to Gigawatts
To put the scale into perspective: OpenAI started with a modest 2-megawatt cluster (roughly the power draw of a large Costco) and scaled to 2 gigawatts earlier this year, supporting around 800 million weekly users. With its recent partnerships, it aims to approach nearly 30 gigawatts, comparable to the entire electricity consumption of Australia.
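The jump from megawatts to gigawatts is easier to grasp as raw multiples. A quick back-of-the-envelope sketch, using only the figures cited in this article (not official capacity numbers):

```python
# Back-of-the-envelope scaling of OpenAI's reported compute footprint,
# using the power figures mentioned in this article (in watts).
MW = 1e6  # megawatt
GW = 1e9  # gigawatt

initial = 2 * MW   # first cluster (roughly a large Costco)
current = 2 * GW   # reported scale earlier this year
planned = 30 * GW  # target across recent partnerships

print(f"Current vs. initial: {current / initial:,.0f}x")  # 1,000x
print(f"Planned vs. current: {planned / current:,.0f}x")  # 15x
print(f"Planned vs. initial: {planned / initial:,.0f}x")  # 15,000x
```

In other words, the planned build-out is a 15,000-fold increase in power over the original cluster, which is why custom silicon and efficiency gains matter so much at this scale.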
This explosive growth demands not only vast hardware resources but also innovations in efficiency and cost management. Custom chips designed specifically for OpenAI’s models offer the potential to lower operational costs and scale more sustainably.
The Broader AI Ecosystem: What This Means for Businesses
The AI chip race exemplifies how companies are moving towards integrated hardware-software solutions, enhancing performance while reducing risks associated with supply chains and vendor lock-in. Businesses looking to harness AI’s potential should closely watch these hardware innovations because:
- Custom AI accelerators can drastically reduce latency and increase throughput for AI applications.
- Competitive chip markets may lower costs and improve accessibility over time.
- Innovations will enable more advanced AI models and use cases, prompting enterprises to rethink AI deployment strategies.
Gemini 3.0 and Nanochat: AI Advancements Reflect Hardware Progress
While OpenAI secures its hardware future, other AI innovations continue to emerge. Google's anticipated Gemini 3.0 release, rumored for October 22, promises smarter and faster AI capabilities, including impressive demonstrations like generating a functional, web-based macOS-style interface in a single HTML file, a feat that speaks volumes about AI's creative potential.
On another front, AI expert Andrej Karpathy released "nanochat," a code repository that lets users train their own ChatGPT-style model for roughly $100. Though the $100 version produces rudimentary outputs, scaling the investment to around $1,000 yields a noticeably more capable model, offering a surprisingly accessible entry point for developers and businesses experimenting with AI.
NVIDIA DGX Spark: Democratizing AI Power
NVIDIA continues to expand AI accessibility by shipping DGX Spark desktop systems, compact platforms that deliver roughly a petaflop of AI performance to individual developers. This highlights a market trend toward distributing serious AI compute beyond centralized hyperscale clouds.
How Leida Helps You Navigate AI’s Rapid Evolution
At Leida, we understand how fast AI technologies evolve and the challenges businesses face in integrating these tools effectively. Whether it's leveraging optimized AI models or planning for the right hardware infrastructure, Leida's AI advisory and implementation services help you unlock operational efficiencies and drive innovation.
Closing Thoughts
OpenAI’s commitment to building its own chips signals a new phase where AI companies demand control over their hardware to optimize performance, scale efficiently, and reduce risk. For businesses, staying informed about these developments is critical to making smarter AI adoption decisions.
If you want to find out where your business could be saving time and cutting costs, let’s talk — book a call today.