Broadcom Secures Long-Term Deal to Build Google’s Next Wave of Custom AI Chips

Broadcom has signed a long-term agreement with Google to develop and supply future generations of the company’s custom artificial intelligence chips, extending a critical partnership at a time when large tech firms are racing to build alternatives to Nvidia’s dominant AI hardware. The deal reportedly runs through 2031 and covers not only Google’s custom AI chips but also other components for the company’s next-generation AI racks. The announcement underscores how central custom silicon has become in the competition to lower costs, improve performance, and gain more control over AI infrastructure.

The core of the agreement is Google’s continued investment in its tensor processing units, or TPUs. These custom chips are designed specifically for AI workloads and have become increasingly important as demand for generative AI services grows. Interest in custom chips has surged because many companies want alternatives to Nvidia’s graphics processors, which remain the market leader but are also expensive and often supply-constrained. By locking in Broadcom as a long-term development and supply partner, Google is signaling that it sees proprietary chip design as a major strategic advantage rather than a side project. 

The move also fits a broader trend already visible at Google. The company has been pushing to make its TPUs a viable alternative to Nvidia GPUs, and that effort has become more important as investors demand proof that heavy AI spending can translate into real business growth. TPU sales are now a crucial engine of Google Cloud revenue, which means this chip partnership is not just about internal technology: it is also about strengthening Google’s commercial cloud offering by giving customers access to a different kind of AI computing stack.

Broadcom announced a second major arrangement at the same time, this one involving Anthropic. Broadcom reportedly signed a deal to provide the AI startup with access to about 3.5 gigawatts of AI computing capacity, drawing on Google’s AI processors beginning in 2027. Anthropic said the agreement builds on its commitment to invest $50 billion in strengthening U.S. computing infrastructure. The company also said demand for its Claude model has accelerated sharply in 2026, with run-rate revenue now exceeding $30 billion, up from about $9 billion at the end of 2025. Those figures suggest the new infrastructure deal is tied to a rapid increase in usage and commercial scale.

Anthropic trains and runs Claude on a mix of AI hardware, including Amazon Web Services’ Trainium chips, Google TPUs, and Nvidia GPUs, while Amazon remains its primary cloud provider and training partner. That detail matters because it shows the AI infrastructure market is diversifying: instead of relying exclusively on Nvidia, leading AI companies are increasingly spreading workloads across multiple chip types and cloud relationships depending on cost, availability, and performance.

For Broadcom, the Google agreement is a strong validation of its position in the custom chip market. The company has become one of the main enablers of large technology firms that want to design their own silicon rather than depend entirely on standard processors sold by outside vendors. Shares of Broadcom rose about 3% in extended trading after the news, indicating investors saw the agreement as a meaningful long-term win. Financial terms were not disclosed, but the duration of the deal alone suggests a deep strategic relationship rather than a short-term supply arrangement. 

Overall, the AI race is moving beyond software models and into the architecture of the underlying hardware. Google wants more control over the engines powering its AI products, Broadcom is becoming a critical builder of that custom infrastructure, and customers like Anthropic are seeking vast computing capacity across several platforms. Together, these deals highlight a new phase of AI competition in which owning or shaping the chip stack may be just as important as building the models that run on top of it.
