The Rise of Custom AI Chips Is Breaking Nvidia’s Grip
Nvidia (NVDA) built a near-monopoly on AI compute.
For the past three years, every major AI company — Alphabet (GOOGL), Meta (META), Amazon (AMZN), Microsoft (MSFT) — has relied on its GPUs to train and run models at scale.
That worked — until the economics changed.
Today, the real cost isn’t training. It’s inference — the billions of times those models are used every day.
At that scale, even small inefficiencies become massive, recurring expenses.
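A quick back-of-envelope sketch shows how that scaling works. All of the figures below — query volume, per-query cost, and the size of the efficiency edge — are hypothetical illustrations, not reported numbers:

```python
# Back-of-envelope sketch: why a small per-query inefficiency becomes a
# massive recurring expense at inference scale.
# All figures are hypothetical illustrations, not reported numbers.

QUERIES_PER_DAY = 1_000_000_000   # assumed: a large consumer AI service
COST_PER_1000_QUERIES = 2         # assumed: $2 of compute per 1,000 queries
EFFICIENCY_GAIN_PCT = 30          # assumed: cost edge from custom silicon

daily_cost = QUERIES_PER_DAY // 1000 * COST_PER_1000_QUERIES
annual_cost = daily_cost * 365
annual_savings = annual_cost * EFFICIENCY_GAIN_PCT // 100

print(f"Annual inference bill: ${annual_cost:,}")    # $730,000,000
print(f"Savings at a 30% edge: ${annual_savings:,}") # $219,000,000
```

Even under these modest assumptions, a fraction-of-a-cent difference per query compounds into nine figures a year — which is why hyperscalers are willing to fund multi-year custom chip programs.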
So Big Tech isn’t just buying chips anymore.
They’re building their own.
Why Custom AI Chips Are Replacing Nvidia GPUs
Nvidia’s GPUs are general-purpose chips. They’re powerful and flexible — they can train AI models, run video games, render 3D animations, and simulate physics. That versatility made them the backbone of the AI boom.
But versatility has a cost. A chip designed to do everything isn’t optimized for any one task.
As AI has scaled into mass consumer and enterprise adoption, inference has become the dominant — and fastest-growing — compute cost in the entire industry.
That’s the opportunity that custom chips — Application-Specific Integrated Circuits (ASICs), or XPUs — are built to capture. Instead of doing everything, these chips are built for a single task. Less flexible, yes. But the payoff is better performance per watt and significantly lower operating costs at the scale the hyperscalers operate.
Broadcom (AVGO) and Marvell (MRVL) are the two leading designers of these custom chips — what we’ve previously called “The Builders” in the custom silicon investment stack. They sit in the middle of the value chain: Big Tech brings the specifications, Broadcom and Marvell do the engineering, and Taiwan Semiconductor (TSM) manufactures the final product.
And in just the last two weeks, the news coming out of those Builders has been staggering — the kind of deal flow that, in our view, Wall Street hasn’t fully woken up to yet…
Inside the Avalanche of Recent Custom Chip Deals
The drumbeat of recent announcements is telling a very clear story.
Let’s walk through what happened.
Broadcom Secures Google Through 2031
On April 6, Broadcom filed an 8-K with the SEC — the kind of routine regulatory document most people skip past. But this one contained something remarkable: a new five-year agreement with Google to develop and supply custom AI chips for future generations of its Tensor Processing Units (TPUs) — custom chips optimized for AI workloads — through 2031.
This isn’t a new relationship. Broadcom and Google have been co-designing TPUs since 2016. But extending the partnership through 2031 is a strong statement of intent.
Google is building its entire next-generation AI infrastructure stack around custom silicon, and it’s telling the world it plans to do so through the end of the decade, at least.
Anthropic Bets on TPU-Based Infrastructure
Buried in that same announcement was a second important revelation: Broadcom has struck a separate deal giving Anthropic — the maker of Claude and one of the hottest AI startups in the world — access to approximately 3.5 gigawatts of TPU-based computing capacity, with delivery starting in 2027.
Just months ago, Anthropic was consuming around 1 gigawatt of compute. The new commitment more than triples that footprint. And the reason Anthropic needs so much more power? Its growth has been parabolic — annualized revenue reportedly crossed $30 billion in 2026, up from roughly $9 billion at the end of 2025.
Anthropic has committed to investing $50 billion in U.S. computing infrastructure. The decision to build that infrastructure on Google’s custom TPUs rather than Nvidia’s GPUs is a significant strategic bet — one that validates the entire custom silicon thesis.
Meta Broadens Its Custom Silicon Strategy
On April 15, Meta and Broadcom jointly announced an extension of their existing partnership through 2029, with Meta committing to using Broadcom’s technology for future generations of Meta’s Training and Inference Accelerator (MTIA) chips — the company’s proprietary custom AI processor.
Meta’s commitment here is not small. The company has already paid Broadcom $2.3 billion for AI chip design and related services in just the past year. And with Meta planning to spend up to $135 billion on AI infrastructure in 2026 alone, custom silicon from Broadcom is a central pillar of that strategy.
Meta’s reasoning for going custom mirrors that of every other hyperscaler: cost control (Nvidia chips are expensive), performance optimization (custom ASICs outperform GPUs on targeted workloads), and supply chain independence. When the world’s largest social media company is building a competitive moat in AI, it doesn’t want that moat to be controlled by someone else’s chip.
OpenAI Begins Moving Beyond Nvidia
Perhaps the most symbolically powerful development is this: OpenAI — the company that essentially launched the modern AI era and has been almost entirely dependent on Nvidia — is now developing its first custom AI chip with Broadcom, targeting deployment in 2027 with over 1 gigawatt of compute capacity.
The company that made Nvidia the most important chipmaker in the world is now building its own silicon to reduce its dependence on Nvidia.
Google Expands to Marvell for Next-Gen Chips
Then, on April 19, came another flashing signal. The Information reported that Google is in talks with Marvell to develop not one but two new AI chips: a Memory Processing Unit designed to work alongside existing TPUs and a brand-new TPU built specifically for inference.
The talks came just days after the Broadcom deal was announced — meaning Google isn’t content with one world-class custom silicon partner. It’s diversifying its design relationships, building redundancy, and optimizing for the emerging inference economy.
Marvell’s stock jumped on the news and reached a fresh all-time high of $151.44. The market understood the implication immediately: Marvell is on the verge of landing one of the most significant chip design relationships in the industry.
How to Invest in the Custom AI Chip Boom
In our view, the custom silicon revolution is a structural shift with significant room to run. The deals announced in the past two weeks merely represent the first few chapters of a novel-length book.
Here’s our framework for playing it.
The Highest-Conviction Names: Broadcom and Marvell
According to TrendForce, custom AI chip sales are projected to grow 45% in 2026. GPU shipments? 16%. By 2033, the custom ASIC market is expected to hit $118 billion. Broadcom is on track to capture roughly 60% of that market by 2027. Marvell is targeting 20% to 25%. Together, these two companies are positioning to divide the majority of a category growing nearly three times faster than the GPU market they’re displacing.
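To put rough numbers on that split: the sketch below applies the cited share targets to TrendForce’s projected market size. Treating 2027 share targets as if they held against the 2033 market figure is a simplifying assumption, so read these as orders of magnitude, not forecasts:

```python
# Implied revenue split if the cited market-share targets held against
# the projected $118B custom ASIC market. Shares and market size come
# from the article; applying 2027 share targets to a 2033 market size
# is a simplifying assumption for illustration only.

MARKET_SIZE_B = 118                              # projected custom ASIC market, $B
BROADCOM_SHARE = 0.60                            # cited Broadcom share target
MARVELL_SHARE_LOW, MARVELL_SHARE_HIGH = 0.20, 0.25  # cited Marvell target range

broadcom_b = MARKET_SIZE_B * BROADCOM_SHARE
marvell_low_b = MARKET_SIZE_B * MARVELL_SHARE_LOW
marvell_high_b = MARKET_SIZE_B * MARVELL_SHARE_HIGH

print(f"Broadcom: ${broadcom_b:.1f}B")                       # $70.8B
print(f"Marvell:  ${marvell_low_b:.1f}B to ${marvell_high_b:.1f}B")  # $23.6B to $29.5B
print(f"Combined: up to {BROADCOM_SHARE + MARVELL_SHARE_HIGH:.0%}")  # up to 85%
```

In other words, two design houses would be splitting up to 85% of a roughly $100-billion-plus category between them.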
Broadcom reported $8.4 billion in AI semiconductor revenue for Q1 FY2026, representing 106% year-over-year growth. The company had previously guided for AI revenue to reach $100 billion by 2027.
Let that sink in for a moment. Mizuho analysts estimate that the Anthropic deal alone could generate $21 billion in revenue for Broadcom in 2026 — and $42 billion in 2027. That’s just one client. Now multiply that logic across Google, Meta, and OpenAI. Suddenly, Broadcom’s $100 billion AI revenue target by 2027 looks increasingly conservative. And the client base underpinning it — spanning search, social media, the world’s hottest AI lab, and the biggest cloud platforms on earth — is not a moat any competitor can easily replicate.
Marvell, meanwhile, expects roughly 30% year-over-year revenue growth in fiscal 2027 — before accounting for any new Google deals. In December 2025, the company acquired Celestial AI for up to $5.5 billion, gaining photonic interconnect technology that could further solidify its position at the intersection of custom silicon and high-bandwidth connectivity. Interestingly, Nvidia also invested $2 billion in Marvell at the end of March — a tacit acknowledgment that even the GPU king wants a seat at the custom silicon table.
The Infrastructure Layer
Arm Holdings (ARM) – Many of the custom chips in this ecosystem are built on Arm’s instruction set architecture, with Amazon’s Graviton the clearest example, and each Arm-based design pays Arm a royalty. It’s like the toll road of the custom silicon revolution.
Synopsys (SNPS) — If Broadcom and Marvell are the architects, Synopsys makes the drafting software. Every custom chip starts as a design in an EDA tool, and Synopsys dominates that market.
Cadence Design Systems (CDNS) — Synopsys’ closest peer in EDA, with the same structural tailwind. More custom chip designs, more licenses for both.
The Manufacturing Backbone
Taiwan Semiconductor (TSM) — Every custom chip designed by Broadcom or Marvell is manufactured by TSMC. There is no custom silicon revolution without TSMC’s advanced fabs. Google’s TPU Ironwood, for example, is built on TSMC’s 3nm node.
The Bottom Line: Custom AI Chips Are the Next Phase of the AI Boom
The original thesis was simple: given enough resources and motivation, Big Tech would eventually stop buying compute and start owning it.
That shift is happening now. And the companies positioned at the center of that buildout — the Broadcoms and Marvells and ARMs and TSMs of the world — are looking at a combined addressable market north of $700 billion a year.
The last time the semiconductor industry shifted this decisively, it minted a generation of winners. Nvidia was one of them. The next chapter is being written right now — and the pen is in different hands.
The shift here isn’t just about custom silicon. It’s about what that silicon enables.
As inference becomes the dominant cost in AI, the companies that control the platforms — the ones answering billions of queries a day — gain the most leverage.
That’s where the next wave of value accrues.
And right now, one company sits at the center of that shift.
It isn’t public yet. But as AI moves from training models to deploying them at scale, its position only strengthens.
I’ve been looking for a way to get exposure before that becomes obvious…
And I’ve found one.

