Broadcom Reports $8.4 Billion AI Revenue as Trump Preps Global Export Lockdown on Nvidia and AMD
AI Mar 6, 2026 · 7 min read


Broadcom just posted a 106% surge in AI infrastructure revenue to $8.4 billion, confirming OpenAI as its sixth custom chip customer, while the Trump administration drafts sweeping export controls that would require government approval for nearly all global AI accelerator shipments—including clusters as small as 1,000 Nvidia GB300 GPUs.

Sources: FinancialContent, Bloomberg, ScienceDaily

The AI hardware landscape is fracturing along two parallel fault lines this week: Broadcom Inc. has emerged as the new kingmaker of custom silicon, while the Trump administration is quietly preparing the most aggressive export control regime in semiconductor history—one that would give Washington veto power over virtually every major AI data center built anywhere on Earth.

On March 4, 2026, Broadcom reported fiscal first-quarter earnings that redrew the power map of artificial intelligence. The San Jose chipmaker posted total revenue of $19.31 billion, up 29.5% year-over-year, beating Wall Street's $19.21 billion consensus. Adjusted earnings per share hit $2.05, topping the $2.03 estimate. But the number that electrified investors was AI infrastructure revenue: a record $8.4 billion, representing a 106% surge compared to the prior year, according to FinancialContent.
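The reported growth rates imply prior-year figures that are easy to back out. A quick back-of-the-envelope sketch (the `prior_year` helper is illustrative, not from any source; figures are the rounded values reported above):

```python
def prior_year(current: float, growth_pct: float) -> float:
    """Back out the prior-year figure implied by a reported YoY growth rate."""
    return current / (1 + growth_pct / 100)

total_q1 = 19.31   # $B, fiscal Q1 total revenue
ai_q1 = 8.4        # $B, AI infrastructure revenue

prior_total = prior_year(total_q1, 29.5)   # ~14.9 $B a year earlier
prior_ai = prior_year(ai_q1, 106.0)        # ~4.1 $B a year earlier
ai_share = ai_q1 / total_q1 * 100          # ~43.5%, consistent with the ~44% share cited later
```

The implied prior-year AI base of roughly $4.1 billion underscores how fast the custom-silicon business has compounded in a single year.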

The headline revelation came during the earnings call, when CEO Hock Tan confirmed what had been Silicon Valley's worst-kept secret: OpenAI is now Broadcom's sixth major custom silicon customer. The two companies are co-developing a custom AI inference engine in what Tan described as a $10 billion-plus venture, with mass production expected by late 2026. Broadcom is also ramping production of Alphabet's seventh-generation TPU, the v7p "Ironwood," and has begun selling fully assembled "Ironwood Racks" directly to AI firms—including a massive $21 billion order from Anthropic.

This is no longer a story about selling chips. Broadcom has positioned itself as the essential architect of the modern data center, the company that hyperscalers call when they want to stop paying Nvidia's premium. With AI-related revenue now accounting for 44% of its total business, Broadcom is proving that the road to generative AI dominance runs through custom silicon, not general-purpose GPUs. The company's networking segment saw a 60% year-over-year revenue jump, driven by the launch of the Tomahawk 6 switch, capable of 102.4 terabits per second—the hardware backbone for the "million-processor clusters" currently being planned by cloud giants.

Broadcom's guidance for the second quarter is exceptionally bullish: $22 billion in total revenue, with AI semiconductor revenue alone expected to reach $10.7 billion. The company has secured long-term supply for high-bandwidth memory and advanced packaging through 2028, suggesting this growth trajectory is structural, not cyclical. For Meta, Broadcom is shipping record volumes to support the company's goal of scaling to multiple gigawatts of compute capacity by 2027. The MTIA (Meta Training and Inference Accelerator) roadmap remains "alive and well," according to Tan.

But just as Broadcom is proving that the AI hardware market is diversifying beyond Nvidia's walled garden, the Trump administration is moving to centralize control over who gets to build that hardware—and where. According to Bloomberg, the U.S. government is drafting sweeping new export regulations that would require approval for nearly all global shipments of advanced AI accelerators made by American companies, including AMD and Nvidia. The rules would expand existing country-based restrictions into a worldwide licensing system, giving the Trump administration the power to approve or block large-scale AI infrastructure buildouts anywhere on the planet.

The newly proposed export regime establishes a tiered licensing system based on computing scale. Shipments involving up to 1,000 Nvidia GB300 GPUs would pass through a simplified review and may qualify for limited exemptions. Mid-scale deployments would require "preclearance before seeking export licenses" from the U.S. Department of Commerce. Large-scale AI clusters—those powered by 200,000 GB300 GPUs or equivalent, such as those currently deployed by AWS, Microsoft, Oracle, OpenAI, or xAI—would trigger direct intergovernmental arrangements. In these cases, approvals would depend on national-security assurances as well as commitments to invest in American AI infrastructure.
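The tiering amounts to a simple decision rule on cluster size. A minimal illustrative sketch, with the 1,000 and 200,000 GB300-equivalent cutoffs taken from the reported draft; the boundary between the small and mid-scale tiers is assumed, and the function itself is hypothetical, not part of any official rule text:

```python
def license_tier(gb300_equivalent_gpus: int) -> str:
    """Classify a proposed AI accelerator shipment under the reported draft tiers.

    Thresholds follow the draft as described in press reports; the exact
    mid-scale range has not been published, so anything between the two
    named cutoffs is assumed to fall in the preclearance tier.
    """
    if gb300_equivalent_gpus <= 1_000:
        return "simplified review (limited exemptions possible)"
    elif gb300_equivalent_gpus < 200_000:
        return "Commerce Department preclearance before export licensing"
    else:
        return "direct intergovernmental arrangement"

print(license_tier(800))       # small cluster
print(license_tier(50_000))    # mid-scale deployment
print(license_tier(200_000))   # hyperscale cluster
```

The practical point of the structure is that friction scales with ambition: small buyers face paperwork, while anyone building at hyperscale must negotiate with Washington directly.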

This is not a total export ban, but it is an extraordinarily powerful tool. If a UK- or France-based company wants to deploy a cluster of more than 200,000 GB300 GPUs, Washington can deny the necessary export licenses unless the host government and the companies involved meet its requirements. For companies like AMD and Nvidia, which ship their products globally, everything will depend on how quickly licenses are granted and what conditions are attached. Fast approvals with light restrictions would let most small projects move forward, albeit with more paperwork. For larger installations, tougher requirements would delay construction and make operating large data centers outside the U.S. considerably more complicated: building clusters abroad on the scale of those run by AWS, Google, Oracle, Microsoft, or xAI would face complications severe enough to undercut their economic feasibility.

The timing is not coincidental. The proposed policy is considerably stricter than the heavily criticized AI Diffusion Rule of the Biden era, and it arrives just as the industry enters what Broadcom calls the "million-XPU" era: data centers capable of housing over a million processors in a single unified fabric. The Trump administration is effectively asserting that the United States will control the global build-out of this infrastructure, or it will not happen at all.

Meanwhile, a separate breakthrough is quietly addressing one of the technical bottlenecks that has limited AI hardware progress. Researchers from Stanford University, Carnegie Mellon University, the University of Pennsylvania, and MIT, working with SkyWater Technology, have created a new multilayer computer chip that stacks memory and computing elements vertically, according to ScienceDaily. Unlike most of today's chips, which are flat and two-dimensional, this prototype is built to rise upward, with ultra-thin parts stacked like floors in a tall building and vertical wiring that moves data quickly. In early hardware tests, the 3D chip beat comparable 2D chips by roughly a factor of four, and simulations suggest up to twelve-fold improvements on real AI workloads, including those derived from Meta's open-source LLaMA model.

The significance of the Stanford-led work is not just performance. By demonstrating that monolithic 3D chips can be made in the United States—the entire process was carried out in a domestic commercial silicon foundry—the researchers argue it provides a blueprint for a new era of domestic hardware innovation where the most advanced chips can be designed and manufactured on U.S. soil. "Breakthroughs like this are of course about performance," said H.-S. Philip Wong, the Willard R. and Inez Kerr Bell Professor at Stanford. "But they're also about capability. If we can build advanced 3D chips, we can innovate faster, respond faster, and shape the future of AI hardware."

Taken together, these three developments—Broadcom's custom silicon dominance, the Trump administration's export control regime, and the Stanford 3D chip breakthrough—suggest that the AI hardware market is entering a new phase. The era of buying off-the-shelf GPUs from a single vendor is giving way to a world of custom silicon, vertical integration, and geopolitical leverage. Broadcom is proving that hyperscalers will pay billions to escape Nvidia's ecosystem. Trump's export controls are proving that Washington intends to weaponize that dependency. And the Stanford 3D chip is proving that the next generation of AI hardware may not come from the companies that dominated the last one.

For investors, the takeaway is clear: the AI trade is diversifying. Nvidia remains the undisputed king of general-purpose GPUs, but Broadcom is successfully capturing the "inference" and "customization" phases of the AI cycle, which many analysts believe will eventually outgrow the initial training phase. The company's ability to secure long-term supply and its deep-rooted partnerships with Google and Meta provide it with a supply chain moat that is difficult to breach. Marvell Technology, by contrast, finds itself in a challenging "second-place" position, fighting for a 20% share of the custom ASIC market with operating margins significantly lower than Broadcom's.

The main challenge for Broadcom in the near term will be managing the margin pressure associated with its shift toward "rack-scale" solutions. Selling fully integrated racks involves higher costs than selling individual chips, which could slightly compress the company's legendary 68% EBITDA margins. However, the sheer volume of these orders is expected to more than compensate for the tighter margins. Investors should also watch for the full integration of VMware, which Broadcom acquired in late 2023; the company is increasingly bundling its high-end software with its AI hardware to create a holistic "private cloud" offering for enterprise customers wary of the public cloud's costs.

As for the Trump administration's export controls, the policy is not yet finalized, but if enacted, it would represent a fundamental shift in how the global AI infrastructure develops. The United States would no longer be a passive supplier of chips but an active gatekeeper, deciding which countries and companies get to build the data centers that will power the next decade of artificial intelligence. Whether that strengthens American competitiveness or simply drives innovation offshore remains to be seen. But one thing is certain: the AI hardware market is no longer just about technology. It is about geopolitics, supply chains, and who gets to control the infrastructure that will define the 21st century.
