Introduction
The AI investment cycle is the largest infrastructure buildout in technology history, dwarfing the dot-com era's fiber-optic buildout, the mobile infrastructure wave of the 2010s, and the initial cloud computing migration. The five largest US hyperscalers (Amazon, Alphabet, Microsoft, Meta, and Oracle) have collectively committed to spending between $660 billion and $690 billion on capital expenditure in 2026, nearly doubling 2025 levels. According to UBS, global AI capex will reach $423 billion in 2025, $571 billion by 2026, and $1.3 trillion by 2030, growing at a 25% CAGR. Nvidia CEO Jensen Huang has estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade. Goldman Sachs projects total hyperscaler capex from 2025-2027 will reach $1.15 trillion, more than double the $477 billion spent from 2022-2024. This infrastructure spending is generating massive deal activity across every layer of the AI technology stack: semiconductors, data centers, power generation, cloud infrastructure, and AI-native software. For TMT investment bankers, understanding the AI investment cycle is essential because it is the single most important driver of deal flow across the entire TMT landscape.
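The UBS growth figures above are internally consistent; a short calculation using only the numbers cited recovers the stated ~25% CAGR:

```python
# UBS projections for global AI capex, in $B (figures from the text above)
capex_2025 = 423
capex_2030 = 1300
years = 5  # 2025 -> 2030

# Compound annual growth rate implied by the endpoints
cagr = (capex_2030 / capex_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 25%
```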
The Hyperscaler Infrastructure Layer
The scale of hyperscaler AI infrastructure investment is unprecedented in corporate history. Each of the five major hyperscalers has committed to spending levels that would have been unthinkable five years ago.
- Hyperscaler Capex Commitments (2026)
  - Amazon: Projected $200 billion in 2026 capex (up from $131 billion in 2025), making it the largest individual spender. Amazon's investment is focused on AWS data center expansion, custom AI chip development (Trainium, Inferentia), and global infrastructure buildout to support the growing demand for AI training and inference compute.
  - Alphabet/Google: Estimated $175-185 billion (up from $91 billion in 2025), directed toward Google Cloud infrastructure, TPU (Tensor Processing Unit) expansion, and data center construction to support Gemini and other foundation models.
  - Meta: $115-135 billion in planned spending, primarily on AI training infrastructure to support the next generation of Llama models and the company's AI-driven advertising platform.
  - Microsoft: Tracking toward $120 billion or more, focused on Azure expansion and the infrastructure required to deliver OpenAI's models at scale. Microsoft's capex has more than tripled since 2023, reflecting the depth of its AI partnership with OpenAI.
  - Oracle: Targeting approximately $50 billion, positioning Oracle Cloud Infrastructure (OCI) as a cost-effective alternative for AI workloads, with partnerships including a major cloud commitment from OpenAI.
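A quick arithmetic check: summing the low and high ends of these per-company commitments reproduces the $660-690 billion aggregate cited in the introduction.

```python
# Per-company 2026 capex commitments in $B, as (low, high) ranges from the text.
# Point estimates are represented as degenerate ranges.
commitments = {
    "Amazon": (200, 200),
    "Alphabet/Google": (175, 185),
    "Meta": (115, 135),
    "Microsoft": (120, 120),
    "Oracle": (50, 50),
}

low = sum(lo for lo, hi in commitments.values())
high = sum(hi for lo, hi in commitments.values())
print(f"Aggregate 2026 hyperscaler capex: ${low}B-${high}B")  # → $660B-$690B
```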
Roughly two-thirds of this aggregate spend (approximately $450 billion in 2026 alone) is directly tied to AI infrastructure: GPU servers, networking equipment, data center construction, and power infrastructure. The remainder covers traditional cloud and enterprise IT infrastructure. The five hyperscalers plan to add approximately $2 trillion in AI-related assets to their balance sheets by 2030, creating a fundamental structural shift in how these companies are financed and valued.
Data Center Buildout
The physical infrastructure required to support AI compute has transformed data centers from a niche real estate category into one of the most active M&A sectors in technology.
The data center buildout has created several distinct M&A categories. Hyperscalers acquire or build data centers directly (owning the real estate and infrastructure), often partnering with real estate developers and energy companies to co-develop large-scale "AI campuses" that combine compute, power generation, and cooling infrastructure on a single site. Colocation providers (Equinix, Digital Realty, CyrusOne) lease data center space and power to hyperscalers and enterprises, and several have pivoted their strategies to focus on AI-optimized facilities with the higher power density (50-100+ kW per rack, compared to 5-15 kW for traditional enterprise workloads) required for GPU clusters. Specialized AI infrastructure companies (neoclouds) build GPU-dense facilities optimized specifically for AI training and inference workloads. Each category represents a different investment thesis and attracts different buyers, from strategic acquirers to infrastructure-focused PE sponsors to sovereign wealth funds and infrastructure funds that treat data centers as long-lived real assets with contracted revenue streams.
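The rack-density gap translates directly into facility-scale power requirements. A rough illustration using the densities cited above (the rack count and specific kW values below are illustrative assumptions, not figures from any actual facility):

```python
# Hypothetical facility of 1,000 racks; per-rack power drawn from the
# density ranges cited in the text (50-100+ kW AI vs 5-15 kW enterprise).
racks = 1000
ai_rack_kw = 80          # assumed value within the 50-100+ kW AI range
enterprise_rack_kw = 10  # assumed value within the 5-15 kW enterprise range

ai_mw = racks * ai_rack_kw / 1000
enterprise_mw = racks * enterprise_rack_kw / 1000
print(f"1,000 AI racks: {ai_mw:.0f} MW vs 1,000 enterprise racks: {enterprise_mw:.0f} MW")
```

At these assumed densities, a single AI campus draws utility-scale power, which is why the power constraint discussed below has pulled technology M&A into the energy sector.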
The power constraint has created an entirely new category of TMT-adjacent M&A: technology companies acquiring energy assets. Microsoft signed a deal to restart Three Mile Island's Unit 1 nuclear reactor to supply power to its data centers. Amazon has acquired multiple data center campuses with dedicated nuclear power capacity. Google has signed agreements for small modular reactor (SMR) power supply. These transactions blur the line between technology and energy M&A and require TMT bankers to collaborate with utilities and power sector colleagues on deal structuring, regulatory approvals (nuclear facility restarts require NRC approval), and valuation of power purchase agreements (PPAs) that underpin the economics of AI data center operations.
The Neocloud Layer: Infrastructure Verticalization
A new category of AI infrastructure companies has emerged to fill the gap between hyperscaler cloud services and the specialized compute needs of AI model developers. These "neoclouds" (CoreWeave, Lambda, Crusoe, IREN, Nebius) build and operate GPU-dense data centers specifically optimized for AI training and inference, often at lower prices and with more flexible configurations than the major cloud providers.
The neocloud model represents a significant departure from the traditional cloud business, where hyperscalers built general-purpose infrastructure and customers ran diverse workloads. Neoclouds build purpose-specific infrastructure optimized for a single workload type (AI compute), secure long-term contracts with AI model developers, and use that revenue visibility to raise massive amounts of debt financing against future cash flows. CoreWeave is the largest neocloud, but it is not alone: Lambda provides GPU cloud computing focused on AI research and development, Crusoe builds data centers powered by stranded natural gas (reducing energy costs while addressing flared gas waste), IREN operates AI-focused data centers with a focus on sustainable energy, and Nebius (spun out of Yandex) builds AI infrastructure serving the European and Middle Eastern markets. Each neocloud has a different geographic focus, energy strategy, and customer base, but all share the same fundamental business model: aggregate GPU capacity, sell it on long-term contracts, and use those contracts to raise capital for further expansion.
The financial risk is substantial: CoreWeave's debt load has attracted scrutiny from investors who question whether its revenue from a concentrated customer base (with Microsoft, Meta, and OpenAI as primary clients) can sustain its capital structure through an AI investment cycle downturn. The neocloud model depends on sustained AI compute demand growth, and if that demand moderates (due to efficiency improvements in model training, shifts to inference workloads that require less compute per dollar, or a broader AI investment pullback), neoclouds could face the same capital structure crisis that overleveraged telecom companies experienced in 2001-2002.
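The capital-structure risk described above can be framed as a debt-service coverage question: contracted revenue must cover interest on the debt raised against it, with a cushion for demand shocks. A minimal sketch, where every figure is a hypothetical assumption for illustration (not CoreWeave's or any company's actual financials):

```python
# Hypothetical neocloud debt-service coverage sketch. All inputs are
# illustrative assumptions, not real company financials.
contracted_revenue = 2.0  # $B/yr from long-term GPU contracts
operating_margin = 0.50   # assumed cash margin on contracted revenue
debt = 8.0                # $B of infrastructure debt
interest_rate = 0.10      # assumed blended cost of debt

cash_flow = contracted_revenue * operating_margin  # $1.0B/yr
debt_service = debt * interest_rate                # $0.8B/yr
dscr = cash_flow / debt_service
print(f"Debt-service coverage ratio: {dscr:.2f}x")  # 1.25x under these assumptions
```

At a 1.25x coverage ratio under these assumptions, even a modest decline in contracted revenue or pricing would push the structure toward distress, which is the mechanism behind the 2001-2002 telecom parallel.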
AI-Driven M&A Activity
The AI investment cycle is generating deal activity across every layer of the technology stack, from chips to applications.
The Financing Gap
The AI buildout requires front-loaded investment for compute, data centers, and energy infrastructure, while the revenue from that investment materializes later, creating a financing gap that is reshaping how technology companies access capital markets.
For TMT investment bankers, the AI financing gap creates advisory opportunities across multiple product groups. In debt capital markets, banks are helping hyperscalers and neoclouds raise infrastructure financing through investment-grade bonds, leveraged loans, and structured project finance. CoreWeave's IPO in March 2025 (raising approximately $1.5 billion at a $23 billion valuation) demonstrated the equity capital markets appetite for AI infrastructure exposure, though the stock's subsequent volatility reflects investor uncertainty about the sustainability of the neocloud business model. In restructuring and special situations, the risk of an AI infrastructure correction creates potential advisory mandates: if compute demand growth moderates or the AI revenue thesis takes longer to materialize than expected, overleveraged infrastructure companies will need to restructure their balance sheets, creating work for restructuring advisors. The parallels to the telecom buildout of the late 1990s are instructive: that cycle produced both enormous wealth creation (companies like Cisco and Juniper) and spectacular bankruptcies (WorldCom, Global Crossing, Adelphia), and the current AI cycle may follow a similar pattern where the best-positioned companies capture disproportionate value while overleveraged, poorly differentiated infrastructure players face existential financial pressure.
The debate over whether the current AI investment cycle represents a sustainable technology transition or a speculative bubble is itself relevant to TMT banking. Bulls argue that AI infrastructure investment is justified by the transformative potential of the technology (comparable to electrification or the internet), the demonstrated willingness of enterprises to pay for AI capabilities, and the structural shift toward inference workloads that will sustain compute demand even as training efficiency improves. Bears point to the concentration of AI revenue in a small number of foundation model companies (with much of the revenue being circular, flowing between hyperscalers and model companies in interconnected partnerships), the historical pattern of infrastructure overbuild during technology transitions, and the possibility that efficiency breakthroughs (such as DeepSeek's demonstration of training competitive models at a fraction of the cost of leading US models) could significantly reduce the compute requirements and undermine the investment thesis for massive infrastructure buildout. For TMT bankers, maintaining a balanced perspective on this debate is essential: advisory credibility requires understanding both the bull and bear cases and advising clients on deal structuring, valuation, and risk management that accounts for multiple scenarios.
Investment Banking Implications
The AI investment cycle affects TMT banking across every coverage area and product group.
In semiconductor coverage, the shift from general-purpose processors to AI-specific accelerators is driving M&A as chipmakers acquire specialized capabilities (Nvidia/Groq, AMD's AI acquisitions) and customers develop custom silicon (Google TPUs, Amazon Trainium, Microsoft Maia). The semiconductor valuation framework must account for the AI premium that the market assigns to companies positioned in AI compute.
In software coverage, AI is both a growth driver (companies embedding AI to justify premium pricing and accelerate growth) and a threat (AI-native competitors disrupting established software categories). Software M&A is increasingly motivated by AI capabilities: acquirers pay premiums for companies with proprietary training data, AI-native product architectures, and AI engineering teams. The valuation framework for AI-native software companies differs substantially from traditional SaaS: investors are assigning revenue multiples of roughly 60x to OpenAI ($840 billion valuation on $14.2 billion in revenue) and over 150x to Anthropic ($380 billion valuation on $2.5 billion annualized revenue), compared with 6-7x for mature SaaS companies. These elevated multiples reflect the market's belief that AI companies are building category-defining platforms, but they also create advisory complexity: TMT bankers must help clients distinguish between genuine AI differentiation (proprietary models, unique training data, defensible distribution) and "AI washing" (superficial AI features layered on top of traditional software that do not justify premium multiples).
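The implied multiples can be computed directly from the valuation and revenue figures cited above:

```python
# Implied revenue multiples from the figures cited in the text.
# The "mature SaaS" entry is a normalized benchmark for the ~6-7x comparison,
# not a specific company.
companies = {
    "OpenAI": (840, 14.2),              # valuation $B, revenue $B
    "Anthropic": (380, 2.5),            # valuation $B, annualized revenue $B
    "Mature SaaS benchmark": (6.5, 1.0),
}

for name, (valuation, revenue) in companies.items():
    print(f"{name}: {valuation / revenue:.1f}x revenue")
```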
In media coverage, generative AI is disrupting content creation workflows, advertising targeting, and audience engagement models. AI-powered advertising optimization (which Meta credits with driving over $20 billion in incremental ad revenue) is reshaping the economics of digital media companies. Media M&A advisory must assess how AI affects the target's competitive position, content economics, and long-term revenue growth trajectory.
In telecom coverage, the data center power demand from AI is creating opportunities for telecom-adjacent infrastructure plays, including fiber networks connecting data center clusters, edge computing facilities, and tower companies positioned near data center hubs.
The geographic dimension of the AI buildout also creates cross-border advisory opportunities. While the US accounts for the majority of AI infrastructure investment (driven by the concentration of hyperscalers and AI model companies in the US), significant buildout is occurring in Europe (where data sovereignty requirements create demand for local AI infrastructure), the Middle East (where sovereign wealth funds in Saudi Arabia and the UAE are investing billions in AI data centers), and Asia (where Japan, South Korea, and India are investing in domestic AI compute capacity). The CHIPS Act and its international equivalents are adding government subsidies to the mix, creating opportunities for TMT bankers to advise on public-private partnerships, government-backed financing structures, and cross-border joint ventures between technology companies and sovereign investors.


