Introduction
AI-native companies, businesses built from the ground up around artificial intelligence capabilities rather than adding AI to existing products, represent the most important emerging sub-sector in TMT investment banking. The speed and scale of this market's growth are unprecedented: OpenAI surpassed $25 billion in annualized revenue, Anthropic recently passed $19 billion in run-rate revenue (growing at approximately 10x per year), and foundation model APIs accounted for $12.5 billion in spending in 2025. Gartner projects that 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. The AI agent market alone is expected to surge from $7.8 billion to over $52 billion by 2030. For TMT bankers, AI-native companies are generating the largest capital raises, the most complex M&A transactions, and the most challenging valuation questions in the sector.
AI-Native Business Model Categories
AI-native companies operate across several distinct business model categories, each with different economics, competitive dynamics, and valuation frameworks.
Foundation Model Providers
Companies like OpenAI, Anthropic, Google DeepMind, and Meta AI build large language models (LLMs) and other foundation models that serve as the intelligence layer for thousands of downstream applications. Revenue comes from API access (pay-per-token or pay-per-request pricing), consumer subscriptions (ChatGPT Plus, Claude Pro), and enterprise platform licenses. This is the highest-investment, highest-risk tier of the AI value chain: training a frontier model can cost hundreds of millions of dollars in GPU compute, and the competitive landscape is intensely concentrated.
- Foundation Model: A large-scale AI model trained on broad datasets that can be adapted to a wide range of downstream tasks. Foundation models (GPT-4, Claude, Gemini, Llama) serve as general-purpose intelligence platforms, similar to how operating systems serve as general-purpose computing platforms. Foundation model providers monetize through API pricing (charging per token of input and output processed), consumer subscriptions, and enterprise licensing agreements. The economics are characterized by massive upfront training costs (tens to hundreds of millions of dollars), ongoing inference costs (the compute required to run the model for each user request), and rapidly declining per-unit costs as hardware improves and optimization techniques advance.
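The per-token economics described above can be made concrete with a short sketch. All prices and costs below are hypothetical illustrations, not any provider's actual rates; the per-1M-token quoting convention is the only assumption carried over from real API pricing.

```python
# Illustrative per-token API economics. All dollar figures are
# hypothetical examples, not any provider's actual rates.

def request_economics(input_tokens, output_tokens,
                      price_in, price_out, cost_in, cost_out):
    """Revenue, inference cost, and gross margin for one API request.

    Prices and costs are quoted per 1M tokens, a common convention.
    """
    revenue = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
    cost = (input_tokens * cost_in + output_tokens * cost_out) / 1_000_000
    return {"revenue": revenue, "cost": cost,
            "gross_margin": (revenue - cost) / revenue}

# A 2,000-token prompt with a 500-token completion, priced at $3 (input)
# and $15 (output) per 1M tokens against $1 / $5 of inference cost:
r = request_economics(2_000, 500, 3.0, 15.0, 1.0, 5.0)
```

Even at these toy numbers, the pattern matters for analysis: output tokens are typically priced several times higher than input tokens, so a product's prompt/completion mix directly drives its gross margin.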
The foundation model layer exhibits strong scale economics: larger models trained on more data tend to produce better results, creating an advantage for companies with the capital to invest in frontier training runs. Microsoft, Amazon, and Alphabet each signaled capital expenditure plans of $80-100 billion in 2025, with a significant share directed toward AI infrastructure, reflecting the massive investment required to compete at the frontier.
The competitive dynamics of the foundation model layer resemble the early cloud computing market more than the SaaS market: a small number of well-capitalized players (OpenAI, Anthropic, Google, Meta, Amazon) are investing billions to build platforms that thousands of downstream companies will build upon. The analogy extends to business model evolution: just as AWS started by selling raw compute and storage, then added higher-level services (databases, analytics, machine learning tools), foundation model providers are moving from raw API access toward integrated platforms that include fine-tuning tools, agent frameworks, and deployment infrastructure. This platform evolution creates a moat that becomes deeper over time as developers build on the provider's ecosystem.
AI Application Layer
Companies that build applications on top of foundation models, using their APIs to deliver AI-powered products for specific use cases: coding assistance (GitHub Copilot), content generation (Jasper, Copy.ai), customer service automation, legal document analysis, medical imaging, and hundreds of other verticals. These companies are closer to the traditional SaaS model in that they sell subscriptions to end users, but their cost structure includes API fees paid to foundation model providers, which creates a dependency and margin structure different from traditional software.
AI Infrastructure and Tooling
Companies that provide the infrastructure layer: GPU cloud providers (CoreWeave, Lambda), MLOps platforms (Weights & Biases, Databricks), data labeling and preparation (Scale AI), model deployment and monitoring, and AI security. This layer has SaaS-like economics with the advantage that customers are locked in through data integration and workflow dependency, similar to vertical SaaS switching costs.
AI infrastructure companies are particularly attractive to TMT bankers because they benefit from the growth of the entire AI ecosystem regardless of which foundation models or applications win. Just as AWS profited from the growth of SaaS regardless of which SaaS companies succeeded, GPU cloud providers and MLOps platforms profit from AI adoption regardless of which AI companies capture end-user value. This "picks and shovels" positioning creates more predictable growth trajectories and reduces the competitive risk that application-layer companies face. CoreWeave's rapid growth (the company was valued at over $35 billion in early 2026) demonstrates the market's appetite for AI infrastructure companies. Databricks, valued at $62 billion, exemplifies how data and AI infrastructure platforms can achieve massive scale by serving as the data foundation layer that AI applications require.
AI Agents
The newest and fastest-growing category. AI agents are autonomous systems that can perform multi-step tasks (booking travel, conducting research, managing workflows, writing and deploying code) with minimal human intervention. The agentic AI market is projected to grow from $7.8 billion to over $52 billion by 2030. Agent-based business models are still emerging, but the dominant pricing approaches include per-task pricing (charging for each completed workflow), outcome-based pricing (charging based on the value delivered), and subscription tiers that include a certain volume of agent actions.
The agent business model represents a potential paradigm shift in software economics. Traditional SaaS charges for access to tools that humans use. Agent-based models charge for work that AI performs autonomously, which means the value proposition is closer to outsourcing or professional services than to software licensing. A legal AI agent that reviews contracts at one-tenth the cost of a junior associate is priced against human labor, not against other software products. This pricing dynamic could support much higher ARPU than traditional SaaS, but it also introduces new risks: if the agent makes an error that causes financial or legal harm, the liability implications are different from a software tool where the human user bears responsibility for outcomes.
The Economics of AI-Native Companies
AI-native companies have fundamentally different unit economics than traditional SaaS, and understanding these differences is essential for TMT analysts building financial models and valuation frameworks.
AI-native gross margins start lower than traditional SaaS margins because every user interaction incurs real inference compute cost, not the near-zero marginal cost of serving conventional software. However, the gross margin trajectory is rapidly improving, and this trajectory is the single most important variable in AI company financial analysis. AI inference costs fell 78% through 2025 for some providers, driven by hardware improvements (more efficient GPUs, custom AI chips), software optimization (better model compression, quantization, and caching techniques), and scale economics (higher utilization rates across larger GPU fleets). If cost-per-inference continues declining at this rate, AI companies operating at 50% gross margins today could reach 70%+ within a few years, assuming they maintain pricing power.
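The margin trajectory argument reduces to simple compounding: hold pricing constant and let unit inference cost decline at a fixed annual rate. The starting figures and the 40% decline rate below are illustrative assumptions, chosen only to show the shape of the curve.

```python
# Sketch of the gross-margin trajectory under declining inference costs.
# Starting margin and the annual cost-decline rate are assumptions.

def margin_trajectory(price, unit_cost, annual_cost_decline, years):
    """Gross margin in each year, starting from year 0, assuming
    constant pricing and a constant annual unit-cost decline."""
    margins = []
    for _ in range(years + 1):
        margins.append((price - unit_cost) / price)
        unit_cost *= (1 - annual_cost_decline)
    return margins

# A product at 50% gross margin today, with inference costs falling
# 40% per year, crosses 70% margin within two years:
path = margin_trajectory(price=1.0, unit_cost=0.5,
                         annual_cost_decline=0.40, years=3)
```

The asymmetry is the analytical point: costs compound downward while prices are sticky, so the margin gap to mature SaaS closes quickly if pricing power holds.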
The pricing landscape reflects these evolving economics. A 2025 industry report found that 92% of AI software companies use mixed pricing models, combining subscriptions with usage-based fees. Pure usage-based pricing (pay per token, per request, or per task) aligns costs with revenue but creates revenue volatility. Subscription pricing provides predictability but risks either undercharging heavy users or overcharging light users. Most enterprise AI deals use hybrid pricing: a base subscription that includes a certain usage volume, with overage charges for additional consumption.
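The hybrid structure is a simple formula: a flat base fee covering a usage allowance, plus per-unit overage beyond it. The dollar figures and unit definitions below are hypothetical.

```python
# Minimal sketch of hybrid enterprise AI pricing: base subscription
# covering an included usage volume, plus overage fees beyond it.
# All figures are hypothetical.

def monthly_bill(base_fee, included_units, overage_rate, units_used):
    """Base subscription plus per-unit charges beyond the included volume."""
    overage_units = max(0, units_used - included_units)
    return base_fee + overage_units * overage_rate

# $5,000/month including 10M tokens, $400 per additional 1M tokens:
light_user = monthly_bill(5_000, 10, 400, 8)   # under the allowance
heavy_user = monthly_bill(5_000, 10, 400, 14)  # 4M tokens of overage
```

For revenue modeling, the base fees behave like committed ARR while the overage line behaves like usage revenue, which is why hybrid deals require splitting the two streams rather than annualizing the blended total.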
Companies are also pursuing cost optimization strategies to improve margins within the current pricing environment. Custom fine-tuned models deliver 50-70% cost reduction at scale compared to using general-purpose APIs, though custom model development requires investment of $100,000-500,000+ for team, infrastructure, and training. Hybrid inference approaches, using API calls for complex queries (approximately 20% of requests) and locally deployed fine-tuned models for simple queries (approximately 80% of requests), enable immediate cost reduction without a full custom build. Caching strategies eliminate redundant inference for repeated queries. These optimization techniques are creating a new category of AI cost management tools that represent an investment opportunity within the AI infrastructure layer.
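The hybrid inference arithmetic above is worth making explicit: the blended per-request cost is just a traffic-weighted average of the expensive API path and the cheap local path. The per-request costs below are hypothetical assumptions.

```python
# Sketch of hybrid inference routing: an expensive API for the complex
# ~20% of requests, a cheap locally deployed fine-tuned model for the
# simple ~80%. Per-request costs are hypothetical.

def blended_cost(api_cost, local_cost, api_share):
    """Expected per-request cost across the routed traffic mix."""
    return api_share * api_cost + (1 - api_share) * local_cost

all_api = blended_cost(0.010, 0.001, 1.00)  # everything via the API
hybrid = blended_cost(0.010, 0.001, 0.20)   # 80/20 routing
savings = 1 - hybrid / all_api              # fraction of cost eliminated
```

Under these assumptions the 80/20 split removes roughly three-quarters of inference spend, which is why routing logic itself has become a product category within the AI cost management layer.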
Valuing AI-Native Companies
AI-native company valuation presents novel challenges for TMT bankers because many of the standard SaaS valuation frameworks do not directly apply.
Revenue quality differs from traditional SaaS. AI company revenue is often usage-based rather than contractually recurring, which means traditional ARR metrics may significantly overstate revenue predictability and durability. An AI company reporting $100 million in ARR based on annualizing the most recent month's usage-based revenue is presenting a less predictable figure than a SaaS company with $100 million in contractually committed annual subscriptions.
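The annualization problem can be shown in a few lines. The monthly figures below are hypothetical, but the mechanic is the standard one: usage-based "ARR" is the latest month multiplied by 12, so it inherits all of that month's noise.

```python
# Sketch of why annualized usage revenue is noisier than committed ARR.
# Monthly revenue figures (in $M) are hypothetical.

def run_rate_arr(latest_month_revenue):
    """'ARR' computed by annualizing a single month of usage revenue."""
    return latest_month_revenue * 12

# Three consecutive months of usage revenue for the same company:
monthly_usage = [7.0, 9.5, 8.3]
implied_arr = [run_rate_arr(m) for m in monthly_usage]
# The implied "ARR" swings between $84M and $114M depending on which
# month is annualized; contractually committed ARR would not move.
```

In diligence, this is why analysts ask for the full monthly revenue series and the committed-versus-usage split rather than accepting a single headline ARR figure.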
Gross margin trajectory matters more than current gross margin. Because inference costs are declining rapidly, an AI company at 50% gross margins today may be more valuable than its current margins suggest if cost trends continue. TMT analysts must model the gross margin trajectory, not just the current snapshot, when building DCF models for AI companies.
Competitive moats are harder to assess. Traditional SaaS companies build moats through data integration, workflow dependency, and switching costs. AI companies' moats are more fragile: a new, more capable model from a competitor can make an existing product obsolete faster than in traditional software. The most defensible AI companies are those with proprietary training data, deep customer integration, or unique access to domain-specific information that cannot be replicated. The median AI company valuation sits in the mid-20s EV/Revenue range, but there is significant dispersion: companies with demonstrable moats (proprietary data, deep customer integration) command premiums, while "thin wrapper" application companies that merely reskin foundation model APIs trade at discounts that reflect their vulnerability to commoditization.
Capital requirements create a distinct financial profile. AI-native companies require far more capital than traditional SaaS businesses at equivalent revenue scales. A SaaS company generating $100 million in ARR might have raised $50-100 million in total capital. An AI company at the same revenue level might have raised $500 million to $1 billion+, reflecting the compute costs of model training and the infrastructure required for inference at scale. This capital intensity means AI companies face dilution pressure that traditional SaaS companies do not, and TMT bankers advising on AI capital raises must help companies balance growth funding against equity dilution.
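The dilution pressure described above compounds multiplicatively: each round sells a fraction of the post-money company, so more rounds (or larger ones) erode founder and early-investor ownership geometrically. The round counts and percentages below are illustrative assumptions, not a model of any specific company.

```python
# Sketch of compounding dilution across financing rounds.
# Round sizes are hypothetical.

def stake_after_rounds(starting_stake, fraction_sold_per_round):
    """Remaining ownership after successive rounds, where each round
    sells the given fraction of the post-money company."""
    stake = starting_stake
    for sold in fraction_sold_per_round:
        stake *= (1 - sold)
    return stake

# A capital-light SaaS path (three rounds, 20% each) versus a
# compute-heavy AI path (five rounds, 20% each), from a 100% stake:
saas_stake = stake_after_rounds(1.0, [0.20] * 3)
ai_stake = stake_after_rounds(1.0, [0.20] * 5)
```

Two extra rounds at the same 20% dilution leave the original holders with roughly a third less of the company, which is the arithmetic behind the advisory tension between growth funding and equity dilution.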
AI M&A and Capital Markets Activity
AI-native companies are driving the largest and most complex transactions in TMT. Alphabet's $32 billion acquisition of Wiz, Meta's $14.3 billion investment in Scale AI, and SoftBank's $40 billion investment in OpenAI illustrate the scale of capital flowing into the AI sector. These transactions often involve creative structures (minority investments, nonvoting stakes, acqui-hires) designed to achieve strategic access to AI capabilities while navigating antitrust constraints.
For TMT bankers, AI transactions require a combination of technology understanding (what the model does, how it competes, what its limitations are), financial analysis (unit economics, gross margin trajectory, capital requirements), and regulatory awareness (antitrust considerations for Big Tech acquirers, data privacy regulations, AI-specific regulations emerging in the EU and elsewhere). The rapid evolution of AI technology means that valuations and competitive positions can shift faster than in any other TMT sub-sector, creating both significant advisory opportunity and analytical challenge.
The capital markets activity around AI companies is equally significant. Private funding rounds for AI companies have reached unprecedented scale: OpenAI raised $6.6 billion at a $157 billion valuation, Anthropic has raised billions from Amazon, Google, and other investors, and xAI (Elon Musk's AI venture) raised $6 billion in a single round. These massive private rounds have pushed AI company IPOs further into the future, as companies can access sufficient capital without going public. When AI companies do eventually access public markets, they will represent some of the largest technology IPOs in history, creating major advisory opportunities for TMT investment banks.
The M&A landscape for AI companies also includes a growing number of acqui-hires and talent-motivated transactions. Microsoft's hiring of key Inflection AI staff, Meta's $14.3 billion Scale AI investment paired with the hiring of its CEO, and similar transactions reflect the extreme scarcity of world-class AI research talent. The economics of these transactions are fundamentally different from traditional M&A: the acquirer is paying for human capital and intellectual property rather than revenue or customer relationships, and the valuation framework must account for the risk of key person departure. TMT bankers advising on AI talent transactions must structure retention mechanisms (vesting schedules, non-competes where enforceable, equity incentives) that align the acquired team's interests with the acquirer's long-term objectives.
The strategic imperative driving AI M&A is clear: 67% of business leaders say they will maintain AI spending even in a recession, and enterprise AI deployment is projected to accelerate through 2026 and beyond. For TMT bankers, AI-native companies represent not just a new sub-sector to cover but a transformational force that is reshaping valuation frameworks, deal structures, and competitive dynamics across the entire technology landscape.