
    AI and Valuation: Pricing Disruptive Technology

    How artificial intelligence is creating valuation premiums, why traditional methods struggle with AI assets, and the emerging framework for pricing AI-driven businesses.


    Introduction

    Artificial intelligence is the most significant thematic driver of M&A valuations since the cloud computing wave of the 2010s. Approximately one-third of the 100 largest corporate M&A transactions in 2025 cited AI as part of the strategic rationale. In the technology sector specifically, nearly all of the largest deals referenced AI in their deal thesis. The average revenue multiple for AI-related M&A in 2025 reached 25.8x, dramatically above the 3-7x range for standard SaaS companies. Total tech M&A deal value surged 65% year-over-year in January 2026 alone, driven overwhelmingly by AI infrastructure transactions.

    But AI's impact on valuation is more nuanced than a uniform premium. The market is developing a tiered framework that differentiates AI companies by the depth and defensibility of their technology and applies very different multiples to each tier.

    The AI Valuation Tier Framework

    Tier 1: AI Infrastructure (Highest Multiples)

    Companies building the foundational layers of AI: semiconductor designers (NVIDIA, AMD), cloud computing infrastructure (hyperscalers), data center operators (Equinix, Digital Realty), and large language model developers (OpenAI, Anthropic). These assets command the most extreme valuations because they are the "picks and shovels" of the AI revolution: every AI application depends on them, and supply is constrained.

    Large language model vendors command revenue multiples of approximately 54.8x, reflecting both the transformative potential and the winner-take-most dynamics of foundation model development. Data intelligence companies trade at approximately 41.7x revenue. NVIDIA's market cap exceeded $3 trillion in 2025, making it one of the three most valuable companies globally.

    Tier 2: AI-Native Applications (High Multiples)

    Companies whose core product is built around AI: AI-powered cybersecurity platforms (like Wiz, acquired by Alphabet for $32 billion), AI-driven drug discovery (Recursion Pharmaceuticals, Isomorphic Labs), and AI-native enterprise software. These companies have AI embedded in their core architecture, not bolted on as a feature. They command 30-50% premiums over comparable non-AI software.

    Tier 3: AI-Enhanced Products (Moderate Premium)

    Established software and technology companies that have integrated AI capabilities into existing products (CRM with AI-powered recommendations, ERP with AI-driven forecasting, marketing platforms with AI content generation). The AI premium for Tier 3 companies is smaller (10-20% above non-AI peers) and increasingly under pressure as AI features become table stakes rather than differentiators.

    AI Valuation Premium

    The incremental valuation multiple that a company commands because of its AI capabilities, measured relative to comparable non-AI businesses. The premium reflects the market's assessment that AI creates sustainable competitive advantages through proprietary data assets, unique model architectures, network effects (more users generate more data, which improves the model), and high switching costs. The premium varies dramatically by tier: infrastructure-grade AI assets command 3-5x the multiple of comparable non-AI businesses, while feature-level AI enhancements may command only 10-20% premiums that erode as competitors adopt similar capabilities.
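    The tier arithmetic above can be sketched numerically. A minimal illustration, where the baseline SaaS multiple and the tier uplifts are assumed figures taken loosely from the ranges in this guide, not market data:

```python
# Hypothetical illustration of the tiered AI valuation premium.
# Baseline multiple and tier uplifts are assumptions for this sketch.

def implied_ev(revenue_m, base_multiple, premium):
    """EV = revenue x baseline multiple x (1 + AI premium)."""
    return revenue_m * base_multiple * (1 + premium)

BASE_SAAS_MULTIPLE = 5.0  # midpoint of the 3-7x standard SaaS range

tiers = {
    "Tier 1 (infrastructure)": 2.0,   # ~3x the non-AI multiple
    "Tier 2 (AI-native app)":  0.40,  # 30-50% premium
    "Tier 3 (AI feature)":     0.15,  # 10-20% premium
}

revenue = 100.0  # $100M revenue company
for tier, premium in tiers.items():
    ev = implied_ev(revenue, BASE_SAAS_MULTIPLE, premium)
    effective_multiple = BASE_SAAS_MULTIPLE * (1 + premium)
    print(f"{tier}: EV ${ev:,.0f}M ({effective_multiple:.1f}x revenue)")
```

    The same $100M of revenue produces very different enterprise values depending on which tier the market assigns, which is why peer-group segmentation matters so much in AI comps.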

    Why Traditional Valuation Methods Struggle with AI

    Revenue Quality Is Harder to Assess

    Not all AI revenue is created equal. Consumption-based pricing (common for AI API services, where the customer pays per query or per token) creates variable revenue streams that are harder to project than traditional subscription models. A company reporting $100 million in AI-related revenue may have very different visibility depending on whether that revenue comes from annual contracts (predictable) or usage-based pricing (volatile).
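    The visibility gap between contracted and usage-based revenue can be made concrete with a toy volatility comparison. The quarterly figures are invented for illustration; both companies report the same $100M trailing total:

```python
# Sketch of why usage-based AI revenue is harder to project than
# contracted revenue. Quarterly figures are illustrative assumptions.
import statistics

contracted = [25.0, 25.0, 25.0, 25.0]   # annual contracts, ratable
usage_based = [10.0, 20.0, 30.0, 40.0]  # pay-per-token consumption

def quarterly_volatility(quarters):
    """Coefficient of variation of quarterly revenue (stdev / mean)."""
    return statistics.pstdev(quarters) / statistics.mean(quarters)

print(f"Contracted volatility:  {quarterly_volatility(contracted):.2f}")
print(f"Usage-based volatility: {quarterly_volatility(usage_based):.2f}")
```

    Identical trailing revenue, very different forecast confidence: the higher the coefficient of variation, the wider the projection range a banker should apply.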

    The Moat Question Is Unresolved

    The most consequential valuation debate in AI is whether foundation models will become commoditized (in which case the premium dissipates) or whether network effects and data advantages create durable moats (in which case the premium is justified). OpenAI's lead in general-purpose language models is significant, but open-source alternatives (Meta's Llama, Mistral) are closing the quality gap rapidly. If the model layer commoditizes, value accrues to the application layer (companies using AI to solve specific vertical problems) rather than the infrastructure layer.

    Landmark AI Deals and Their Valuation Implications

    | Deal | Value | Implied Multiple | AI Rationale |
    |------|-------|------------------|--------------|
    | Alphabet / Wiz | $32B | ~25-30x NTM ARR | Cloud-native security: AI-powered threat detection at scale |
    | Microsoft / Activision | $69B | ~30x EBITDA | AI content generation, gaming data for training models |
    | Electronic Arts LBO | $57B | Large-cap gaming | AI-powered game development and personalization |
    | NVIDIA (trading) | $3T+ market cap | 30-40x NTM revenue | Foundational AI compute infrastructure |

    These deals illustrate the range of AI-driven valuations: from pure infrastructure plays (NVIDIA, where the AI premium accounts for the majority of the valuation) to strategic acquisitions where AI is part of a broader thesis (Alphabet/Wiz, where AI-powered security was one dimension of a cloud platform strategy).

    Valuing AI Companies: A Practical Framework

    For investment bankers advising on AI-related transactions, the valuation approach combines standard tools with AI-specific adjustments:

    Use [EV/Revenue](/guides/valuation-investment-banking/ev-revenue-pe-other-multiples-when-ev-ebitda-not-enough) as the primary multiple for high-growth AI companies, because most are pre-profit or have EBITDA margins compressed by growth investment. Segment the peer group into the appropriate tier (infrastructure vs. application vs. feature-level) to avoid comparing NVIDIA to a small AI SaaS startup.

    Supplement with a [DCF](/guides/valuation-investment-banking/walk-me-through-dcf-end-to-end-framework) that models the path to profitability. AI companies investing heavily in model training, compute infrastructure, and talent will show near-term losses that give way to operating leverage as revenue scales. The projection period should be 7-10 years to capture this transition, and the terminal value should reflect mature, at-scale margins (which for software platforms can reach 30-40% EBITDA).
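    A minimal sketch of such a DCF, with a linear margin ramp toward mature profitability and a Gordon-growth terminal value. Every input (growth, margins, discount rate) is an illustrative assumption, not a recommended set:

```python
# Toy DCF sketch of the "path to profitability": revenue scales while
# the EBITDA margin ramps from negative to a mature 35%, and a terminal
# value captures at-scale economics. All inputs are assumptions.

def dcf_value(revenue0, growth, margin_start, margin_end, years,
              discount_rate, terminal_growth):
    pv = 0.0
    revenue = revenue0
    for t in range(1, years + 1):
        revenue *= (1 + growth)
        # linear margin ramp toward mature profitability
        margin = margin_start + (margin_end - margin_start) * t / years
        fcf = revenue * margin  # EBITDA as a rough free-cash-flow proxy
        pv += fcf / (1 + discount_rate) ** t
    terminal_fcf = revenue * margin_end * (1 + terminal_growth)
    terminal_value = terminal_fcf / (discount_rate - terminal_growth)
    pv += terminal_value / (1 + discount_rate) ** years
    return pv

ev = dcf_value(revenue0=100, growth=0.40, margin_start=-0.20,
               margin_end=0.35, years=8, discount_rate=0.12,
               terminal_growth=0.03)
print(f"Implied EV: ${ev:,.0f}M")
```

    Note how much of the value sits in the terminal period: early-year losses are small relative to the discounted at-scale cash flows, which is exactly why the length of the projection window and the mature-margin assumption dominate AI DCF outputs.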

    Assess the moat. The valuation should explicitly address whether the company's AI advantage is defensible. Proprietary training data that improves with use (a data flywheel), patented model architectures, and exclusive partnerships create lasting value. Generic AI features that any competitor can replicate with off-the-shelf models do not.

    Data Flywheel (AI Moat)

    A competitive advantage where a company's AI models improve as more users generate more data, which trains better models, which attract more users, creating a compounding cycle. Companies with strong data flywheels (Google Search, Tesla Autopilot, Amazon product recommendations) build AI advantages that are extremely difficult for competitors to replicate because the moat grows wider with use. In AI valuation, the strength of the data flywheel is one of the most important qualitative factors: a company with a proven flywheel (measurable model improvement tied to data volume) deserves a structurally higher multiple than one without, because the flywheel creates a self-reinforcing competitive advantage that persists even as the underlying AI technology commoditizes.
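    The flywheel's compounding dynamic can be illustrated with a toy simulation. The coefficients are invented for the sketch; the point is the self-reinforcing loop, not the specific numbers:

```python
# Toy simulation of the data flywheel: more users -> more data ->
# better model -> more users. All coefficients are illustrative.

def simulate_flywheel(users, model_quality, periods,
                      data_per_user=1.0, learning_rate=0.05,
                      adoption_rate=0.10):
    history = []
    for _ in range(periods):
        data = users * data_per_user
        # model quality improves with diminishing returns on data volume
        model_quality *= (1 + learning_rate * data / (data + 1000))
        # a better model attracts users in proportion to its quality edge
        users *= (1 + adoption_rate * (model_quality - 1))
        history.append((users, model_quality))
    return history

history = simulate_flywheel(users=1000.0, model_quality=1.0, periods=12)
final_users, final_quality = history[-1]
print(f"After 12 periods: users={final_users:,.0f}, "
      f"quality={final_quality:.3f}")
```

    Because user growth feeds model quality and model quality feeds user growth, both curves accelerate together, which is the "moat grows wider with use" property described above.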

    Interview Questions

    Interview Question #1 (Hard)

    How would you value an AI company?

    AI company valuation depends on the company's position in the value chain:

    Infrastructure layer (chips, cloud compute): These companies (NVIDIA, hyperscalers) often have strong revenue and earnings. Use standard EV/EBITDA, EV/Revenue, and DCF. Multiples are elevated (20-30x+ EBITDA) reflecting growth expectations.

    Application layer (SaaS built on AI): If revenue exists, use EV/ARR or EV/Revenue with SaaS-like comps. Apply the Rule of 40. If pre-revenue, use comparable transactions.

    Feature-layer AI (AI embedded in existing products): Value the entire business, not the AI feature. The AI component adds to growth assumptions in the DCF or justifies a premium multiple.
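    The Rule of 40 screen mentioned for application-layer companies can be expressed as a one-line check (the example figures are hypothetical):

```python
# Rule of 40 screen: revenue growth % + profit margin % >= 40.
# Example inputs are hypothetical.

def rule_of_40(growth_pct, margin_pct):
    score = growth_pct + margin_pct
    return score, score >= 40

score, passes = rule_of_40(growth_pct=55, margin_pct=-10)
print(f"Score: {score} -> {'passes' if passes else 'fails'}")  # 45, passes
```

    A fast-growing but unprofitable AI application company can still clear the bar, which is why the rule is used as a growth-versus-profitability tradeoff screen rather than a pure margin test.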

    Key considerations:
    - Data moat: Does the company have proprietary training data that creates a competitive advantage?
    - Retention and switching costs: Are customers locked in?
    - Commoditization risk: Will the AI capability become a commodity as open-source models improve?

    In 2025-2026, AI M&A multiples averaged approximately 25.8x EV/Revenue, significantly above traditional software.

