AI Datacenter: State of Development — March 2026
Infrastructure & Technology Division — Sector Analysis, Full | Report T37 | Format D
PRZC Research | March 25, 2026 | Infrastructure & Technology
Executive Note. This is a Format D Sector Analysis. It is intended as a comprehensive state-of-play document, not a trade recommendation bulletin. Scenario tables and positioning guidance at the end are directional frameworks, not investment advice. See Disclaimer.
Table of Contents
- Sector Overview — The Build Cycle in Context
- The Build Pipeline — What Is Under Construction
- Power — The Binding Constraint
- Capital Allocation — Who Is Paying and How
- Demand vs. Supply — Is the Ramp Justified?
- Technology Shifts Reshaping Datacenter Design
- Geopolitical and Regulatory Overhang
- Company-by-Company Profiles
- 8A. Hyperscalers
- 8B. Specialist Infrastructure Operators
- 8C. Neocloud and GPU-Cloud Challengers
- Key Risk Scenarios
- Positioning — What to Own, What to Avoid
- Disclaimer
1. Sector Overview — The Build Cycle in Context
The AI datacenter industry is in the most capital-intensive phase of any technology infrastructure cycle in recorded corporate history. By conservative estimates, the five largest US cloud and AI infrastructure providers — Microsoft, Alphabet, Amazon, Meta, and Oracle — collectively committed between $660 billion and $690 billion in capital expenditure for calendar year 2026, an increase of roughly 50% over 2025's approximately $445 billion, which itself nearly doubled 2024's $224 billion. Including global sovereign and private investors, total worldwide spending on datacenter infrastructure in 2025 approached or exceeded $1 trillion for the first time.
The cycle is being driven by three simultaneous forces: (a) the need to train frontier-scale large language models (LLMs) that require clusters of 100,000+ accelerators; (b) the explosion of inference demand as AI products reach consumer and enterprise scale; and (c) competitive signaling — no major hyperscaler can afford to appear under-resourced for AI without capital markets consequences. The result is a spend race with properties more typical of geopolitical arms races than conventional capex cycles.
The structural tension defining 2026 is the divergence between the pace of build and the pace of monetization. Infrastructure is being deployed at a rate that exceeds demonstrable near-term AI revenue by a factor of 10–20x. This does not necessarily make the build wrong — technology infrastructure has always been partially speculative — but it introduces significant risk of capital misallocation, particularly for smaller actors and third-party developers who are dependent on hyperscaler pre-leasing commitments.
Summary Metrics as of March 2026
| Metric | Value |
| --- | --- |
| Global datacenter capacity (operational) | ~103–110 GW |
| Global datacenter capacity (under construction, Sept 2025) | ~23 GW |
| Total JLL-tracked hyperscale pipeline | 770 facilities |
| Announced 2030 target capacity (JLL) | ~200 GW |
| Big-5 hyperscaler CapEx 2025 (estimated actuals) | ~$445B |
| Big-5 hyperscaler CapEx 2026 (guidance/consensus) | $660–$690B |
| Debt raised for datacenter projects in 2025 | ~$182B |
| Percentage of 2025 projects experiencing delays | 30–50% |
| PJM interconnection wait time (avg, 2025) | 8+ years |
| Goldman Sachs projected occupancy peak | >95% (late 2026) |
2. The Build Pipeline — What Is Under Construction
2.1 Scale of the Global Pipeline
As of the end of 2025, more than 23 GW of datacenter capacity was under construction globally, with approximately 75% of that in the United States. JLL's global tracker identifies 770 hyperscale facilities at various stages of planning, construction, or fit-out. The addressable construction pipeline in MW terms has expanded faster than the industry can actually execute — a mismatch that explains both the equipment backlogs and the 30–50% delay rate cited by Sightline Climate and corroborated by multiple developer surveys.
Roughly 64% of the 35 GW active US construction pipeline now sits outside the traditional mature markets (Northern Virginia, greater Phoenix, Silicon Valley), reflecting both saturation in legacy hubs and deliberate geographic diversification toward lower power costs, available land, and shorter grid interconnection queues.
2.2 US Geographic Distribution
Northern Virginia / Data Center Alley (Loudoun County)
Still the largest single concentration of digital infrastructure on the planet. Loudoun County alone carries approximately 6,000 MW of active capacity with another 6,300 MW in various planning stages. Virginia statewide hosts 665+ facilities. However, land and power constraints are increasingly acute. Zoning saturation in Ashburn has forced expansions to Prince William County and the broader I-81 corridor. An oft-cited (though difficult-to-verify) claim holds that roughly 70% of the world's internet traffic passes through Northern Virginia; whatever the precise figure, the region's network density creates strong inertia for further build despite cost pressures.
Texas (Dallas–Fort Worth, Austin, San Antonio, Abilene)
Now ranked second nationally with 413 facilities. Texas is the fastest-growing major market and is on a trajectory to surpass Virginia as the world's largest datacenter market by 2030, per JLL's year-end 2025 North America report. Advantages: large available land parcels, competitive electricity pricing, a deregulated ERCOT grid (both opportunity and risk), and favorable regulatory posture. Abilene has emerged as a major destination specifically tied to the Crusoe Energy / Stargate campus project. Complicating factor: ERCOT has its own grid instability risks separate from PJM.
Iowa, Ohio, Wisconsin, Nebraska (Midwest Corridor)
Increasingly attractive for training clusters where power pricing and availability outweigh latency concerns. Google has operated major Iowa facilities for over a decade. Microsoft's large Iowa campus is one of its most power-dense facilities. The Midwest offers meaningful grid headroom versus the East Coast.
Arizona, Nevada (Sun Belt)
Phoenix remains a major build hub despite growing concerns about water availability for evaporative cooling in an already water-stressed region. Several projects are pivoting to closed-loop liquid cooling specifically to address municipal water commitments. Las Vegas and Reno offer power from Nevada's renewable build-out.
2.3 European Geographic Distribution
Ireland (Dublin)
Ireland remains the dominant European datacenter hub, driven by favorable tax treatment, strong fiber connectivity, and an English-speaking workforce. However, EirGrid has imposed formal moratoria on new large power connections in the Greater Dublin Area at various points since 2021. Multiple hyperscaler projects have faced multi-year delays. Ireland's grid is increasingly renewable-heavy (wind) but lacks the baseload capacity to absorb large new synchronous loads without instability risk.
Netherlands (Amsterdam / AMS-IX)
The Amsterdam interconnect hub is globally significant. The Dutch government has implemented formal zoning restrictions ("datacentrum beleid") capping new construction in the Haarlemmermeer municipality. Several projects have been denied permits or put on hold. Hyperscalers are redirecting Dutch-bound investment to lower-restriction Dutch provinces and to neighboring countries.
Spain
Spain was positioned as a growth market — abundant solar, lower land costs, and EU connectivity — but the April 28, 2025 Iberian blackout fundamentally altered the risk calculus. The event, which saw Spain lose over 2.5 GW of solar generation within seconds due to reactive power control failures and voltage oscillation cascades, left the entire Iberian Peninsula without power for several hours. The ENTSO-E final report (released March 23, 2026) identified systemic regulatory and technical failures rather than a single point of failure. Several planned datacenter projects in Spain have undergone revised grid resilience assessments since May 2025. Risk premiums embedded in Spanish power purchase agreement pricing have widened significantly.
Poland, Germany, and Central Europe
Poland is emerging as a viable European alternative: lower land costs, substantial coal-to-gas transition underway, EU membership, and relative grid stability. Germany remains attractive for network connectivity but energy costs are among the highest in Europe post-Energiewende and the 2022 gas crisis. Google, Microsoft, and a range of European cloud providers are expanding in Warsaw and Lodz corridors.
2.4 Asia-Pacific Distribution
Japan (NTT's home market, now committed to doubling capacity), Singapore (constrained by government moratorium now selectively lifted), South Korea (SK Telecom, KT Group expansions), Australia (Sydney, Melbourne), and India (Mumbai, Hyderabad, Chennai — accelerating significantly post-2024 policy reforms) are the primary nodes. Southeast Asia — particularly Johor, Malaysia — had emerged as a major AI campus destination for capacity redirected from restricted markets, though US export control Tier 2 categorizations have complicated the chip procurement picture for these facilities (see Section 7).
2.5 Notable Project Cancellations, Pauses, and Restructurings
Microsoft — The 2GW Lease Walkback
This is the most widely reported capacity pullback of the current cycle. TD Cowen analysts in early 2025 identified Microsoft walking away from pending datacenter leases with third-party developers totaling approximately 2 GW, primarily non-binding letters of intent (LOIs) in the US and Europe. The SemiAnalysis newsletter subsequently clarified the picture: the 2 GW figure refers to LOI-stage, non-binding pre-lease agreements, not firm contracts. Microsoft separately announced a freeze on approximately 1.5 GW of near-term self-build projects scheduled to come online in 2025–2026. The underlying reason was Microsoft's decision to restructure its multiyear compute agreement with OpenAI — reducing direct Microsoft-hosted OpenAI training workloads as OpenAI gained the ability to source compute from alternative providers — while Microsoft retains a right of first refusal on new OpenAI compute demand. Microsoft's committed contracted pipeline (~5 GW under binding agreements for delivery 2025–2028) remains intact. The company remains on track to spend approximately $80 billion on AI datacenter build in 2025.
OpenAI Stargate — Oracle Texas Expansion Collapse
The Stargate program, announced in January 2025 as a $500 billion, multi-year AI infrastructure joint venture among OpenAI, Oracle, and SoftBank, has experienced significant structural dysfunction. In March 2026, Bloomberg and The Register confirmed that Oracle and OpenAI have abandoned plans to expand the flagship Abilene, Texas data center campus from its existing ~1.2 GW power capacity to the envisioned ~2.0 GW. Financing disagreements and OpenAI's shifting capacity projections were cited. More fundamentally, more than a year after the announcement, the Stargate JV has reportedly not hired permanent staff and lacks a clear governance structure, with the three partners unable to resolve disputes over operational control. Oracle's separate agreement (July 2025) to develop 4.5 GW of datacenter capacity for OpenAI under a direct cloud infrastructure contract remains active. The collapsed Texas expansion created an opening for Meta Platforms to potentially lease the planned expansion site from developer Crusoe Energy.
3. Power — The Binding Constraint
Power availability is not merely a constraint on AI datacenter development — it is the single most determinative factor in the competitive positioning of any geography, developer, or operator over the next five years. The sector is confronting a structural deficit between power demand growth and grid delivery capacity that cannot be resolved on any timeline relevant to near-term build plans.
3.1 PJM Interconnection Queue
PJM Interconnection is the grid operator serving the Mid-Atlantic and Midwest — the region that includes Northern Virginia's Data Center Alley, which accounts for the largest concentration of AI workloads globally. The situation is critical:
- The average timeline from interconnection application to commercial operation has lengthened from under two years in 2008 to more than eight years in 2025.
- PJM's 2025 Long-Term Load Forecast projects 32 GW of peak load growth between 2024 and 2030, with data centers responsible for 94% of that increase.
- Over 170,000 MW of new generation requests have been received by PJM since 2023, with 30,000 MW still in the transition processing queue for 2026.
- Analysis from Synapse Energy Economics projects PJM consumers will pay an extra $100 billion through 2033 due to demand exceeding supply capacity, with $9.4 billion in incremental electricity cost already absorbed by 67 million PJM-served customers in summer 2025 and $1.4 billion more locked in for summer 2026.
On December 18, 2025, FERC issued a unanimous order directing PJM to create formal rules for datacenter colocation at power plants — enabling large loads to connect directly at generation facilities, bypassing transmission queues. The order establishes three new transmission service options, with compliance deadlines beginning January 2026, and represents the most significant structural reform to the colocation framework in a decade. Implementation remains uncertain.
3.2 European Grid Constraints — Post-Iberian Blackout
The April 28, 2025 Iberian blackout was the most significant European grid failure since 2006. Within a 48-second window beginning at 12:32:00, cascading photovoltaic and concentrated solar generation disconnections totaling over 2.5 GW in southern Spain collapsed the Iberian synchronous zone. ENTSO-E's 440-page March 2026 final report attributed the event to systemic failures in voltage regulation and reactive power control, exacerbated by low inertia conditions as synchronous conventional generation ran at minimal dispatch levels in a high-renewable-penetration grid.
The datacenter implications extend beyond Spain:
- The event has accelerated regulatory scrutiny of all datacenter load growth across the EU, particularly large synchronous loads in grids with high renewable penetration and low inertia.
- Several planned datacenter projects in southern Spain — attracted by cheap solar PPA pricing — have been reassessed or paused pending updated grid stability studies.
- The EU's proposed Cloud and AI Development Act (expected Q1 2026) has been drafted with explicit grid stability obligations for large datacenter operators, including requirements to participate in demand response and provide reactive power support in exchange for expedited permitting.
- Ireland, the Netherlands, and Germany have separately tightened technical due diligence requirements for new large grid connections in the post-Iberian regulatory environment.
3.3 Nuclear Power Deals — Current Status
Nuclear energy has emerged as the preferred long-duration, carbon-free baseload solution for hyperscalers seeking to pair clean energy with guaranteed dispatchable power. The scale of commitment is without precedent in corporate energy procurement history.
- Microsoft — Three Mile Island (Constellation Energy, ~835 MW): The defining transaction of the cycle. Microsoft signed a 20-year PPA valued at approximately $16 billion with Constellation Energy to restart the shuttered Unit 1 reactor at Three Mile Island (now renamed the Christopher M. Crane Clean Energy Center). The DOE Loan Programs Office closed a $1 billion federal loan to Constellation in November 2025. The restart has been accelerated from the original 2028 target to a projected 2027 operational date.
- Amazon — Multi-Site Nuclear Portfolio: Amazon secured a 1.92 GW PPA from the Susquehanna nuclear plant and committed $500 million to SMR development. Additional deals include contracts with Energy Northwest (960 MW) and Dominion Energy (300+ MW) serving Virginia-region datacenters. Amazon is the largest single corporate nuclear energy buyer by contracted volume.
- Google — Kairos Power SMR Fleet: Google signed the first US corporate small modular reactor fleet contract with Kairos Power, targeting 500 MW of SMR capacity operational by 2030+. The Google/Kairos deal established the commercial template for corporate SMR procurement.
- Meta — 6.6 GW Nuclear Procurement Program: In early 2026, Meta announced an aggressive 6.6 GW nuclear procurement strategy ("Prometheus Program"). Constellation signed a 20-year deal with Meta to supply 1,100 MW from the Clinton, Illinois nuclear plant beginning June 2027. Additional deals are reportedly in late-stage negotiation.
Sector-wide nuclear commitment: 10+ GW in new US corporate nuclear PPAs were signed in 2025 alone. Experts now project "AI-Nuclear Clusters" will become a standard architectural model for hyperscale training facilities.
SMR status note: No SMRs are operational for commercial datacenter use as of March 2026. NuScale's commercial program collapse in 2023 remains a cautionary precedent. The Kairos, X-energy, and TerraPower programs are in various construction or NRC licensing phases, with the earliest commercial operation dates in the 2029–2031 range. The gap between nuclear PPA signing and nuclear power actually flowing represents a multi-year bridge period during which gas generation and grid power must fill the load.
3.4 On-Site Generation as Bridge Solution
Given interconnection delays measured in years and new nuclear capacity the better part of a decade away, a meaningful share of new AI datacenter capacity is being powered by on-site or behind-the-meter gas generation:
- Combustion turbines and reciprocating gas engines are being permitted and installed at AI campuses, particularly in Texas (ERCOT jurisdiction) and in markets where direct interconnection to existing gas generation plants is permitted.
- Fuel cell deployments (Bloom Energy, FuelCell Energy) are scaling rapidly for facilities where gas-to-power conversion at the device level offers permitting speed advantages over grid-connected builds.
- FERC's December 2025 colocation ruling creates a formal regulatory pathway for behind-the-meter generation at grid-connected plants, potentially accelerating gas-turbine colocation arrangements.
- Environmental groups and several state regulators have raised objections to the carbon intensity of on-site gas generation at scale, creating a political liability for hyperscalers with public net-zero commitments.
3.5 The 50% Pipeline Delay Figure — Attribution and Context
The widely circulated figure that "up to 50% of the world's data centers may be delayed" originates from research published by Sightline Climate and corroborated by surveys from Uptime Institute (which found 75%+ of organizations reported supply chain disruptions over the prior 18 months). The figure is a global average; actual delay rates vary significantly by region and project stage:
- Power connection delays are the dominant cause, cited by 48% of respondents as the single largest scheduling constraint in Bain & Company's 2025 datacenter construction survey.
- Electrical equipment lead times — transformers, switchgear, PDUs — are running 12–18 months from order to delivery in many markets, with grain-oriented electrical steel prices up approximately 100% since 2020.
- Cooling system components for liquid-cooled AI builds have lead times of 9–14 months.
- The Bain figure should be read as applying to projects scheduled for completion in 2025–2026; longer-dated projects in the 2027–2029 window have more procurement runway.
4. Capital Allocation — Who Is Paying and How
4.1 Big-5 Hyperscaler CapEx — 2025 Actuals and 2026 Guidance
| Company | 2024 CapEx (A) | 2025 CapEx (Est. A) | 2026 CapEx (Guidance/Consensus) |
| --- | --- | --- | --- |
| Amazon (AWS) | ~$83B | ~$125B | ~$200B |
| Alphabet (Google) | ~$52B | ~$91–$93B | ~$175–$185B |
| Microsoft | ~$56B | ~$80B | ~$120B+ |
| Meta Platforms | ~$37B | ~$65–$72B | ~$115–$135B |
| Oracle | ~$9B | ~$20B | ~$50B |
| Big-5 Total | ~$237B | ~$381–$390B | ~$660–$690B |
Source: Company earnings disclosures, IEEE ComSoc analysis, Futurum Group, MUFG Americas research (Dec 2025). Figures rounded.
- Amazon's 2026 guidance of ~$200B is the most aggressive in absolute terms and would represent a 60% YoY increase from 2025.
- Alphabet revised its 2025 capex guidance upward three times over the course of 2025, signaling chronic underestimation of AI infrastructure needs.
- Meta's range for 2026 ($115–135B) is the widest of any hyperscaler, reflecting genuine internal uncertainty about optimal deployment pace.
- Approximately 75% of the aggregate 2026 figure is estimated to be AI-specific infrastructure (GPU clusters, liquid cooling, associated power).
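The growth rates in the bullets above follow directly from the table's figures. The short sketch below reproduces the YoY arithmetic; note that midpoints are used for guidance ranges, which is our simplifying assumption, not the companies' disclosure.

```python
# Sketch: YoY growth implied by the capex table above.
# Midpoints are used for guidance ranges (an assumption; figures are $B, rounded).

capex = {  # company: (2024 actual, 2025 estimate, 2026 guidance midpoint)
    "Amazon (AWS)":   (83, 125.0, 200.0),
    "Alphabet":       (52,  92.0, 180.0),
    "Microsoft":      (56,  80.0, 120.0),
    "Meta Platforms": (37,  68.5, 125.0),
    "Oracle":         ( 9,  20.0,  50.0),
}

def yoy(prev, curr):
    """Percentage growth from prev to curr."""
    return 100.0 * (curr - prev) / prev

for name, (y24, y25, y26) in capex.items():
    print(f"{name}: 2025 {yoy(y24, y25):+.0f}%, 2026 {yoy(y25, y26):+.0f}%")

total_25 = sum(v[1] for v in capex.values())
total_26 = sum(v[2] for v in capex.values())
print(f"Big-5 totals: 2025 ~${total_25:.0f}B -> 2026 ~${total_26:.0f}B "
      f"({yoy(total_25, total_26):+.0f}% YoY)")
```

Amazon's 2025-to-2026 step computes to the +60% cited above, and the Big-5 midpoint total (~$675B) lands inside the $660–$690B guidance band.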
4.2 Debt Issuance and Financing Structure
The transition from equity-funded to debt-funded datacenter build is one of the most significant structural shifts in this cycle:
- Total debt raised for datacenter-linked purposes in 2025 is estimated at $182 billion, approximately doubling the 2024 figure of ~$92 billion (S&P data).
- Hyperscalers themselves issued approximately $121 billion in new debt in 2025, with an outsized portion in the second half of the year. Meta raised $62 billion in debt cumulatively since 2022, with roughly half issued in 2025 alone.
- Off-balance-sheet vehicles (SPVs and joint ventures) have absorbed a further $120 billion in AI infrastructure financing. The most prominent single transaction is Meta and Blue Owl Capital's joint venture ($27 billion, the largest single private credit infrastructure deal on record).
- xAI, Oracle, and CoreWeave have also used SPV structures to shift financing off corporate balance sheets, with risks transferred to institutional fixed-income investors. The aggregate off-balance-sheet AI infrastructure debt is not yet fully catalogued in standard credit analysis.
Capex as a share of operating cash flow reached an extreme 94% for the hyperscaler group in 2025, up 18 percentage points from 2024. This is not a crisis for balance sheets of the caliber of Microsoft, Google, or Amazon — but it does constrain financial flexibility and investor tolerance for returns to shareholders.
4.3 Third-Party Developers and REITs
| Operator | Commitment / Metric |
| --- | --- |
| Equinix (EQIX) | $4–5B/year capex guided through 2029; 58 active expansion projects globally; aims to double capacity by end of 2029 |
| Digital Realty (DLR) | Multi-phase campuses in Dallas, Tokyo, Frankfurt; each >250 MW potential; backed by infrastructure fund JV structures |
| Iron Mountain (IRM) | Guiding 125 MW of leasing in 2025; expanding rapidly via joint ventures in Europe and the US |
| NTT Global Data Centers | Committed March 2026 to doubling total capacity |
REITs face a structural disadvantage relative to hyperscalers: their cost of capital is higher, their balance sheet flexibility lower, and they are dependent on hyperscaler pre-leasing decisions that can shift (as the Microsoft LOI cancellations demonstrated). Equinix's multi-year capex guidance signals genuine confidence in the demand picture, but the company's xScale strategy (purpose-built hyperscale campuses) leaves it exposed to single-tenant concentration risk.
4.4 Private Credit Dynamics
Private credit's role in AI datacenter financing has expanded materially:
- Traditional lenders (banks, IG bond markets) remain available for investment-grade hyperscalers but cannot move at the speed required for speculative development.
- Infrastructure credit funds from KKR, Blackstone, Blue Owl, Apollo, and Ares have deployed billions into datacenter construction loans, often with yields of 8–12% and real-asset security (land, equipment, power contracts).
- The record $27 billion Meta/Blue Owl JV established a pricing benchmark for mega-ticket infrastructure private credit transactions.
- Risk repricing: As of early 2026, there are nascent signs of credit discipline emerging at the margins — debt underwriters are scrutinizing single-tenant exposure (where the pre-lease counterparty is the sole anchor tenant), and lender due diligence on power delivery certainty has tightened post-Iberian blackout. No large-scale pullback is evident, but spreads on speculative development credit have widened approximately 50–75 bps from their 2024 tights.
5. Demand vs. Supply — Is the Ramp Justified?
5.1 AI Workload Demand Growth
Underlying demand for AI compute is unambiguously real and growing. The question is one of pace and monetization lag. Key metrics:
- Inference is now the dominant AI workload by cost. Deloitte (November 2025) estimated inference accounted for 50% of all AI compute in 2025 and will represent two-thirds of AI compute in 2026. Inference crossed 55% of total AI cloud infrastructure spend in early 2026.
- Training clusters remain the largest single-site power consumers. A 100,000-GPU training cluster at current H100/H200 density consumes 50–100 MW. At GB200/Blackwell Ultra density, a comparable cluster in 2026 approaches 150–200 MW.
- The training-to-inference architectural divergence is beginning to fragment build strategies. Training workloads are latency-insensitive and can be sited in remote power-rich locations. Inference workloads require geographic distribution and low latency, driving a parallel build of distributed inference-optimized facilities closer to population centers.
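The cluster power figures cited above follow from simple per-accelerator arithmetic. The sketch below is a back-of-envelope model; the per-GPU board power, system-overhead multiplier, and PUE values are illustrative assumptions, not measured figures.

```python
# Back-of-envelope check on the training-cluster power figures above.
# gpu_kw, overhead, and pue are illustrative assumptions, not measured values.

def cluster_power_mw(num_gpus, gpu_kw, overhead, pue):
    """Total facility power (MW) for a training cluster.

    gpu_kw   -- accelerator board power in kW (H100-class ~0.7, Blackwell-class ~1.2)
    overhead -- multiplier for host CPUs, networking, storage (assumed 1.2x)
    pue      -- power usage effectiveness: cooling and distribution losses
    """
    return num_gpus * gpu_kw * overhead * pue / 1000.0

# 100k H100-class GPUs: lands near the top of the 50-100 MW band cited above
print(round(cluster_power_mw(100_000, 0.7, 1.2, 1.15), 1))
# Same GPU count at Blackwell-class board power: consistent with 150-200 MW
print(round(cluster_power_mw(100_000, 1.2, 1.2, 1.15), 1))
```

The model makes the sensitivity obvious: at fixed GPU count, the generational step in board power alone moves a campus from one power-planning tier to the next.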
5.2 Occupancy Rates
Operational facilities have tight availability. Goldman Sachs research projects occupancy rates rising from approximately 85% in 2023 to a projected peak of more than 95% in late 2026, followed by moderation beginning in 2027 as new supply comes online. At >95% occupancy, effective spare capacity for unplanned workload surges is functionally zero — any significant training run overage or inference demand spike faces queuing constraints. This underpins hyperscaler urgency to build ahead of demand.
5.3 Lead Times for New Capacity
End-to-end: site selection to first power-on for a purpose-built AI campus runs 24–36 months under normal conditions; 18 months is achievable for build-to-suit projects with pre-approved sites and pre-ordered equipment, but it represents the optimistic tail of the distribution. The implication: capacity being ordered today will not be operational before late 2027 at the earliest under typical scenarios.
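The lead-time arithmetic can be made explicit. The helper below is a sketch that assumes a March 2026 start and shifts dates by the month ranges cited above.

```python
# Sketch of the lead-time arithmetic above: projected first power-on dates
# for a campus whose site selection begins in March 2026 (assumed start date).
from datetime import date

def add_months(d, months):
    """Shift a date forward by a whole number of months (day clamped to the 1st)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

start = date(2026, 3, 1)
for label, months in [("build-to-suit, optimistic tail", 18),
                      ("typical range, low end", 24),
                      ("typical range, high end", 36)]:
    print(f"{label}: {add_months(start, months)}")
# The 18-month case lands in September 2027, i.e. "late 2027 at the earliest".
```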
5.4 The Monetization Gap
This is the central tension in the sector:
- AI-related services (AI APIs, inference-billed workloads, AI-enhanced SaaS) were estimated to generate approximately $25 billion in revenue in 2025.
- Hyperscalers spent approximately $380–450 billion on AI infrastructure in 2025.
- The ratio is approximately 10–18x spend-to-revenue — higher than any prior technology infrastructure build cycle at a comparable stage.
- McKinsey's widely cited analysis places the addressable AI infrastructure need at $7 trillion cumulatively through 2030, supported by a projected demand trajectory that most major forecasters describe as aggressive.
- Bain estimates that even the most optimistic enterprise AI adoption scenario generates $1.2 trillion in AI-attributable revenue cumulatively through the build period, against a potential $2 trillion in total infrastructure spend — implying a structural gap.
- The standard hyperscaler bull case argues the monetization gap closes through: (a) AI-enhanced cloud pricing premiums, (b) direct AI API revenue growth compounding at 100%+ YoY, (c) first-mover advantage in AI model hosting, and (d) productivity internalized within hyperscaler operations themselves (reducing human labor costs).
The gap is real and wide. It does not necessarily portend a crash, but it does mean that capital allocated to AI infrastructure at current multiples is pricing in an extremely optimistic adoption scenario with very limited margin for error.
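Using the point estimates above ($25 billion of AI revenue against $380–450 billion of spend), the ratio computes to the upper half of the cited 10–18x range; the 10x end of that range presumably rests on higher revenue estimates. A minimal calculation:

```python
# The spend-to-revenue ratio above, computed from this section's point estimates ($B).

ai_revenue_2025 = 25.0            # estimated AI-related services revenue
ai_spend_2025 = (380.0, 450.0)    # estimated AI infrastructure spend range

ratios = tuple(spend / ai_revenue_2025 for spend in ai_spend_2025)
print(f"spend-to-revenue: {ratios[0]:.1f}x to {ratios[1]:.1f}x")

# Bain's cumulative framing: ~$2T of infrastructure spend vs ~$1.2T of
# AI-attributable revenue through the build period
print(f"implied cumulative shortfall: ${2000 - 1200:.0f}B")
```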
5.5 Goldman Sachs 2027 Oversupply Warning
Goldman Sachs research published in September 2025 identified a risk of long-term market oversupply emerging in 2027 and beyond. The specific mechanics:
- Global datacenter power demand is projected to reach 84 GW by 2027, with AI accounting for 27% of the overall market.
- At current build paces, the supply of commissioned AI capacity in 2027–2028 will outpace the growth in monetizable demand at current AI revenue run rates.
- Goldman identifies four scenarios ranging from "sustained demand surge" (capacity shortage persists) to "efficiency-driven compression" (more capable models require less compute per query, collapsing utilization on existing hardware).
- The DeepSeek-type efficiency scenario is the most bearish for infrastructure operators: if model efficiency doubles every 18 months, the hardware deployed in 2025–2026 faces accelerating functional obsolescence.
- Moderating factor: Goldman acknowledges that "Jevons Paradox" dynamics — where efficiency gains expand total use rather than reducing infrastructure demand — could sustain the demand picture well past 2027.
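The tension between the efficiency-compression and Jevons scenarios can be expressed as a toy model: total compute demand equals query volume times compute per query. The growth rates below are illustrative assumptions, not forecasts.

```python
# Toy model of the two scenarios above: total compute demand is query volume
# times compute per query, with per-query compute halving every 18 months.
# Query-growth rates are illustrative assumptions, not forecasts.

def compute_demand(years, query_growth_per_year, efficiency_doubling_months=18):
    """Relative compute demand after `years`, normalized to 1.0 today."""
    queries = (1 + query_growth_per_year) ** years
    compute_per_query = 0.5 ** (years * 12 / efficiency_doubling_months)
    return queries * compute_per_query

# Efficiency-compression case: 40%/yr query growth; efficiency gains dominate
# and demand on installed hardware shrinks
print(round(compute_demand(3, 0.40), 2))
# Jevons case: cheaper inference drives 80%/yr query growth; demand still rises
print(round(compute_demand(3, 0.80), 2))
```

The crossover sits wherever query growth compounds faster than efficiency gains, which is exactly the empirical question the Goldman scenarios leave open.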
6. Technology Shifts Reshaping Datacenter Design
6.1 Liquid Cooling — From Edge Case to Standard
The transition from air cooling to liquid cooling is not optional for AI-optimized facilities — it is architecturally mandated by the power densities of current-generation accelerators.
- NVIDIA Blackwell Ultra (GB200 NVL72): Packs 72 Blackwell GPUs and 36 Grace CPUs into a single rack. Power draw: up to 140 kW per rack. Requires direct liquid cooling (cold plates) and a 250 kW cooling distribution unit (CDU). Air cooling is not viable at these densities.
- Historical comparison: Traditional enterprise server racks ran 5–15 kW. AI racks at Blackwell generation: 120–142 kW. Upcoming NVIDIA Vera Rubin Ultra (end-2026, early 2027) will require over 400 kW per rack, driving a move to 800V DC power distribution (NVIDIA originally planned 600V but determined it was insufficient).
- Adoption trajectory: Single-phase direct-to-chip cooling became standard for AI builds in 2025–2026. Two-phase immersion cooling is in pilot at multiple hyperscalers and is projected to reach mainstream AI adoption by 2027–2028.
- Market size: Global datacenter cooling market valued at $10.8 billion in 2025, projected to reach $25.1 billion by 2031 (CAGR ~15%).
- Water implications: Liquid cooling reduces air cooling's evaporative water consumption but introduces closed-loop coolant management requirements. NVIDIA claims a 300x improvement in water efficiency with Blackwell liquid cooling vs. air-cooled predecessors — an important distinction as municipal water commitments face scrutiny in Arizona, Texas, and other water-stressed markets.
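The rack-level figures above imply some useful derived numbers. The sketch below uses the cited GB200 NVL72 specs (140 kW per rack, 72 GPUs, 250 kW CDU); the 150 MW cluster sizing is an illustrative assumption.

```python
# Rack-level arithmetic implied by the figures above (GB200 NVL72 class).
# Rack specs are as cited; the 150 MW cluster sizing is an illustrative assumption.

rack_kw = 140          # per-rack power draw
gpus_per_rack = 72     # Blackwell GPUs per NVL72 rack (plus 36 Grace CPUs)
cdu_kw = 250           # cooling distribution unit rating per rack

# Effective power per GPU slot, including CPUs, NVLink switches, and fans
per_gpu_kw = rack_kw / gpus_per_rack
print(round(per_gpu_kw, 2))   # roughly 1.94 kW per GPU slot

# Racks (and matching CDUs) needed for a 150 MW Blackwell-class cluster
cluster_mw = 150
racks = cluster_mw * 1000 / rack_kw
print(round(racks), "racks;", f"~{round(racks * gpus_per_rack / 1000)}k GPUs")
```

At roughly a thousand racks per 150 MW campus, the 250 kW-per-rack CDU requirement explains why cooling distribution hardware now sits on the same critical procurement path as the accelerators themselves.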
6.2 NVIDIA's Position — Dominant but Contested
NVIDIA remains the overwhelmingly dominant supplier of AI accelerators, but its market share is under structural pressure from multiple directions:
- The GB200/NVL72 rack is the benchmark AI training and inference platform as of early 2026. NVIDIA's CUDA software ecosystem creates substantial switching costs that go beyond raw hardware performance.
- Custom silicon market share was approximately 37% of AI accelerator capacity in 2024, projected to rise to 45% by 2028 as hyperscaler ASICs mature.
- Google TPU v7 (Ironwood): General availability began in mid-2025. Performance is 4,614 TFLOPS (BF16) vs. 459 TFLOPS for TPUv5p — a 10x improvement. Ironwood is positioned as the primary inference accelerator for Google's internal workloads and is now available to external clients on Google Cloud. Meta is reportedly in advanced talks with Google for a multibillion-dollar TPU deployment starting mid-2026.
- Microsoft Maia 200: Mass production was delayed from 2025 to at least mid-2026. The delay reflects the difficulty of bringing custom silicon to high-volume production at competitive economics.
- Amazon Trainium 2/Inferentia: AWS's proprietary AI chips are in production and are used for Amazon's internal AI training and inference workloads. External adoption has been limited.
- The dependency dynamic is structural: Microsoft, Google, Amazon, and Meta are simultaneously NVIDIA's largest customers and its most motivated long-term competitors. Each will continue buying NVIDIA GPUs at scale while investing in custom silicon to reduce their long-term vendor exposure and margin payments to Santa Clara.
6.3 DeepSeek — What It Actually Changed
The release of DeepSeek-R1 in January 2025 caused a sharp single-day decline in NVIDIA's market capitalization and triggered widespread discussion of whether more efficient AI models would reduce the hardware intensity of AI. The sector's actual response, observed over the following twelve months, is nuanced:
- No major cancellations resulted. Microsoft, Meta, Google, and Amazon explicitly reaffirmed their 2025 capex plans within weeks of the DeepSeek announcement. Datacenter construction surveys through mid-2025 showed no net reduction in planned capacity.
- The "Jevons Paradox" interpretation prevailed. Lower inference cost per query → more queries run → more total compute consumed. This has been the dominant observed outcome.
- DeepSeek's successor models have been constrained. DeepSeek's next-generation model development has been delayed by US export restrictions on NVIDIA GPUs, which limit the supply of H20 accelerators available to Chinese AI labs.
- The efficiency signal has influenced architectural thinking. Inference efficiency improvements are shifting some procurement from brute-force GPU cluster scaling to higher-quality inference optimization (attention mechanisms, quantization, speculative decoding), which has modestly reduced the incremental GPU count required per unit of inference throughput.
- Bottom line: DeepSeek altered the narrative briefly but did not alter the build trajectory materially. It increased urgency around inference optimization without reducing training cluster orders.
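The Jevons Paradox mechanism in the bullets above can be made concrete with a toy calculation. All numbers below are hypothetical round figures chosen only to illustrate the mechanism, not measured inference economics:

```python
# Illustrative Jevons-paradox arithmetic for inference compute.
# All numbers are hypothetical, chosen only to show the mechanism:
# a drop in compute cost per query can coexist with a rise in total
# compute if query volume grows faster than efficiency improves.

def total_compute(queries: float, flops_per_query: float) -> float:
    """Total inference compute consumed, in FLOPs."""
    return queries * flops_per_query

# Before an efficiency breakthrough (hypothetical baseline).
before = total_compute(queries=1e9, flops_per_query=1e12)

# After: each query is 5x cheaper in compute, but lower prices
# induce 8x more queries (the Jevons effect).
after = total_compute(queries=8e9, flops_per_query=1e12 / 5)

print(f"compute ratio after/before: {after / before:.2f}")  # 1.60 → +60% total compute
```

Under these illustrative assumptions, a 5x efficiency gain still produces a 60% increase in total compute demand — consistent with the observed outcome that efficiency improvements did not reduce the build trajectory.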
6.4 Inference vs. Training — Architectural and Commercial Implications
The inference/training split is the most important design variable in new AI campus planning:
| Dimension | Training | Inference |
| --- | --- | --- |
| Primary hardware | NVIDIA H100/H200/B200, TPU v5/v7 | NVIDIA H200, B100, TPU v7, custom ASICs, Groq LPU |
| Latency sensitivity | Low (batch jobs) | High (user-facing, <100 ms SLA) |
| Optimal siting | Remote, power-rich, lowest cost/MW | Distributed, near-population, edge-adjacent |
| Power draw per cluster | 50–200 MW for frontier models | 5–50 MW distributed |
| Utilization pattern | Burst (training runs) | Sustained (24/7 production) |
| Monetization structure | Internal cost center | External API revenue |
7. Geopolitical and Regulatory Overhang
7.1 US Export Controls on AI Chips
The Biden administration's January 15, 2025 AI Diffusion Rule imposed a tiered global licensing framework on advanced chips, computing systems, and AI model weights. The rule created three country tiers with differential access to US AI hardware.
The Trump administration announced the rescission of the Biden AI Diffusion Rule in May 2025, signaling a pivot to a more targeted, country-specific approach. However:
- On November 10, 2025, BIS suspended its "Affiliates Rule" for one year — the rule that had extended export restrictions to entities majority-owned by restricted-country parents, complicating compliance for multinational operators with such ownership structures.
- The suspension expires in November 2026, at which point the Affiliates Rule's restrictions resume — a compliance deadline that is reshaping ownership structures for datacenter operators with non-US investors.
- The net effect: advanced NVIDIA chips (H100, H200, B200) remain restricted for China and Tier 3 countries, while Tier 2 countries (including Malaysia, Vietnam, India, UAE, and Saudi Arabia) retain access to US chips subject to security conditions and reporting requirements.
- Malaysia (Johor AI campus) and UAE (Stargate UAE, G42) are both navigating the Tier 2 framework. The UAE received explicit commitments from the Trump administration in 2025 for "Tier 1-equivalent" treatment for G42-hosted facilities with US operator oversight.
- Congress approved a 23% increase in BIS's FY2026 budget, with bipartisan support for stronger semiconductor export enforcement.
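The tiered framework described above reduces, at its coarsest, to a chip-and-country lookup. The sketch below is a simplified illustration of that logic only — the tier assignments, restricted-chip list, and license outcomes are stand-ins, not the actual BIS country lists or EAR license conditions:

```python
# Simplified sketch of the tiered export-control logic described above.
# Tier assignments and conditions here are illustrative stand-ins — the
# real BIS country lists and license conditions are far more granular.

RESTRICTED_CHIPS = {"H100", "H200", "B200"}

# Hypothetical tier map, for illustration only.
COUNTRY_TIER = {
    "US": 1, "UK": 1,
    "Malaysia": 2, "Vietnam": 2, "India": 2, "UAE": 2, "Saudi Arabia": 2,
    "China": 3,
}

def export_status(chip: str, country: str) -> str:
    """Return a coarse license outcome for shipping `chip` to `country`."""
    tier = COUNTRY_TIER.get(country, 2)  # unknown countries default to Tier 2 here
    if chip not in RESTRICTED_CHIPS or tier == 1:
        return "unrestricted"
    if tier == 2:
        return "allowed with security conditions and reporting"
    return "restricted"

print(export_status("H100", "China"))     # restricted
print(export_status("B200", "Malaysia"))  # allowed with security conditions and reporting
print(export_status("H200", "UK"))        # unrestricted
```

The UAE's "Tier 1-equivalent" commitment discussed below is, in this framing, a negotiated exception to the default tier outcome rather than a change to the framework itself.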
7.2 EU AI Act Datacenter Compliance
The EU AI Act's compliance calendar is directly relevant to datacenter operators:
- February 2025: Prohibited AI practices enforceable.
- August 2025: General-purpose AI obligations enforceable.
- August 2, 2026 (Key deadline): High-risk AI system requirements (Annex III) become enforceable, covering AI used in employment, credit, education, and law enforcement.
Infrastructure-specific implications: Every layer of AI architecture hosted in EU facilities must demonstrate accountability and data lineage tracking. High-risk AI systems require documented risk classification, human oversight mechanisms, and technical robustness testing. Datacenter operators hosting multi-tenant AI workloads are increasingly required to implement contractual compliance frameworks with tenants, adding legal overhead to colocation agreements.
The EU's proposed Cloud and AI Development Act (expected Q1 2026) aims to triple EU datacenter processing capacity within 5–7 years through simplified permitting, conditional on compliance with energy efficiency, water use, and circularity requirements.
The EU Data Centre Energy Efficiency Package (Q1 2026) will impose mandatory reporting and carbon-neutral targets for 2030.
7.3 China — Domestic AI Infrastructure Build
China's AI infrastructure build is proceeding at scale, structurally separated from Western supply chains following the escalation of US chip export controls.
Huawei Ascend Program:
- Huawei has unveiled a three-year chip roadmap: Ascend 950 PR (Q1 2026), 950 DT (late 2026), 960 (late 2027), 970 (late 2028) — targeting annual release cadence and doubling compute per generation.
- Plans call for production of approximately 600,000 Ascend 910C chips in 2026, roughly double 2025 output, with total Ascend die production scaling to as many as 1.6 million units in 2026.
- Huawei's Atlas 950 supernode — housing 8,192 Ascend chips — is planned for Q4 2026. Huawei claims the CloudMatrix 384 configuration outperforms the NVIDIA GB200 NVL144.
- China's domestic hyperscalers (Baidu, Alibaba, Tencent, ByteDance) have redesigned server halls for Ascend chips, which now power nearly half of large-language-model training tasks within China.
- Key limitation: Huawei's chips are manufactured by SMIC at 7nm-equivalent node (not 4nm or below). Performance-per-watt lags NVIDIA's TSMC-manufactured B200 significantly, requiring larger cluster sizes to achieve comparable training throughput.
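The performance-per-watt gap noted above translates directly into cluster size and facility power. A back-of-envelope sketch, using entirely hypothetical per-chip figures (not measured Ascend or B200 specs) to show the scaling relationship:

```python
# Back-of-envelope cluster sizing for equal training throughput.
# Per-chip numbers below are hypothetical placeholders, not measured
# specs; the point is the scaling: a per-chip performance and perf/W
# deficit multiplies both chip count and facility power for the same
# effective throughput.

def cluster_for_throughput(target_pflops: float, chip_pflops: float, chip_kw: float):
    """Chips and facility power (MW) needed to hit a target throughput."""
    chips = target_pflops / chip_pflops
    power_mw = chips * chip_kw / 1000
    return chips, power_mw

TARGET = 10_000  # desired effective PFLOPS, hypothetical

# Hypothetical leading-edge chip: 2.0 PFLOPS at 1.0 kW.
a_chips, a_mw = cluster_for_throughput(TARGET, chip_pflops=2.0, chip_kw=1.0)
# Hypothetical trailing-node chip: 0.8 PFLOPS at 0.9 kW.
b_chips, b_mw = cluster_for_throughput(TARGET, chip_pflops=0.8, chip_kw=0.9)

print(f"A: {a_chips:,.0f} chips, {a_mw:.0f} MW")  # 5,000 chips, 5 MW
print(f"B: {b_chips:,.0f} chips, {b_mw:.0f} MW")  # 12,500 chips, 11 MW
```

Under these illustrative numbers, the trailing-node operator needs 2.5x the chips and more than twice the power for the same throughput — which is why China's build skews toward larger facilities and why domestic power abundance partially offsets the node disadvantage.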
Scale of Chinese AI datacenter build: China's AI-optimized datacenter market is growing at 20%+ annually, funded by a combination of government policy (the "AI+ Action Plan"), SOE capital, and private hyperscaler investment. Beijing, Shanghai, Shenzhen, and Chengdu are the primary build centers.
7.4 Middle East — Sovereign AI Infrastructure
UAE — G42 and Stargate UAE:
- The first 200 MW phase of Stargate UAE (a 1 GW AI compute cluster) is slated for 2026, built by G42 and operated by OpenAI and Oracle.
- Microsoft and G42 announced a 200 MW capacity addition in November 2025, part of a broader Microsoft investment commitment exceeding $15 billion in the UAE.
- G42's governance reforms (partial sale to Microsoft, removal of Huawei equipment, US national security compliance framework) have positioned the UAE as the only Middle East jurisdiction with effective Tier 1-equivalent access to US AI chips.
Saudi Arabia — HUMAIN / PIF:
- Saudi Arabia's Public Investment Fund (PIF) AI subsidiary HUMAIN is building datacenters in Riyadh and Dammam, targeting 2026 operations.
- Google Cloud announced a $10 billion partnership with PIF/HUMAIN in May 2025 for a Saudi AI hub.
- The broader regional datacenter colocation market is forecast to receive approximately $33.79 billion in investment from 2025 to 2030.
Risk note: As of March 2026, commentary (Digitimes, March 23, 2026) suggests that Trump administration Middle East AI datacenter investment commitments are threatened by elevated Iran conflict risk. This is a live geopolitical variable that could not be resolved before this report's publication.
India:
- OpenAI announced plans for a 1 GW datacenter campus in India.
- Policy reforms are accelerating foreign datacenter investment, with Mumbai, Hyderabad, and Chennai as the primary hubs.
- India's power grid reliability remains a concern for high-availability AI workloads; most hyperscale designs include significant on-site backup and solar generation.
8. Company-by-Company Profiles
8A. Hyperscalers
Microsoft (MSFT)
- 2025 CapEx: ~$80B
- 2026 Guidance: $120B+
- AI Datacenter Position: Among the most expansive hyperscale build programs globally. Five active continental programs (US, Europe, Asia, Middle East, Latin America).
- OpenAI relationship: Restructured multiyear compute agreement. Microsoft retains right of first refusal on new OpenAI compute demand but is no longer the sole provider. The restructuring freed Microsoft from being locked into OpenAI's training workload projections and freed OpenAI to diversify cloud suppliers (a better outcome for both parties than the original structure).
- LOI Cancellations: ~2 GW of non-binding pre-lease agreements walked back in early 2025. ~1.5 GW of self-build projects frozen near-term. Binding contracted pipeline (~5 GW) intact.
- Nuclear: Three Mile Island PPA (835 MW, 20 years, ~$16B) — restart targeted 2027.
- Custom silicon: Maia 200 production delayed to mid-2026.
- Liquid cooling: Full liquid-cooling mandate across new AI build.
- Geographic focus: Northern Virginia (building beyond Ashburn corridor), Iowa, Texas, Ireland, Poland, UK, UAE.
- Key risk: OpenAI relationship remains central to demand thesis. Any material deterioration in OpenAI's commercial trajectory reduces anchor tenant certainty.
Alphabet / Google (GOOGL)
- 2025 CapEx: ~$91–$93B (guidance revised upward three times during 2025)
- 2026 Guidance: ~$175–$185B
- AI Datacenter Position: The most technically sophisticated hyperscale AI infrastructure operator by multiple metrics. TPU-based compute is deeply integrated into Google's AI model development, inference serving, and external cloud products.
- TPU v7 (Ironwood): General availability since mid-2025. 10x performance improvement over v5p. Now available externally on Google Cloud.
- Meta TPU Deal: Meta is reportedly in advanced negotiations with Google for a multibillion-dollar TPU cloud deployment starting mid-2026 — which would represent the first major external customer adoption of Google TPUs at hyperscale.
- Nuclear: Kairos Power SMR fleet (500 MW target, 2030+) — first US corporate SMR fleet contract.
- Geographic expansion: US (Iowa, South Carolina, Oklahoma, Widnes UK, Singapore, Tokyo), Europe (Poland, Denmark).
- Key risk: Capex revision frequency in 2025 suggests planning visibility challenges. Antitrust exposure (ongoing DOJ search monopoly case) creates headline risk if adverse ruling affects Google's advertising revenue base.
Amazon / AWS (AMZN)
- 2025 CapEx: ~$125B
- 2026 Guidance: ~$200B
- AI Datacenter Position: Largest single-company cloud operator globally. AWS dominates cloud infrastructure market share (31% as of 2025). Amazon's datacenter build is diversified across training (Trainium 2), inference (Inferentia 3, Graviton 4), and GPU hosting (NVIDIA H200/B200 clusters).
- Nuclear portfolio: Largest corporate nuclear energy buyer. Susquehanna PPA (1.92 GW), Energy Northwest (960 MW), Dominion Energy Virginia (300+ MW). $500M SMR investment program.
- Geographic distribution: Widest of any hyperscaler. Strong US, Europe, and Asia build; significant new investments in India, Middle East, and Australia.
- Custom silicon: Trainium 2 in production but external adoption limited. Internal use case is primarily large-scale recommendation and fine-tuning workloads.
- Key risk: $200B 2026 capex is the most aggressive absolute spend of any company on earth. Amazon's retail and advertising businesses provide revenue breadth to fund this, but the debt market implications are significant. ROI horizon for AWS AI infrastructure is longer than its historical cloud build.
Meta Platforms (META)
- 2025 CapEx: ~$65–$72B
- 2026 Guidance: ~$115–$135B
- AI Datacenter Position: Meta's "Prometheus" AI data center program is the most aggressive single-company AI infrastructure commitment relative to existing business size. Llama open-source models reduce Meta's dependence on third-party AI providers while requiring enormous internal training compute.
- Nuclear — "Prometheus Program": 6.6 GW nuclear procurement announced 2026. Constellation Clinton plant PPA (1,100 MW, 2027). Multiple additional deals in negotiation.
- TPU discussion: Advanced talks with Google to lease TPU clusters starting mid-2026 — a significant signal about limitations of Meta's internal compute buildout relative to its AI ambitions.
- Private credit: Meta and Blue Owl Capital JV ($27B) — landmark in datacenter infrastructure private credit.
- Geographic focus: Texas, Iowa, New Mexico, Georgia, Europe (Denmark, UK).
- Key risk: Widest capex guidance range ($115–135B) among hyperscalers reflects genuine internal uncertainty. Meta's AI revenue model (primarily ad targeting enhancement) has lower direct revenue ceiling than cloud API monetization strategies.
Oracle (ORCL)
- 2025 CapEx: ~$20B
- 2026 Guidance: ~$50B
- AI Datacenter Position: Oracle has pivoted aggressively from legacy enterprise software to AI cloud infrastructure. Its AI cloud contracts are substantial: the July 2025 agreement to develop 4.5 GW of datacenter capacity directly for OpenAI is the single largest hyperscaler cloud contract by power committed.
- Stargate complications: The Texas Stargate expansion failure (see Section 2.5) has raised questions about Oracle's execution reliability and its ability to manage relationships with demanding AI lab customers.
- Reliability concerns: Reports from Tom's Hardware (March 2026) indicate Oracle has struggled with infrastructure reliability issues at existing Stargate-related facilities, cited as a contributing factor in the OpenAI decision to pause expansion plans.
- Geographic focus: US (Texas, Nashville, Phoenix), Europe (UK, Germany), UAE (G42 partnership), India.
- Key risk: Oracle's cloud market share (approximately 4–5% of global cloud) means it is dependent on a small number of very large AI customers. OpenAI concentration risk is acute. If OpenAI migrates compute to competing clouds or builds own infrastructure, Oracle's revenue growth thesis deteriorates rapidly.
8B. Specialist Infrastructure Operators
Equinix (EQIX)
- CapEx guidance: $4–5B per year through 2029
- Strategy: "Build Bolder" — 58 major projects underway globally, including 12 xScale hyperscale campuses. Ambition to double capacity by end of 2029 (more capacity in 5 years than in prior 27 years combined).
- Strengths: Unmatched global colocation network with deep network interconnect density. xScale partnership model with Singaporean sovereign wealth fund GIC reduces balance sheet intensity for hyperscale builds.
- Concern: xScale model requires hyperscaler pre-leases to justify construction. If pre-leasing demand softens (as Microsoft's LOI cancellations demonstrated is possible), build economics deteriorate. Single-tenant concentration risk in xScale campuses.
Digital Realty (DLR)
- Strategy: Multi-phase campus developments (Dallas, Tokyo, Frankfurt — each 250 MW+ potential). Joint ventures with infrastructure funds (including a JV with Brookfield in Europe) reduce direct balance sheet exposure.
- AI positioning: AI-dedicated campuses with liquid-cooling-ready infrastructure. Actively pursuing hyperscaler anchor tenants.
- Concern: DLR has higher leverage than Equinix and a more complex portfolio of legacy assets and new AI builds. The transition to liquid-cooled, AI-optimized designs requires meaningful capital reinvestment across the portfolio.
Iron Mountain (IRM)
- Build target: 125 MW of leasing in 2025.
- AI positioning: Expanding via joint ventures. IRM's core strength is records management and established enterprise relationships — a useful tenant acquisition channel for adjacent datacenter leasing.
- Concern: Smaller scale and higher leverage relative to Equinix and DLR. Dependent on successfully converting records management relationships into datacenter clients, which requires a materially different sales and delivery capability.
NTT Global Data Centers
- March 2026: Announced commitment to double total global datacenter capacity.
- Significant presence in Japan, US, and EMEA. Japan market is experiencing strong AI infrastructure demand driven by SoftBank and domestic hyperscaler buildouts.
8C. Neocloud and GPU-Cloud Challengers
CoreWeave (CRWV)
- IPO: March 28, 2025, Nasdaq, $40.00 per share.
- Peak: ~$187 per share (mid-2025, approximately 367% above IPO price).
- Current price (Feb–Mar 2026): ~$89, approximately 123% above IPO but 51% below all-time high.
- Revenue: Guided full-year 2025 at $4.9–$5.1B (300% YoY growth). Analyst consensus for 2026: ~$12B.
- Revenue backlog: $55.6B net ($66.8B gross). OpenAI (~$22.4B) and Meta (~$14.2B) together account for the majority of the gross backlog.
- Active power capacity: ~850 MW as of Q4 2025, with total contracted power reaching ~3.1 GW.
- Debt structure: $1.75B in 9.000% Senior Notes (due 2031, July 2025); $2.25B in 1.75% convertible notes (due 2031, December 2025); total equity and debt raised exceeds $12.7B.
- Net losses: $1.167B FY2025.
- NVIDIA investment: $2.0B private placement at $87.20 per share (January 2026) — a significant endorsement from CoreWeave's primary hardware supplier.
- Business model risk: The "GPU debt wall" concern is substantive. CoreWeave borrows to buy NVIDIA GPUs, leases them to AI companies, and uses the lease payments to service the debt. The model requires sustained utilization, stable GPU pricing, and renewal of customer contracts. OpenAI and Meta together represent roughly 55% of backlog — extreme customer concentration for a public company.
- Crusoe Energy connection: Meta is reportedly in discussions to lease the Abilene, Texas Stargate expansion site from Crusoe — a deal that would make Crusoe a more direct CoreWeave competitor in hyperscale AI GPU hosting.
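The "GPU debt wall" arithmetic described in the business model risk bullet can be made concrete with a toy model. Every figure below is a hypothetical round number for illustration — not CoreWeave's actual fleet size, pricing, or borrowing terms — and the point is the sensitivity of debt coverage to utilization:

```python
# Toy model of the debt-financed GPU leasing loop described above.
# Every number here is a hypothetical round figure for illustration,
# not CoreWeave's actual fleet economics or borrowing terms.

HOURS_PER_YEAR = 8760

def annual_lease_revenue(gpus: int, price_per_gpu_hour: float, utilization: float) -> float:
    return gpus * price_per_gpu_hour * HOURS_PER_YEAR * utilization

def annual_debt_service(principal: float, rate: float) -> float:
    # Interest-only, for simplicity; real structures amortize.
    return principal * rate

# Hypothetical: 10,000 GPUs financed with $400M of 9% debt,
# leased at $2.50 per GPU-hour.
debt_cost = annual_debt_service(principal=400e6, rate=0.09)  # $36M/yr

for util in (0.90, 0.75, 0.60):
    rev = annual_lease_revenue(10_000, 2.50, util)
    print(f"utilization {util:.0%}: revenue ${rev/1e6:,.0f}M, interest coverage {rev/debt_cost:.1f}x")
```

Coverage looks comfortable on interest alone at every utilization level shown, but power, opex, and rapid GPU depreciation all come out of the same revenue line — which is why the model's fragility shows up in utilization, hourly pricing, and contract renewal rather than in headline interest coverage.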
Crusoe Energy
- Business model: Energy-first AI infrastructure. Originally captured flared natural gas for behind-the-meter power; now expanding to renewables, stranded grid capacity, and gas-turbine colocation.
- Abilene, TX campus: 1.2 GW planned capacity (Phase 2, 8 buildings). Among the largest AI campuses in North America. Built as the underlying infrastructure for the Stargate OpenAI deployment.
- Funding: $600M Series D (December 2024, Founders Fund); multibillion-dollar project finance.
- Meta opportunity: With Oracle/OpenAI abandoning the Abilene expansion, Meta is reportedly interested in leasing the available site from Crusoe — transforming Crusoe from an OpenAI-adjacent developer to a major direct hyperscaler partner.
- Competitive position: Crusoe's differentiation (energy sourcing expertise) is increasingly a first-mover advantage as power cost and availability become the dominant competitive variable in AI infrastructure.
Lambda Labs
- GPU cloud operator targeting AI researchers and smaller AI companies priced out of CoreWeave's enterprise minimum commitments.
- Positioned in the on-demand and short-term reservation segments of the GPU cloud market.
- Has not achieved the scale or backlog visibility of CoreWeave but benefits from a lower-overhead cost structure and lower minimum commitment requirements.
Vast.ai
- Distributed GPU marketplace model — aggregating spare GPU capacity from institutional operators and selling access to developers.
- Serves the spot-pricing and experimental training segment of the market.
- Faces existential model risk if major GPU owners (CoreWeave, Crusoe, hyperscalers) deepen their own marketplace and scheduling software, reducing available inventory.
9. Key Risk Scenarios
| Scenario | Probability | Key Trigger | Infrastructure Sector Impact |
| --- | --- | --- | --- |
| Base Case — Controlled Build | 40% | AI revenue compounds at 50–80% YoY through 2027; utilization remains elevated; capex ramps continue at projected pace | REITs and neoclouds perform; hyperscaler margins compress modestly; power bottleneck remains most binding constraint |
| Bull Case — AI Revenue Surge | 20% | Killer enterprise AI application drives step-change in monetization; inference demand exceeds supply; occupancy stays >95% through 2028 | Capacity shortage persists; all infrastructure assets appreciate; CoreWeave backlog converts at full value; supply premium expands |
| Goldman Oversupply Scenario | 25% | 2027 capacity wave lands as AI revenue growth disappoints; efficiency gains (DeepSeek-type) reduce GPUs per query; utilization falls to 70–75% | Pre-leased but underutilized facilities; lease repricing on renewal; REIT cash flows under pressure; CoreWeave debt service risk increases; private credit spreads widen 200–300 bps |
| Hard Infrastructure Crisis | 10% | Major grid failure (PJM cascading event, multi-hyperscaler outage) triggers regulatory shutdown of new large load interconnections | Construction moratoria; project deferrals; increased insurance costs; government intervention in datacenter siting; nuclear restart acceleration funded by policy |
| Geopolitical Fracture | 5% | US-China chip war escalates; Middle East conflict disrupts UAE/Saudi investments; allied-country export control defection | Repatriation of AI compute to US soil; Middle East pipeline freezes; Huawei Ascend fills Chinese market vacuum; ~15% of global planned capacity stranded |
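The scenario probabilities above sum to 100% and can be collapsed into a single probability-weighted view. The impact scores in the sketch below are arbitrary illustrative values we assign for demonstration (on a -2 severe negative to +2 strongly positive scale for infrastructure assets), not figures from the scenario table:

```python
# Probability-weighted view of the scenario table above.
# Impact scores are arbitrary illustrative values (-2 severe negative
# to +2 strongly positive for infrastructure assets), assigned here
# for demonstration only — they are not from the report.

scenarios = {
    "Base Case — Controlled Build": (0.40,  1.0),
    "Bull Case — AI Revenue Surge": (0.20,  2.0),
    "Goldman Oversupply Scenario":  (0.25, -1.0),
    "Hard Infrastructure Crisis":   (0.10, -2.0),
    "Geopolitical Fracture":        (0.05, -2.0),
}

total_prob = sum(p for p, _ in scenarios.values())
assert abs(total_prob - 1.0) < 1e-9  # probabilities must sum to 100%

expected_impact = sum(p * score for p, score in scenarios.values())
print(f"probability-weighted impact: {expected_impact:+.2f}")  # +0.25
```

Under these illustrative scores, the weighted view is mildly positive, with 60% of the probability mass in favorable scenarios — but the framework's value is in making the downside weighting explicit, not in the point estimate.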
10. Positioning — What to Own, What to Avoid
This section is a directional analytical framework, not investment advice. See Disclaimer.
10.1 Areas of Structural Strength
Power and Energy Infrastructure
The most unambiguously scarce resource in the AI infrastructure cycle is reliable, high-quality power. Companies controlling power assets — nuclear generation (Constellation Energy, Talen Energy), high-voltage transmission equipment manufacturers (Eaton, ABB, Schneider Electric), transformer manufacturers, and grid modernization technology providers — are in structural demand with significant pricing power. Lead times of 12–18 months for key power equipment mean the bottleneck is durable.
Liquid Cooling and Thermal Management
The mandatory transition to liquid cooling across all AI-optimized facilities is a multi-year, non-discretionary spend. Companies in direct-to-chip cooling (Vertiv, Stulz, Rittal), CDU manufacturers, and coolant distribution specialists are well-positioned. The market is expected to more than double, from $10.8B in 2025 to over $25B by 2031.
Fiber and Connectivity Infrastructure
The inter-datacenter connectivity buildout (high-bandwidth dark fiber networks linking AI campus clusters) is accelerating alongside compute density. Operators of fiber assets in AI campus corridors (Zayo, Lumen long-haul segments, telco tower operators) benefit.
Equinix (Qualified Positive)
Equinix's market position — the global colocation and interconnect standard — provides durable competitive advantages. The xScale concentration risk is real but partially mitigated by the JV structure. At the right entry price, EQIX represents a defensible way to own the infrastructure layer without direct hyperscaler earnings concentration.
10.2 Areas Requiring Caution
CoreWeave (CRWV) — High Risk, High Reward
The business model is valid but fragile. 9% senior notes on $1.75B of debt are expensive capital for a company with $1.17B net losses. Customer concentration (two customers = majority of backlog) is acute. The NVIDIA $2B private placement at $87.20/share provides floor support and strategic endorsement. At current prices (~$89), the stock is essentially pricing in perfect execution of a $12B revenue 2026 target. Any miss on utilization or customer renewal creates significant downside. Position sizing should reflect this asymmetry.
Oracle (ORCL) — Execution Risk
Oracle's pivot to AI cloud is strategically sound but the execution track record at scale (Stargate reliability concerns, OpenAI expansion collapse) introduces meaningful uncertainty. At 50x forward earnings (consensus pre-collapse), the stock was pricing in a datacenter CAGR that assumed OpenAI anchoring. Post-collapse repricing is warranted.
Speculative Development Plays
Third-party datacenter developers without pre-signed long-term hyperscaler leases are in a materially more exposed position than 18 months ago. The Microsoft LOI cancellations demonstrated that even near-term commitments are revocable if the hyperscaler's demand picture shifts. Spec development equity or mezzanine exposure should be underweighted.
Air-Cooling Legacy Operators
Existing datacenters built for air cooling at 5–15 kW/rack densities face significant stranded asset risk as AI workloads migrate to liquid-cooled, high-density purpose-built facilities. Older colocation assets in secondary markets with high air-cooled density are likely to see occupancy pressure as AI-specific demand is absorbed by purpose-built AI campuses.
11. Disclaimer
This report has been prepared by the Infrastructure & Technology Division of PRZC Research for internal distribution only. It does not constitute investment advice, a solicitation, or an offer to buy or sell any security or financial instrument. The information and opinions contained herein are based on sources believed to be reliable as of March 2026, including publicly available corporate disclosures, third-party research, and news sources; however, PRZC Research makes no representation or warranty, express or implied, as to the accuracy, completeness, or timeliness of such information.
Forward-looking statements, projections, scenario analyses, and estimates reflect PRZC Research's current views and assumptions. They are subject to significant uncertainty and may differ materially from actual outcomes. The AI datacenter sector is characterized by rapid technological change, evolving regulatory frameworks, and capital allocation decisions that are subject to reversal on short timescales. Readers are cautioned not to place undue reliance on any single projection or scenario.
Recipients of this report are responsible for independently evaluating any investment, financing, or strategic decision referenced herein. PRZC Research, its principals, and its analysts may hold positions in securities or instruments referenced in this report. This report may not be reproduced, redistributed, or forwarded to any third party without the prior written consent of PRZC Research.