No-AGI Thesis Update: Six-Month Scorecard and Verdict

Format C: Six-Month Scorecard | Original Reports: October 2025
PRZC Research | March 25, 2026 | Macro AI Thesis Review | Report ID: T35-C | Review Period: October 2025 – March 2026

Preface: How to Read This Scorecard

The original thesis had three interlocking claims:

  1. AGI will not materialize by 2030 — the frontier is further than US labs admit
  2. The AI investment bubble will deflate — capex/revenue mismatch will correct
  3. China's pragmatic AI approach outperforms US AGI mania — efficiency over brute force

Each claim is scored independently on a scale of: CONFIRMED / PARTIALLY CONFIRMED / TOO EARLY / PARTIALLY DISCONFIRMED / DISCONFIRMED.

The scorecard leads with disconfirming evidence, per the CBOM principle: we do not start from what we got right.


Disconfirming Evidence First

Before the thesis summary, here is what happened in the six months that cuts against the original argument:

1. Model capability advancement was faster than the thesis implied.
Between October 2025 and March 2026, OpenAI released GPT-5, GPT-5.2, GPT-5.3, and GPT-5.4 in rapid succession. These models demonstrated measurable gains in agentic coding, visual reasoning, graduate-level scientific problem solving, and autonomous tool use. The deprecation of GPT-5.1 in March 2026 (only months after release) reflects a cadence of improvement that the No-AGI thesis framed as slowing — it has not slowed.

2. The AI investment bubble has not deflated. It has accelerated.
The four major hyperscalers (Microsoft, Alphabet, Amazon, Meta) have committed $660–$690 billion in combined capex for 2026 alone — a 36% increase over 2025 levels. Approximately 75% ($450–$500 billion) is directly AI infrastructure. This is not the behavior of a bubble about to pop. It is the behavior of a bubble expanding.
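
As a consistency check on the figures above, the implied 2025 baseline and the AI-specific share can be recomputed from the stated ranges. This is arithmetic on the report's own numbers, not external data:

```python
# Sanity-check the hyperscaler capex figures quoted above.
capex_2026_low, capex_2026_high = 660e9, 690e9  # combined 2026 capex, USD
growth = 0.36  # stated YoY increase over 2025

# Implied 2025 baseline: 2026 capex / (1 + growth)
base_low = capex_2026_low / (1 + growth)
base_high = capex_2026_high / (1 + growth)
print(f"Implied 2025 capex: ${base_low/1e9:.0f}B-${base_high/1e9:.0f}B")

# AI-specific share implied by the $450-500B figure
share_low = 450e9 / capex_2026_high   # most conservative pairing
share_high = 500e9 / capex_2026_low
print(f"Implied AI share: {share_low:.0%}-{share_high:.0%}")
```

Note that the dollar range implies an AI share of roughly 65-76%, so the "approximately 75%" figure sits at the top of that band.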

3. Major AI labs doubled down on near-term AGI timelines.
Sam Altman (OpenAI): "We are now confident we know how to build AGI." Dario Amodei (Anthropic): Expected "powerful AI" exceeding Nobel Prize-level capability across disciplines "in late 2026 or early 2027." Demis Hassabis (Google DeepMind): AGI "a handful of years away." Elon Musk: "By year-end 2026." These statements are more aggressive than those made at the time of our original report, not more conservative.

4. Agentic AI is reaching early production deployments.
While the production gap remains wide (only 11% of enterprises were using agentic AI in production as of early 2026), specific verticals are reporting legitimate productivity gains: banking KYC/AML workflows (200-2,000% productivity improvements), manufacturing quality control (PepsiCo: 20% throughput increase), and software development (multiple enterprises reporting that AI now generates 30-50% of new code). These are not vaporware numbers. They represent early but real value.

5. The stock market has not corrected significantly on AI.
Nvidia peaked at $207/share (an all-time-high market cap of $5.04 trillion) on October 29, 2025, just weeks after our original report. It pulled back to ~$175 by March 23, 2026, a ~15% decline from peak: volatility, not collapse. AI-related equities broadly did not experience the bubble-burst correction our thesis implied was overdue.
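
The peak-to-current move quoted above is straightforward to verify from the report's own price points (the ~$175 figure is approximate, as stated):

```python
# Peak-to-current drawdown for Nvidia, using the report's own price points.
peak_price = 207.0      # Oct 29, 2025 high, USD/share
current_price = 175.0   # approximate price, Mar 23, 2026

drawdown = (peak_price - current_price) / peak_price
print(f"Drawdown from peak: {drawdown:.1%}")  # ~15.5%
```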


Claim-by-Claim Scorecard

Claim 1: AGI Will Not Materialize by 2030

Verdict: PARTIALLY CONFIRMED — but with important nuance
Score: 6/10 for the thesis

What supports the claim:

The definitional chaos around "AGI" has, if anything, grown worse over the past six months. OpenAI, Anthropic, and Google DeepMind all use different definitions. Anthropic explicitly avoids the term, preferring "powerful AI." This is not a sign that the finish line is close — it is a sign that the concept remains slippery enough to claim credit at any point. Stanford HAI researchers and Andrej Karpathy (former OpenAI) continue to assert that current architectures lack fundamental capabilities required for genuine AGI: robust transfer learning, self-motivated goal formation, genuine understanding vs. statistical pattern matching.

The capability gains observed in GPT-5.x are impressive but are extensions of the same scaling + RLHF paradigm. There has been no fundamental architectural breakthrough announced in the review period. The jump from GPT-5 to GPT-5.4 in six months represents iteration, not revolution.

What cuts against the claim:

The rate of iteration is empirically fast. The fact that GPT-5.1 was deprecated as obsolete within months of launch means we cannot confidently state the frontier is not moving. If this pace continues, the functional capabilities of AI systems in 2028-2029 may render the "no AGI by 2030" claim technically correct but practically irrelevant — systems may perform AGI-equivalent tasks in narrow-to-broad domains without meeting any agreed definition.

Revised probability estimate: The thesis is likely correct on strict AGI-by-2030 definitions. It may be wrong on what practically matters for markets and geopolitics if "powerful AI" milestone systems arrive in 2027.

Claim 2: The AI Investment Bubble Will Deflate

Verdict: DISCONFIRMED in the near term / TOO EARLY for the structural claim
Score: 3/10 for the thesis (near-term)

What supports the claim:

The enterprise ROI data is alarming from a bull perspective. An August 2025 MIT study found 95% of GenAI pilots fail to achieve business value. Only 5% of enterprises report significant EBIT impact from AI investments. The ratio of hyperscaler capex ($400B+ in 2025) to actual enterprise AI revenue generated (~$100B) reveals a massive infrastructure-to-revenue gap. Amazon's 2026 capex plans ($200B) are projected to push it into negative free cash flow of approximately $17 billion — something no prior Amazon management team would have tolerated. This is objectively bubble-like capital allocation in search of returns that have not yet materialized.
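
The infrastructure-to-revenue mismatch described above can be expressed as a simple multiple. The catch-up calculation below is a hypothetical illustration under an assumed revenue growth rate, not a forecast:

```python
# Ratio of hyperscaler AI capex to enterprise AI revenue (report's 2025 figures).
capex_2025 = 400e9       # hyperscaler capex, USD (report: "$400B+")
ai_revenue_2025 = 100e9  # enterprise AI revenue, USD (report: "~$100B")

gap_multiple = capex_2025 / ai_revenue_2025
print(f"Capex-to-revenue multiple: {gap_multiple:.0f}x")  # 4x

# Hypothetical: years for revenue to reach the current capex level at a given CAGR
def years_to_match(revenue, target, cagr):
    years = 0
    while revenue < target:
        revenue *= 1 + cagr
        years += 1
    return years

print(years_to_match(ai_revenue_2025, capex_2025, cagr=0.40))  # 5 years at a 40% CAGR
```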

CFO sentiment is shifting: only 26.7% of CFOs plan to raise GenAI budgets in the next 12 months, down from 53.3% a year prior. Budget consolidation is happening — enterprises are spending more through fewer vendors on proven use cases, cutting pilots.

What cuts against the claim:

The bubble has not deflated. Capital allocation is accelerating, not contracting. Nvidia reported Q3 FY2026 (quarter ending October 2025) revenue of $57 billion — up 62% YoY. Google's cloud backlog surged 55% sequentially to over $240 billion. These are demand signals, not overstock signals. The hyperscalers are spending into growing demand for cloud AI inference, not spending into air.

The thesis confused "bubble characteristics" with "imminent bubble burst." A bubble can display irrational characteristics for years before correcting. The Nasdaq ran from 1996 to 2000 with mounting fundamental disconnects. We may be in the 1997-1998 stage of the current AI cycle, not the 2000 stage.

What changed from our original framing: We underestimated demand-pull from agentic AI use cases. The shift from "GenAI chatbots" (low ROI, high hype) to "AI agents handling specific enterprise workflows" (demonstrable ROI in narrow cases) is real and is sustaining capex rationale at the hyperscaler level.

Claim 3: China's Pragmatic AI Approach Outperforms US AGI Mania

Verdict: PARTIALLY CONFIRMED — DeepSeek moment validated; R2 delay complicates the narrative
Score: 5/10 for the thesis

What supports the claim:

The DeepSeek R1 release (January 2025, predating our original reports but still central during the review period) remains the single most important validation of this thesis. A Chinese startup achieved frontier-model performance at $5.576 million in reported training costs, against $40-60 million US equivalents, on export-restricted Nvidia H800 chips. The single-day Nvidia market cap loss of ~$600 billion and Nasdaq loss of $1 trillion on January 27, 2025 was the market's acknowledgment that the "moat is compute" premise underpinning US AI valuations was fragile.
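
The cost asymmetry in the R1 story reduces to a training-cost ratio, using the figures as quoted above:

```python
# Training-cost ratio: DeepSeek R1 vs. reported US frontier-model equivalents.
deepseek_cost = 5.576e6                  # USD, reported R1 training cost
us_cost_low, us_cost_high = 40e6, 60e6   # USD, report's US equivalent range

ratio_low = us_cost_low / deepseek_cost
ratio_high = us_cost_high / deepseek_cost
print(f"US cost multiple: {ratio_low:.1f}x-{ratio_high:.1f}x")  # 7.2x-10.8x
```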

The broader Chinese AI ecosystem remained active. Baidu launched X1 (competing with DeepSeek on reasoning), Ernie 4.5 (claiming to surpass GPT-4.5 on multiple benchmarks), and announced open-sourcing of Ernie models. Alibaba continued releasing Qwen model updates. ByteDance prepared model launches for February 2026. The ecosystem is competitive, not dormant.

The thesis's core argument — that China's export-restricted chip environment forced algorithmic efficiency innovations that now undermine the US brute-force scaling advantage — remains structurally sound.

What cuts against the claim:

DeepSeek R2, the anticipated successor to R1, has not shipped as of March 2026. Reports from August 2025 cited delays from chip problems (DeepSeek was pushed toward Huawei Ascend chips for training, which had stability and connectivity issues; the company reverted to Nvidia chips for training and Huawei for inference). This delay suggests the chip export controls are having some friction effect, even if not the decisive effect US policymakers assumed.

More importantly, "China's pragmatic AI approach outperforms" is too broad a claim. China dominates on efficiency-per-dollar. The US still leads on absolute frontier capabilities, agentic systems, and integration with enterprise software ecosystems. Adobe acquired Semrush, not a Chinese equivalent. The enterprise AI tooling layer remains largely US-owned.

The "America's arrogance" framing has been partially vindicated: the DeepSeek shock demonstrably sobered US AI discourse and contributed to early-2025 scrutiny of whether the scaling hypothesis remained sound. That influence, China correcting US assumptions, is itself a form of geopolitical impact the thesis anticipated.


Aggregate Scorecard

Claim | Verdict | Score
AGI will not materialize by 2030 | Partially Confirmed | 6/10
AI investment bubble will deflate | Disconfirmed near-term / Too Early structurally | 3/10
China's pragmatic approach outperforms | Partially Confirmed | 5/10
Overall thesis | Mixed: 1 strong hit, 1 near miss, 1 premature | 4.7/10
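
The overall 4.7/10 is the unweighted mean of the three claim scores; a minimal sketch makes the aggregation explicit:

```python
# Aggregate thesis score as the unweighted mean of the per-claim scores above.
scores = {
    "AGI not by 2030": 6,
    "Bubble will deflate": 3,
    "China pragmatism outperforms": 5,
}
overall = sum(scores.values()) / len(scores)
print(f"Overall thesis score: {overall:.1f}/10")  # 4.7/10
```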

Revised Thesis: Where We Stand in March 2026

The "No-AGI / Bubble Burst" thesis had the right instinct about structural disconnects, but it got the timing wrong and gave insufficient weight to two forces:

Force 1: Iteration speed. AI capability improvements from October 2025 to March 2026 were faster than the thesis's implicit model. GPT-5 to GPT-5.4 in six months, with genuine agentic capability improvements, means the gap between "current AI" and "functionally AGI-equivalent for most tasks" may close faster than 2030, even without a breakthrough.

Force 2: Narrative resilience of capital allocation. The hyperscalers have structured AI capex as infrastructure builds, not product bets — more analogous to cloud buildout (2011-2018) than consumer internet speculation (1998-2000). Cloud buildout had a significant bubble period but the infrastructure was real and eventually generated enormous returns. The current AI infrastructure may follow the same arc, making a 2000-style collapse less likely than a 2001-2005-style period of moderated but continued investment.

What remains structurally valid from the original thesis:

  1. The enterprise ROI gap is real and will force a correction in AI spending patterns — probably through vendor consolidation (fewer contracts, larger) rather than outright collapse. This is the "quiet deflation" scenario, not the "crash" scenario.
  2. China's efficiency-focused AI development remains a durable competitive threat, particularly for countries and enterprises that cannot afford US-priced frontier AI infrastructure. DeepSeek's influence on global AI adoption is underappreciated in US-centric analysis.
  3. AGI definitional inflation — where labs declare "AGI achieved" on partial criteria — is a genuine risk that makes any single "AGI moment" announcement unreliable as an analytical event. The thesis to maintain: measure capabilities, not announcements.

What needs to be revised:

The bubble deflation timeline should be extended. A 2026-2027 "enterprise ROI reckoning" remains plausible — where the 95% of failing GenAI pilots produce a wave of budget cuts and writedowns — but this is more likely to be a correction within an expansion than an end to the cycle.


Forward Watch List (Issues to Track for T36)

  1. DeepSeek R2 / V4 actual release — if China ships a frontier reasoning model at $1-2M training cost, it will re-test the US scaling premium thesis with more force than R1
  2. Amazon free cash flow in Q1-Q2 2026 — first real test of whether $200B capex is sustainable
  3. Enterprise AI contract renewals, H1 2026 — the "budget consolidation" signal should appear in SaaS churn/expansion data
  4. OpenAI / Anthropic revenue — OpenAI reportedly targeting $12.7B ARR for 2025; Anthropic targeting $1B+ — these numbers need verification
  5. Any actual AGI milestone announcement — watch for definitional games; require operational task performance benchmarks, not lab claims

PRZC Research — Investment Analysis Division | T35-C | March 25, 2026
This document is for internal analytical purposes only.
