When Copilot Becomes Pilot

The Control Inversion, the Context Window Threshold, and the $6 Trillion Software Interface at Risk
PRZC Research  |  29 March 2026  |  AI Investment Analysis

Executive Summary

Every major AI product launched in the past three years carries the same implicit message: the AI assists, the human decides. Microsoft calls it Copilot. GitHub calls it Copilot. Google frames Gemini as an assistant. Apple calls it Apple Intelligence — intelligence in service of you. This naming discipline is not accidental. It is a defensive posture, coordinated across an industry that understood, before its users did, what the alternative framing implies.

The alternative framing is this: the AI flies the plane. You set the destination.

The copilot framing has been accurate — until recently. Alexa and Google Home attempted ambient intelligence a decade ago and failed not because the hardware was wrong but because the AI had no memory, no context, and no ability to reason across tasks. Every utterance was its own island. You were not setting a destination; you were issuing a command, and the system executed it literally or not at all. That is not a copilot. That is a voice-activated button.

What has changed is quantifiable. The context window — the volume of information an AI can hold and reason across simultaneously — has increased by roughly three orders of magnitude in five years. The qualitative consequence is a threshold crossing: AI can now hold enough of your world in working memory to be trusted with unsupervised, multi-step execution. The copilot framing was appropriate when the context window was measured in hundreds of tokens. It is increasingly strained when measured in hundreds of thousands. It will be untenable when persistent memory and ambient access extend effective context to the full span of a working life.

This report makes five arguments. First, the copilot naming is a deliberate incumbency defence, not a capability description. Second, the context window has already crossed the functional threshold for pilot-mode operation in constrained domains, and the general threshold is approaching. Third, what pilot-mode looks like is not a single form factor — wearable, ambient display, and OS-layer displacement all converge on the same architecture. Fourth, Microsoft's Copilot brand specifically will become a marker of the transition it missed. Fifth, the investment implications of the control inversion are asymmetrically negative for software-interface businesses whose valuations price in AI as tailwind rather than headwind.

Central Thesis

The copilot framing is a temporary defensive posture built on an AI capability constraint that no longer applies. As the context window extends and persistent ambient access becomes standard, the human role shifts from operator to destination-setter. The software-interface layer — $65B/year in M365 alone — is priced as an AI beneficiary. It is structurally an AI casualty.

I. The Deliberate Subordination: Why Everything Is Called a Copilot

When Microsoft launched Copilot for Microsoft 365 in November 2023 at $30 per user per month, the naming choice was already a tell. "Copilot" — not "Pilot," not "Agent," not "Operator." The second seat. The one who assists the one in command. A competent partner, useful under pressure, but explicitly not in charge.

The decision was made by people who understood what they were building. Satya Nadella's internal framing around that launch period consistently emphasised augmentation over replacement: AI makes you more productive, it does not replace you. This is not a philosophical position. It is a product positioning designed to avoid triggering the obvious question: if AI can do this autonomously, why am I paying for a seat licence?

The same discipline holds across the industry. GitHub Copilot — pair programmer, not programmer. Google Gemini — your personal AI, framed as belonging to you rather than operating independently. Apple Intelligence — intelligence serving your intentions, as Apple's marketing made explicit. Meta AI — assistant. Amazon Q — business assistant. Every product positions the AI below the human in the decision hierarchy, and no product launched by an incumbent with legacy software revenue has broken from this pattern.

The reason is structural. The incumbents are defending revenue models built on the assumption that humans interact with specific application interfaces. Microsoft's $65B+ annual M365 revenue depends on users opening Word, navigating Excel, scheduling in Outlook, collaborating in Teams. If AI executes those tasks without the user touching an interface, the interface licence becomes optional. The copilot framing is not a feature description; it is a revenue protection mechanism packaged as a product philosophy.

The Alexa Template

To understand why the copilot framing was briefly accurate and why it is now under pressure, the Alexa failure is instructive. Amazon launched the Echo in 2014 with a genuinely ambitious vision: ambient voice computing, always-on intelligence, the home as the new interface. The hardware worked. The microphone arrays were excellent. The cloud infrastructure was available. Amazon spent an estimated $10 billion developing the Alexa ecosystem over the following decade, acquired Ring and Blink, built Skills infrastructure, partnered with every major appliance manufacturer.

The product failed to reach its ambition for one reason: the AI had no memory and no reasoning capacity beyond single-command execution. Every utterance was contextless. "Alexa, play jazz" worked. "Alexa, play the album I was listening to last Tuesday while I was cooking" did not. "Alexa, add milk to my shopping list" worked. "Alexa, check if I have everything I need for tonight's dinner based on the recipe I saved last week and add what's missing" did not. The command precision required to use Alexa effectively was nearly as high as using a command-line interface — which meant the interface never scaled beyond a sophisticated button.

In late 2022 and 2023, Amazon cut roughly 10,000 jobs in rounds that fell heavily on its devices division — the largest wave of Alexa-related cuts since the product launched. The official framing was cost discipline. The actual problem was that after a decade of investment, Alexa's daily active usage remained dominated by timers, music playback, and smart home toggles — tasks that a $5 button could accomplish. The ambient intelligence vision never materialised because the AI could not hold enough context to make ambient intelligence meaningful.

The Alexa failure became the industry template for understanding what ambient AI could not yet do. Every product launched after 2020 — including Microsoft Copilot — was shaped by the implicit lesson: do not overpromise ambient autonomy because the AI is not capable of delivering it. The copilot framing is, in part, a memory of the Alexa lesson. It says: we learned not to claim more than the model can deliver.

That lesson is now expiring.

II. The Context Window Crossed the Threshold

The specific technical argument for the control inversion is not about model intelligence in the abstract. It is about context — the volume of information an AI can hold simultaneously in working memory and reason across without losing coherence. Context is what separates a command executor from a genuine agent. An AI with zero context executes instructions. An AI with sufficient context understands the situation, identifies relevant constraints, plans a route, and executes without requiring the human to manage each step.

| System | Effective Context | Multi-Step Reasoning | Ambient State Awareness |
|---|---|---|---|
| Amazon Alexa (2014–2023) | ~0 (per-utterance) | None | None |
| GPT-3 (2020) | ~2,000 tokens (~1,500 words) | Limited | None |
| GPT-4 (2023) | ~128,000 tokens (~96,000 words) | Strong | Possible with retrieval |
| Claude 3.5 Sonnet (2024) | ~200,000 tokens (~150,000 words) | Very strong | Strong with retrieval |
| Claude Sonnet 4.6 (2026) | 200,000+ tokens, improved retrieval | Agentic multi-step | Viable with persistent memory |

What does 200,000 tokens mean in practice? It is approximately your full email inbox for a week, your calendar for the next month, every document open on your computer, your last 50 conversations with colleagues, and the project brief you wrote in January — simultaneously in context. Not retrieved on demand. In context: the AI holds all of it and reasons across all of it in a single pass.
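A back-of-envelope token budget illustrates the claim. Every quantity below is a hypothetical assumption chosen for the sketch, not a measurement, but the orders of magnitude show that a week's working context plausibly fits inside a single 200,000-token window:

```python
# Back-of-envelope check: does a week of working context fit in 200k tokens?
# Every quantity here is an illustrative assumption, not measured data.
TOKENS_PER_WORD = 1.33  # common rough conversion for English prose

context_words = {
    "email_week":     300 * 100,   # ~300 emails x ~100 words each
    "calendar_month": 120 * 20,    # ~120 events x ~20-word descriptions
    "open_documents": 5 * 3_000,   # five working documents
    "recent_chats":   50 * 400,    # ~50 conversation threads
    "project_brief":  5_000,
}

total_words = sum(context_words.values())
total_tokens = int(total_words * TOKENS_PER_WORD)
print(f"{total_words:,} words ~ {total_tokens:,} tokens")  # well under 200,000
```

Even with generous assumptions, the total lands near 100,000 tokens — half the window, with room left for the AI's own planning and intermediate reasoning.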

This is the qualitative shift. With four thousand tokens, the AI knew what you said in the last three minutes. With two hundred thousand tokens, the AI knows what you were working on last Tuesday, that your Thursday meeting conflicts with your flight, that you prefer to write contracts in plain language rather than legalese, and that the last time you engaged this client the negotiation stalled on payment terms. That is not augmentation of human memory. That is a second operator who has read everything you have read and remembers all of it.

Claude Code as Proof of Concept

The most concrete demonstration that the pilot threshold has been crossed in at least one domain is Claude Code — Anthropic's terminal-based coding agent, and the tool used to research and structure this report. A single natural language instruction — "implement the authentication flow, write the tests, update the documentation, and check for security vulnerabilities" — produces twenty to forty discrete operations: file reads, edits, terminal commands, API calls, validation steps. The human set a destination. The AI flew the route.

This is not hypothetical. It is the product as it exists today, in the terminal. The constraint is not capability — it is interface. Claude Code operates in a text terminal because that is where system permissions and tool access were first made available to it. The same underlying model, given system-level permissions and ambient input, operates identically in any environment. The terminal is the prototype; the OS layer, the wearable, and the ambient display are the production deployments.

The Threshold Is Domain-Specific But Rapidly Generalising

Pilot-mode operation is already reliable in constrained professional domains: software engineering (Claude Code, GitHub Copilot Workspace), legal document drafting, financial analysis, research synthesis. These are high-context, high-precision domains where the AI has enough training data and the tasks are sufficiently structured that multi-step autonomous execution achieves production-quality results.

The generalisation to ambient personal computing — managing your calendar, your communications, your files, your home environment — requires two additional components that are not yet universally deployed: persistent memory (the AI retains state across sessions, not just within a session) and ambient access (the AI has read/write access to your environment continuously, not only when you explicitly invoke it). Both components exist in prototype. Neither is yet standard. The window between "prototype" and "standard" is shortening.
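The persistent-memory component is architecturally simple, which is part of why the prototype-to-standard window is short. The sketch below is purely illustrative — the file name, format, and function names are invented for the example, not any vendor's implementation — but it shows the essential property: state written in one session is available to the next.

```python
import json
from pathlib import Path

# Sketch of session-spanning memory: state survives process restarts by
# being persisted to disk, so each new session inherits the last one's context.
MEMORY_FILE = Path("agent_memory.json")  # hypothetical location

def load_memory() -> dict:
    """Restore prior state, or start empty on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"facts": [], "preferences": {}}

def remember(memory: dict, fact: str) -> None:
    """Append a fact and persist immediately, so nothing is lost on exit."""
    memory["facts"].append(fact)
    MEMORY_FILE.write_text(json.dumps(memory))

# Session 1: the agent learns something about the user.
m = load_memory()
remember(m, "user prefers plain-language contracts")

# Session 2 (conceptually a fresh process): the fact is still there.
m2 = load_memory()
print(m2["facts"])
```

The hard part is not the mechanism but the trust and consent architecture around it — which is precisely where the deployment constraint sits.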

The Threshold Argument

The copilot-to-pilot transition is not a future event. In software engineering, it has already occurred. In knowledge work more broadly, it is occurring. In ambient personal computing, the technical components are complete; deployment is the remaining constraint. The investment question is not whether the transition happens but how quickly it reprices the software-interface layer.

III. What Flying Looks Like: Three Vectors, One Architecture

The pilot shift does not arrive through a single form factor. Three distinct hardware vectors are converging on the same underlying architecture: the human sets destination, the AI executes route, the output appears wherever the human is. The convergence point is the same regardless of entry path.

Vector One: The Wearable (OpenAI / Jony Ive)

The most direct pilot-mode hardware is a device with no screen — or minimal screen — where voice is the primary input and intent is the entire interface. OpenAI's partnership with Jony Ive, known internally as project "io," is the most prominent public attempt at this form factor. Ive's history at Apple is relevant: he built the case that hardware simplicity and interface elegance are the same thing. Applied to AI, that principle leads to a device that removes the interface problem entirely by removing the interface.

The critical distinction from existing voice assistants is not hardware — it is context. A wearable connected to a pilot-grade AI holds your full working context: your calendar, your relationships, your in-progress projects, your preferences, your location, the ambient audio of your environment (with appropriate consent architecture). A request like "find me a flight to Madrid under £400 but check if the Thursday meeting can move first" is not a complex command requiring Alexa's precise syntax. It is a destination. The AI checks the calendar, identifies the conflict, emails the relevant party, monitors for a response, searches flight options against the constraint, and returns with a recommendation or executes the booking outright — depending on the autonomy level granted by the user.

This is not Siri. Siri required precise commands and surfaced options for the human to select. This requires only intent, and the execution is end-to-end. The human's role is to describe the destination and review the outcome. Everything between those two points is the AI's domain.

The wearable vector solves the input problem that has limited mobile computing since the smartphone. The touchscreen keyboard is a workaround — adequate but not efficient, fundamentally borrowed from desktop computing and shrunk. Voice with ambient context is not a workaround. It is the native input paradigm for a device that knows enough about your world to act on approximate instructions.

Vector Two: The Ambient Display

The second vector is the dissolution of the screen as a fixed, dedicated device. A flat television, a flexible LED panel adhered to a wall, a transparent display — any surface that can render output becomes the screen. The input is voice and gesture; the display is wherever you are; the AI holds the state across all of them.

This is what ambient computing advocates have been describing since Mark Weiser's 1991 paper at Xerox PARC. The technology to implement it — inexpensive large-format displays, mesh networking, always-on microphones, low-latency rendering — has been commodity for years. What was missing was the intelligence layer capable of managing a contextual, multi-surface, multi-session interaction without breaking. Alexa's failure was the proof of absence. The same hardware, with a pilot-grade AI behind it, is a different product category.

The implication for computing form factors is significant. The "desktop" as a concept — a box, a monitor, a fixed workstation — is a product of the input paradigm. You needed to be at the keyboard to operate the computer. If the input is ambient voice and the output is any available surface, the workstation dissolves. You are not at the computer; you are in an environment that is computing around you. The desktop is not replaced by another device. It is replaced by a room.

This reframes the smart home entirely. Amazon and Google built smart home products around command execution within fixed device categories (the speaker, the thermostat, the doorbell). The ambient display vector builds the smart home around continuous context and proactive intelligence: the system knows what you are working on, what you need next, and what the environment should be doing — and it manages all of it without requiring explicit commands for each action.

Vector Three: Claude OS on the Desktop

The third vector is the one detailed in the preceding report in this series: a Claude-native operating system layer, built on a Linux kernel, where Claude is the always-on system daemon with system-level permissions and the Model Context Protocol functions as the universal application interface. The traditional OS becomes invisible infrastructure; the user interacts with Claude, and Claude routes to the appropriate tool, file, or application without the user navigating to it.

In this vector, the keyboard and mouse do not disappear — they become optional. A power user who wants to navigate manually retains that capability. But the default interaction mode is intent-driven: "compare last quarter's numbers with the analyst estimates and draft the board summary" produces a finished document without the user opening Excel, finding the files, running the analysis, or formatting the output. The human set destination; Claude flew the route.
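The routing pattern can be sketched abstractly. This is not MCP's actual wire protocol — real MCP tools are discovered and invoked over JSON-RPC, and the tool names and canned plan below are hypothetical — but it shows the architectural point: the user supplies intent, and the AI layer selects and sequences tools without the user navigating to any of them.

```python
from typing import Callable

# Hypothetical tool registry standing in for MCP-connected applications.
# A plain dict is enough to show the dispatch shape.
TOOLS: dict[str, Callable[..., str]] = {
    "read_spreadsheet": lambda path: f"rows from {path}",
    "run_analysis":     lambda data: f"analysis of {data}",
    "draft_document":   lambda summary: f"board summary: {summary}",
}

def execute_intent(intent: str) -> str:
    """Stand-in for the model's planning step: map one intent to a tool
    sequence. A real agent would generate this plan, not look it up."""
    plan = [
        ("read_spreadsheet", ["q3_numbers.xlsx"]),
        ("run_analysis", None),    # None: feed in the previous tool's result
        ("draft_document", None),
    ]
    result = None
    for tool, args in plan:
        args = args if args is not None else [result]
        result = TOOLS[tool](*args)
    return result

print(execute_intent("compare last quarter's numbers and draft the board summary"))
```

Nothing in this loop requires a window, a menu, or a file picker — which is the sense in which the traditional interface becomes invisible infrastructure.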

The analogy is the terminal power user. A senior engineer who knows bash, awk, curl, and git can accomplish in three commands what takes a junior developer twenty GUI clicks. The senior engineer's advantage is not hardware — it is interface fluency. Claude democratises that fluency entirely. Every user operates at terminal-power-user efficiency, through plain language, without learning the commands. The interface gap closes; the productivity floor rises to the previous productivity ceiling.

The Shared Architecture

All three vectors implement the same architecture: intent as the input, the AI as planner and executor with persistent context, output rendered on whatever surface the human occupies, and the human as destination-setter and outcome evaluator.

This architecture makes the form factor question secondary. Whether the pilot-mode AI is delivered through a wearable, a wall display, or an OS layer, the experience is the same: the human's cognitive workload shifts from interface navigation to outcome evaluation. That shift is the control inversion.

IV. Microsoft Named Its Own Disruption

The branding problem is not subtle. Microsoft chose "Copilot" in 2023 to reassure its enterprise customer base that AI was a complement to their existing workflows, not a replacement. It was the correct product decision for 2023. It will become a liability at the moment the transition lands.

The Business Model Exposure

Microsoft's vulnerability is not one layer — it is three simultaneous layers, each dependent on the assumption that humans interact with specific application interfaces.

Layer one: Windows. The OS is priced on the assumption that hardware requires a managed software environment to be useful. The Claude OS thesis argues this assumption breaks when the AI layer provides all interface management and the OS becomes invisible infrastructure. Windows OEM licences and commercial Windows revenue — approximately $25B annually — are priced on interface value that the OS layer no longer provides.

Layer two: Office / Microsoft 365. The application suite is priced on the assumption that users interact with Word, Excel, PowerPoint, Teams, Outlook as distinct applications with distinct workflows. M365 commercial revenue exceeds $65B annually. If AI pilot-mode executes tasks without the user touching an application interface, the interface licence compresses toward the underlying data storage and computation cost — which is Azure, not M365.

Layer three: Copilot as AI layer. Microsoft's response to the AI transition was to build Copilot as an AI layer on top of M365 — a $30/month add-on that makes existing applications smarter. The problem is that Copilot is optimised to preserve the M365 interface rather than displace it. It opens Word for you, with a suggested draft. A pilot-grade AI does not open Word at all. The document simply exists when you ask for it. Copilot's entire value proposition is predicated on the interface layer remaining necessary.

Microsoft's strategic position is therefore a nested contradiction: building AI capability that, if it succeeds, destroys the interface premium on which two of its three major revenue layers depend.

Azure as the Hedge — and Its Limits

The standard bull case for Microsoft through the AI transition is Azure: whoever builds and deploys AI, the compute runs on cloud infrastructure, and Microsoft captures a portion of that infrastructure spend. This is correct and partially mitigating. Azure's revenue growth has been strong, and AI inference workloads are real incremental demand.

The hedge has two limits. First, Azure competes directly with AWS and Google Cloud for the same AI compute demand. The infrastructure layer is a commodity market with thin margins and strong competitors. The margin profile of cloud infrastructure is structurally lower than software licensing. If $65B in M365 software revenue reprices toward infrastructure revenue, the net financial impact is negative even with Azure growth. Second, the AI companies building the pilot-grade agents — Anthropic, OpenAI — may increasingly run on their own or on diverse cloud infrastructure as they scale. Azure's share of AI compute is not guaranteed to grow proportionally with overall AI demand.

The Brand Becomes the Epitaph

The "Information Superhighway" problem is instructive. In 1994, "information superhighway" was the dominant metaphor for the internet — used by presidents, journalists, and technology executives as shorthand for the coming digital transformation. By 1998 it was an embarrassment, a marker of people who had described the phenomenon in terms of the infrastructure it disrupted (roads, highways) rather than the behaviour it enabled. The companies that used the term most heavily were, in retrospect, the ones who had misunderstood what they were building.

"Copilot" is on the same trajectory. In 2026, the name communicates enterprise-safe, human-in-the-loop AI augmentation. In 2028, if the pilot shift has landed in enterprise workflows as the technical trajectory suggests, "Copilot" will communicate a company that understood AI well enough to augment its existing products but not well enough to see that the products themselves were the risk.

Apple's Parallel Exposure

Apple's exposure is structurally similar but differently composed. "Apple Intelligence" carries the same subordination framing as Copilot — the name explicitly positions AI as serving the user's intentions rather than acting autonomously. The moat Apple is defending is not a single revenue stream but an integrated ecosystem: iPhone hardware, iOS software, the App Store (30% commission on $90B+ in annual transactions), and the services layer built on top of that platform.

The pilot shift threatens the App Store specifically. The App Store's revenue model depends on users discovering, downloading, and interacting with individual applications. If a pilot-grade AI executes tasks without surfacing individual apps — booking the flight without opening the Ryanair app, writing the email without opening Mail, editing the photo without opening Photos — the App Store becomes a repository of tools the AI uses rather than a marketplace the human navigates. The 30% commission on user-facing transactions depends on users facing transactions. Ambient execution removes that facing.

Apple's hardware premium is more durable than the App Store. The device itself — the sensor array, the form factor, the integration with health and location data — is structurally valuable in a pilot-mode world because it is the ambient data source that makes the AI's context rich. Apple could transition from App Store monetisation toward device-and-data platform monetisation. That transition is possible. It is not currently priced in.

V. Investment Implications of the Control Inversion

The control inversion does not happen on a single day. It is a gradient, and different segments of the software market will cross the repricing threshold at different points in the 2026–2031 window. The investment implication is not a single trade; it is a framework for evaluating which current valuations price in AI as tailwind and which are exposed to the asymmetric downside of AI as headwind.

The Repricing Framework

| Company / Segment | Current Valuation Premise | Pilot-Mode Risk | Transition Timeline |
|---|---|---|---|
| Microsoft M365 (consumer/SMB) | AI augments interface, drives ARPU growth | High — interface licence compresses | 2027–2029 |
| Microsoft Windows (OEM) | Hardware requires managed OS layer | Medium — Claude OS displaces interface value | 2028–2031 |
| Microsoft Azure | AI compute demand drives revenue | Low — infrastructure benefits from transition | Beneficiary |
| Apple App Store | Human navigates marketplace, 30% commission | High — ambient execution bypasses App Store | 2027–2030 |
| Apple hardware | Premium device, ecosystem lock-in | Low-medium — device as ambient data source | Partial beneficiary |
| Adobe Creative Cloud | Interface expertise drives subscription renewal | High — AI pilots creative tasks without interface navigation | 2026–2028 |
| Salesforce | CRM interface drives user adoption and retention | Medium-high — AI queries data via API, interface optional | 2027–2030 |
| Anthropic / OpenAI (private) | AI platform layer is the new interface | None — they are the pilot shift | Beneficiary |
| Ambient display hardware (Samsung, LG) | Consumer electronics, thin margins | None — display surfaces are the output layer | Partial beneficiary |

Microsoft: The Multiple Compression Argument

Microsoft currently trades at approximately 32–35x forward earnings, a multiple that embeds the narrative of AI as net positive for the business — Copilot adds $30/user/month to M365, Azure grows with AI inference demand, GitHub Copilot captures developer spend. This narrative is partially accurate. The risk it excludes is the scenario where Copilot's success accelerates the user's transition to pilot-mode workflows that no longer require the M365 interface.

A more conservative scenario — in which M365 commercial ARPU growth stalls as pilot-mode AI erodes interface stickiness in the 2028–2030 period — implies a P/E multiple compression from ~35x toward ~22x. On current earnings, that is approximately $700–900B in market capitalisation at risk. This is not a tail scenario; it is the central scenario if the technical trajectory holds. The bull case requires Copilot to successfully defend the interface premium rather than accelerate its erosion. That is a difficult product problem: a copilot that becomes too good is a pilot, and a pilot does not need the cockpit.
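The arithmetic behind the capitalisation-at-risk range can be made explicit. The earnings bounds below are illustrative assumptions chosen to bracket the report's stated range, not Microsoft's reported figures:

```python
# Multiple compression: value lost = earnings x (old multiple - new multiple).
# The earnings bounds are hypothetical inputs, used only to show the mechanics.
old_pe, new_pe = 35, 22

for earnings in (55e9, 70e9):  # assumed forward-earnings bracket, USD
    cap_at_risk = earnings * (old_pe - new_pe)
    print(f"earnings ${earnings / 1e9:.0f}B -> ${cap_at_risk / 1e9:.0f}B at risk")
```

At 13 turns of compression, forward earnings anywhere in the mid-$50B to $70B range imply roughly $700–900B of market capitalisation at risk — matching the range above.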

The Alexa Lesson for Scale

Amazon's Alexa failure cost an estimated $10B and was largely absorbed without major market impact because Amazon's e-commerce and AWS businesses generated sufficient cash and narrative weight to absorb the loss. The $10B was a rounding error in a $1.5T market cap company whose core business was unaffected.

Microsoft and Apple do not have equivalent shock absorbers. Their core businesses — software licensing and the App Store — are specifically the businesses exposed to the pilot shift. There is no equivalent of "but the cloud business is fine" that offsets the interface repricing. Microsoft has Azure, which is genuinely mitigating; but Azure's margin profile means it cannot fully replace M365 revenue on a dollar-for-dollar basis. Apple has hardware, which is partially mitigating; but hardware grows in single digits and cannot offset an App Store whose growth was driven by marketplace navigation.

The Alexa experiment was a $10B exploration of what happens when ambient AI fails. The Copilot-to-Pilot transition is a $6T experiment in what happens when it succeeds.

What to Own

The investable positions in the pilot shift are concentrated in private markets at present, which limits direct access but clarifies where value is accruing.

Anthropic is the most direct expression of the pilot thesis. Claude Code is the prototype for pilot-mode operation. Claude OS, if developed, is the OS-layer deployment. The persistent memory and ambient access components are on the product roadmap. Anthropic is building the pilot; Microsoft is building the co-pilot add-on.

OpenAI is the second expression, with the Jony Ive wearable device providing the hardware vector. If "io" ships as a pilot-native device — intent as input, no home screen, ambient context — it is the first mass-market hardware instantiation of the control inversion.

Inference infrastructure is the public-market proxy. As AI pilots more tasks, each user action triggers multiple model calls: planning, execution, verification, summarisation. Inference volume scales non-linearly with pilot-mode adoption. The hyperscalers (AWS, Google Cloud, Azure) and specialised inference hardware (NVIDIA, future competitors) benefit structurally from this demand curve regardless of which AI company provides the pilot.
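The demand mechanics can be sketched with illustrative numbers — the per-action call decomposition below is an assumption for the sketch, not a measured workload:

```python
def calls_per_action(steps: int) -> int:
    """Model calls for one user action, under an assumed decomposition:
    1 planning call, one execution call per step, one verification call
    per step, and 1 summarisation call."""
    return 1 + steps + steps + 1

copilot_calls = calls_per_action(1)   # single-step assisted task
pilot_calls = calls_per_action(20)    # twenty-step autonomous workflow
print(copilot_calls, pilot_calls)     # inference per action grows ~10x
```

Under these assumptions a twenty-step pilot workflow generates roughly ten times the inference volume of a single assisted step — and total demand compounds further as both autonomy depth and user adoption grow together.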

Ambient display manufacturers are indirect beneficiaries. Samsung and LG produce the flexible and large-format displays that become the output surface in the ambient vector. Neither is a pure play, but both have product lines positioned for the transition that are currently valued as commodity consumer electronics.

The Bear Case

Key Risks to the Pilot Thesis

Enterprise inertia and compliance. Enterprise software adoption is governed by procurement cycles, security reviews, compliance requirements, and IT governance structures that operate on timescales of three to five years, not product launch cycles. Even if pilot-mode AI is technically superior to M365 Copilot by 2027, the enterprise replacement cycle extends the revenue impact timeline significantly. Microsoft's enterprise contracts are sticky by design.

Reliability threshold. Pilot-mode operation requires AI accuracy rates sufficient for unsupervised multi-step execution. Current models achieve approximately 95–97% accuracy on complex reasoning tasks. For a single-step task, this is acceptable. For a twenty-step autonomous workflow, a 95% per-step accuracy rate yields only a roughly 36% chance of an error-free run (0.95^20 ≈ 0.36) — a ~64% probability of at least one failure somewhere in the chain. The reliability threshold for genuine pilot-mode trust is closer to 99.9% per step. The gap is closing but is not yet closed.
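The compounding is easy to verify; the per-step rates below are the ones quoted above:

```python
# Probability an n-step chain completes with no error, given per-step accuracy.
def chain_success(per_step: float, steps: int) -> float:
    return per_step ** steps

p95 = chain_success(0.95, 20)    # ~0.358: roughly a 64% chance of failure
p999 = chain_success(0.999, 20)  # ~0.980: near-reliable at 99.9% per step
print(f"95% per step:   {p95:.1%} error-free")
print(f"99.9% per step: {p999:.1%} error-free")
```

The jump from 95% to 99.9% per step takes a twenty-step workflow from coin-flip territory to 98% end-to-end reliability — which is why the per-step reliability gap, not raw capability, is the binding constraint on pilot-mode trust.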

Privacy regulation. An ambient AI that holds your full working context — emails, calendar, files, conversations, location — is a GDPR, CCPA, and emerging EU AI Act target of significant complexity. The persistent memory and ambient access components that enable pilot-mode operation are precisely the components that regulators are most likely to constrain. European deployment timelines may lag US deployment by two to four years as a result.

Microsoft's Azure hedge is real. The argument that Azure captures AI compute spend regardless of which application layer prevails is correct. If M365 interface revenue is partially offset by Azure AI inference revenue, the net impact on Microsoft's valuation is ambiguous rather than clearly negative. Microsoft is not a pure-play on the interface layer; it is a diversified technology company with genuine infrastructure exposure.

Apple's hardware moat is deeper than it appears. The ambient computing vector requires rich sensor data — location, health metrics, ambient audio, biometrics — that Apple collects through hardware with a trusted relationship with the user. Apple's transition from App Store monetisation to device-as-platform monetisation is plausible, and the hardware premium for devices that serve as high-quality ambient data sources may increase rather than decrease in a pilot-mode world.

Conclusion

Microsoft named its AI product "Copilot" for the same reason it named its browser "Internet Explorer" — not to describe what the technology would eventually do, but to position it safely relative to the revenue model it needed to protect. Internet Explorer explored the internet so you could keep using Windows. Copilot pilots alongside you so you can keep using M365. Both names are accurate descriptions of a transitional phase and dated descriptions of the destination.

The destination is an AI that holds enough of your world in context to act without constant correction. That threshold has been crossed in constrained professional domains. It is approaching in general knowledge work. The form factor — wearable, ambient display, OS layer — is secondary to the architecture: intent as input, AI as executor, human as destination-setter and outcome evaluator. Three hardware vectors are converging on that architecture from different angles, and they will meet in the same place.

The investment argument does not require certainty about timing. It requires only that the growth premium currently embedded in Microsoft's and Apple's software-interface valuations is not justified once the probability distribution of outcomes includes the pilot shift as the central scenario rather than a tail risk. That repricing has not yet occurred. The technical trajectory, the competitive dynamics, and the alignment of incentives among the companies building pilot-mode AI all suggest it is closer than current multiples imply.

The copilot framing was appropriate. It was also always temporary. The only question was when the AI would be good enough to stop assisting and start flying. The answer, in at least one domain, is already.


Disclaimer: This report is produced by PRZC Research for informational and analytical purposes only. It does not constitute investment advice, financial advice, or any solicitation to buy or sell securities. All views are those of the analyst and are based on publicly available information. PRZC Research makes no representations as to the accuracy or completeness of information contained herein. Past performance is not indicative of future results. Readers should conduct their own due diligence before making any investment decisions.
