The IP Framing Problem in AI

A four-part series on why most AI defensibility strategies are protecting the wrong thing — and what actually works in 2026.

Most “AI defensibility” strategies in 2026 are protecting the wrong thing.

That’s the claim this four-part series sets out to defend, and it’s the through-line that makes the four essays hold together rather than read as standalone takes. The dominant view across the industry — recited by VCs, by enterprise CIOs, by IP attorneys, by half the LinkedIn essays in your feed — is that the protectable surface of an AI business is its data, its prompts, its model weights, its agentic orchestration code, and the constellation of artifacts that hang off those things. The instinct is to lock all of that down with the conventional toolkit: copyright, trade secret, patent, NDA, repository access controls, employee confidentiality programs.

The instinct is the right impulse pointed at the wrong target.

It comes from twentieth-century intellectual property doctrine, which was built to protect a category of asset — books, songs, inventions, documented trade secrets — that does not behave like the assets the AI industry actually accumulates. Foundation models commoditize on a clock you can almost set. Synthetic data generation is closing the gap on private corpora. The U.S. Copyright Office concluded in January 2025 that prompts alone don’t confer authorship over AI output. Trade-secret status on prompts evaporates the moment they ship inside a queryable agent. The legal tools the industry is reaching for were never designed for the assets the industry is now trying to apply them to.

What’s actually defensible — what survives adverse events, holds up under competitive pressure, and produces durable advantage in 2026 — is something different. It’s workflow position: the integration depth, switching costs, feedback closures, and human muscle memory that turn an AI product into infrastructure rather than a feature. It’s contract architecture: the no-train clauses, portability rights, and model-substitution rights that protect against vendor events the courts won’t resolve for years. It’s gateway abstraction: the architectural decoupling that turns provider blacklists, model deprecations, and acquisitions into configuration changes rather than operational crises. None of that maps cleanly onto traditional IP categories. All of it maps onto how the most durable AI businesses are actually built.
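To make the gateway-abstraction claim concrete, here is a minimal sketch in Python. Every name in it is hypothetical and nothing is drawn from the essays themselves: application code talks to a thin internal interface, provider-specific code lives behind a registry, and swapping a deprecated or blacklisted provider is a one-line configuration change.

```python
# Minimal gateway sketch. All class names, model ids, and the registry are
# illustrative placeholders, not a specific vendor integration.
from dataclasses import dataclass
from typing import Protocol


class ModelProvider(Protocol):
    """The only interface application code is allowed to import."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class HostedVendorBackend:
    model: str = "vendor-model-v1"  # placeholder model id

    def complete(self, prompt: str) -> str:
        # In a real system this would call the vendor's SDK or HTTP API.
        return f"[{self.model}] {prompt[:40]}..."


@dataclass
class SelfHostedBackend:
    model: str = "local-llm"  # placeholder model id

    def complete(self, prompt: str) -> str:
        # In a real system this would call a self-hosted inference runtime.
        return f"[{self.model}] {prompt[:40]}..."


# Provider-specific knowledge lives here and in one config value. A model
# deprecation, provider blacklist, or acquisition becomes a change to
# ACTIVE_PROVIDER, not a rewrite of application code.
REGISTRY = {"hosted_vendor": HostedVendorBackend, "self_hosted": SelfHostedBackend}
ACTIVE_PROVIDER = "hosted_vendor"  # <- the configuration change


def gateway() -> ModelProvider:
    return REGISTRY[ACTIVE_PROVIDER]()


if __name__ == "__main__":
    print(gateway().complete("Summarize the contract's no-train clause."))
```

The point of the sketch is not the routing trick itself but where the coupling lives: above the gateway, nothing knows which provider is answering, which is what turns an adverse vendor event into a configuration change rather than an operational crisis.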

This series argues that the gap between the dominant framing and what actually works is large enough to be a strategic problem. Two years of misallocated investment have gone into “AI IP” programs that protect things that either leak the moment they are useful, weren’t defensible to begin with, or would create more value if they were shared. Meanwhile, the things that do protect the loop — the contracts, the architectures, the workflow integration depth — get treated as engineering detail rather than as the strategic core they actually are.

The series unfolds in four parts.

Part One — Your Data Isn’t a Moat. Your Loop Is. The strongest version of the data-as-moat argument turns out to be a workflow argument in disguise. Tesla doesn’t have a data moat; it has a fleet moat that produces data as exhaust. Bloomberg doesn’t have an archive moat; it has a terminal-on-every-desk moat that produces market data as a byproduct. The asset is the loop. The data is what falls out of it. A three-filter test — flywheel, replicability, decision-utility — tells you whether a dataset is a real moat, a timing asset, a static archive, or data hoarding theater.

Part Two — The Half-Life of a Prompt Is Shorter Than Your NDA. The U.S. Copyright Office concluded in early 2025 that prompts alone don’t make AI output copyrightable. Patents on prompt techniques are technically possible and practically pointless. Trade-secret status survives until the prompt ships inside a queryable agent, at which point prompt-injection attacks make the secrecy untenable. The actual protection model lives below the legal layer — in vendor contracts (no-train clauses), in architecture (gateway-level tokenization), and in workflow position. A six-layer protection stack distinguishes theater from architecture and shows where the real defenses live.

Part Three — Agents Don’t Have IP. Workflows Do. The agentic AI gold rush has expanded the IP framing onto a new layer where it makes even less sense than it did for data or prompts. The components of an agent — system prompts, connectors, orchestration code, evaluation harnesses — commoditize independently and quickly. What’s defensible isn’t any component; it’s the workflow position the agent inhabits. A six-question scorecard sorts agentic products into prompt wrappers, real products forming position, and full workflow moats.

Part Four — What the Lawsuits Are Really About. NYT v. OpenAI, Bartz v. Anthropic, Thomson Reuters v. Ross — the legal commentary is watching the doctrine. The market is doing something else: pricing training-data inputs in real time through settlements, licensing deals, and contractual restructuring. The doctrine will stay unsettled for years. The market won’t. A priority-ordered punch list of eight enterprise AI contract terms shows what to fight hardest for in vendor negotiations and what to deprioritize.

A few notes on how to read the series.

The four essays stand alone. You can read Part Three first if agents are your primary concern, or Part Four first if you’re negotiating an AI vendor contract this quarter. The arguments don’t depend on each other; the through-line is the framing, not the sequence. That said, the four essays were written to be read in order, and the cumulative effect is sharper than any single piece. The line that closes each essay — Stop protecting. Start owning the loop. — is the same line from four angles, and the angles all point at the same target.

One discipline worth flagging up front. Throughout the series I have been explicit about the difference between claims I can substantiate and claims I cannot. The Anthropic-Pentagon crisis of February 2026 is the recurring spine because it’s the cleanest natural experiment we have on what survives adverse events in the AI stack — and the public reporting documents the disruption, not the survivors. I have not invented named survivors to support the architectural argument, and the architectural argument is forward-looking on purpose. The next adverse event hasn’t happened yet. The question worth asking is who is positioned to survive what comes next, not who survived what just happened.

If the series accomplishes one thing, I hope it shifts the question your organization is asking from “how do we protect our AI IP?” to “how do we own the loop our AI runs inside?” The first question has consumed two years of strategic attention and produced a lot of theater. The second question is harder, because it’s an architecture question and a contracting question and a product question all at once, but it’s the one whose answer compounds.

Read on.