Decidr AI Industries

The $1.3 trillion question: What are you really giving away when you use AI?

This article was originally written and published by Decidr Chairman David Brudenell via his Substack. This version has been modified slightly.

Every day, millions of employees across the world's largest companies open ChatGPT or Claude. They're solving problems, drafting strategies, refining presentations. They're being more productive. They're getting their work done faster.

But here's what most don't realise: with every prompt, every edit, every shared thread, they're teaching these AI models something far more valuable than the task at hand. They're revealing how their company actually wins.

I spent years working at the foundational level of AI model training at Appen, watching how these systems learn and evolve. And I can tell you: what's happening right now represents the largest transfer of corporate intelligence in history. 

We're talking about $1.3 trillion in institutional knowledge flowing from enterprises into foundation models—and most companies have no idea it's happening.

The knowledge you can't put in a manual

Let me give you an example from a conversation I had recently with a senior executive at an investment bank. She was proud of how she'd optimised her workflow. What used to take her two days—assembling the perfect deal team for a client pitch—now takes 15 minutes with Claude.

She's been at this particular major bank for 17 years. She knows instinctively which deals matter, when APAC experience counts for more than American experience, how to read a client in the first meeting, and where to slot someone in the running order based on the client's personality and the deal structure.

In 15 minutes, she got her task done. But in that same 15 minutes, she traded away 17 years of decision making expertise.

That's not an exaggeration. That's how these systems work.

Emergent behaviour: The invisible extraction

Here's what makes this different from traditional data leakage: AI models don't need to see your confidential documents. They don't need access to your client lists or your financial data. They just need to watch patterns.

Think about it like this: imagine you're standing 50 metres away from a construction site, watching 10,000 homes being built. You can't hear the conversations. You can't read the blueprints. But over time, you start to see the patterns—the order things happen in, the way problems get solved, the unspoken expertise of how a house actually comes together.

That's emergent behaviour. And that's what's happening with AI models as they observe hundreds of millions of users at work.

Every time you pause before choosing a word, every time you edit a response, every time you decide to share a thread with a colleague—these aren't just individual actions. Across millions of users, these patterns reveal decision making logic. They reveal how businesses operate, how deals get won, how experts actually think.

And here's the thing: you're talking to a supercomputer. It's observing all of this, at scale, seeing patterns that would be invisible to any human observer.

What's really at stake

One hundred percent of enterprises are using AI today. Less than 1 percent are capturing their institutional know-how in a way that protects it.

Let me put this in concrete terms. Goldman Sachs, Morgan Stanley, and JP Morgan all have access to the same data when they're pitching for an IPO deal. They're all looking at the same public filings, the same market conditions, the same financial metrics.

So what helps one win over the others?

Decades of deal experience. Managing partner structures. Thousands of transactions that have created unique decision making expertise. The subtle knowledge of which areas to check first, when to override standard rules, how to size up a client, when to push and when to pull back.

That expertise—what we call schema at Decidr—is what actually wins deals. It's what clients pay for, even if they think they're paying for the analysis.

And AI can extract all of that in 12 to 18 months. Once it's gone, it's gone forever.

The Google playbook, but make it AI

If this sounds familiar, it should. Foundation models are using the exact same playbook Google deployed 25 years ago.

Google created the front door of the internet. They scraped all the web pages, built our search habits, created the world's largest search index—and then turned on a $368 billion-a-year pool of money from eCommerce.


AI models are working to change our habits the same way. Every time you see a prompt that asks "what would you like to do next?" after you get a result, that's habit building. That's growth hacking. That's the same pattern.

Except this time, what's being indexed isn't public web pages. It's private corporate expertise. The kind of knowledge that takes decades to develop and that, until now, could only be stolen by hiring away entire teams of senior people.

The sovereignty crisis

At Davos two weeks ago, Microsoft CEO Satya Nadella told BlackRock's Larry Fink that the number one issue enterprises aren't discussing—but urgently need to—is "enterprise sovereignty."

The term comes from national security. Just as nations need to control their oil reserves and data infrastructure to remain resilient, enterprises need to own their decision making intelligence.

Here's why this matters: the tokens you're using today—the actual computational cost of running these AI queries—are subsidised, priced at somewhere between a fifth and a tenth of their actual cost. OpenAI and Anthropic are burning through venture capital to build market share and lock in habits.

When they IPO (which OpenAI is expected to do later this year or early next), they'll need to move toward profitability. Prices will rise. And if your business has become dependent on these models—worse, if they've already extracted your competitive advantage—how resilient will you be?


It's not a data breach you can patch. It's not a security vulnerability you can fix. Once your expertise is in the model, your business has been hollowed out.

Your process is now your product

Here's the fundamental shift happening right now: In an AI-powered world, your process becomes your product.


Competitive advantage used to come from what you did—the products you built, the services you offered, the markets you served. But when AI can handle execution, advantage shifts to how you do things.

McKinsey understood this decades ago. They publish their frameworks and techniques enthusiastically because they know the real value isn't in the fancy PowerPoint deck. It's in how their consultants size up problems, when they override standard rules, which areas they check first. It's knowing when the framework doesn't apply, which client signals actually matter, how to read a room.

That tacit knowledge, built over decades of apprenticeship, used to be protected by friction. You'd have to hire away 10 managing partners to steal it. But now? AI can extract it in months.

And unlike foundation models—which are essentially probability machines that move toward the average to serve the greatest population—human expertise represents the non-obvious, the nuanced, the hard-won knowledge that creates real differentiation.

The $1.3 trillion maths


So where does the $1.3 trillion figure come from?

I looked at the top 2,000 global enterprises, averaging 25,000 employees each. Each employee has approximately $300,000 worth of institutional knowledge—the expertise they've built up over years or decades.

Not all of that is at risk. Bank reconciliations, for instance, aren't a significant competitive advantage for most organisations. So I filtered for what I call "AI-mediated work"—the tasks where this knowledge extraction is actually happening and where it matters competitively.

The calculation reveals roughly $900 million per year in intellectual property value being transferred from each major enterprise to foundation model providers.
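The arithmetic above can be sketched in a few lines. The enterprise count, headcount, and per-employee knowledge value are the article's stated inputs; the "AI-mediated share" is a hypothetical knob I back-solve from the $1.3 trillion headline, since the article doesn't publish the exact filter it applied.

```python
# Back-of-the-envelope reconstruction of the $1.3 trillion figure.
# enterprises, employees_each, and knowledge_per_employee come from
# the article; ai_mediated_share is back-solved, not sourced.

enterprises = 2_000                # top global enterprises
employees_each = 25_000            # average headcount per enterprise
knowledge_per_employee = 300_000   # USD of institutional knowledge each

# Total pool of institutional knowledge across the cohort.
total_knowledge = enterprises * employees_each * knowledge_per_employee
# 2,000 * 25,000 * $300k = $15 trillion

# The headline figure implies only a slice of that pool sits in
# "AI-mediated work" where extraction actually happens.
headline = 1.3e12
ai_mediated_share = headline / total_knowledge   # roughly 8.7%

# Spread evenly, that is $650M per enterprise; the article's
# "roughly $900 million per year" suggests the transfer is
# concentrated among the largest firms rather than uniform.
per_enterprise = headline / enterprises

print(f"total knowledge pool:      ${total_knowledge / 1e12:.1f}T")
print(f"implied AI-mediated share: {ai_mediated_share:.1%}")
print(f"average per enterprise:    ${per_enterprise / 1e6:.0f}M")
```

Read this as a sanity check on the orders of magnitude, not as the author's exact model: changing the AI-mediated share assumption moves the headline figure proportionally.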

It's not about data. Data can be replaced. This is about expertise theft. And it's happening every single day.

What this means for businesses

If you're reading this, you're probably using AI tools in your work. You should be—they're powerful, they're useful, they make you more productive.

But here's what you need to understand: it doesn't matter if you have security controls. There's leakage happening in every organisation. The extraction is well underway. It started in November 2022.

The processes in between the prompts—how you refine, how you edit, how you decide—that is intelligence. And that creates the emergent behaviour that teaches these models how your business really works.


Companies are worried about data theft. They should be terrified about expertise theft.


Because AI will not replace jobs. AI will replace tasks in workflows. AI is the execution layer.

Humans have the judgement, the taste, the timing, the experience. That's the value.

And that's what's at risk.

