Revenue Operations
AI adoption is not transformation
An AI-adopted company runs AI on top of its existing structure. An AI-native company designs its structure around what AI can do. Most companies think they're becoming the second thing by doing more of the first.
They're not.
I notice teams doing this constantly. A company adds AI tooling. Meeting summaries, a support chatbot, a copilot for the sales team. The spend goes up. The org chart doesn't move.
The org chart, the status meeting, the approval chain, the manager who holds context because the systems can't share it. None of it was designed because someone figured out the optimal way to run a company. These structures evolved as compensations for systems that couldn't move information on their own. A human had to carry it. A meeting had to exist so five people could hear what the software should have surfaced already.
This is not a modern problem. Roman emperors ran empires of millions with the same solution. Augustus formalized it in 20 BCE: the cursus publicus, a state-run relay network of couriers and way stations spaced every 8 to 12 miles across the empire. A decision made in Rome took roughly five weeks to reach a garrison in Britain. The hierarchy of governors, prefects, and couriers existed because information had no other way to move. The org chart was the network.
Your approval chain is the same architecture. Different materials, same problem.
These structures are patches. The mistake most companies make is treating the patches as the foundation, then laying AI on top.
A status meeting with an AI notetaker is still a status meeting. A process manual with an AI chatbot is still a process manual nobody reads. You've made the waste faster. You haven't removed it.
What the carrying layer actually is
Think about what a middle manager does on a given day. They collect context from reports, compress it into a status update, carry it upward, and carry decisions back down. They are a translation layer between parts of the organization that can't communicate directly. A human protocol stack.
That job exists because the underlying systems are mute. When you build a company where every internal surface is queryable, you don't need the carrying layer. Meetings become structured records. Decisions get logged with the context that produced them. Customer calls flow into a knowledge base rather than a CRM field nobody updates. The company's state becomes readable without asking anyone to summarize it.
The pattern looks something like this: customer calls → Whisper transcription → tagged by deal stage → nightly agent scan → action items posted to the project channel. Nobody writes a summary. Nobody attends a debrief. The information moves on its own.
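To make that concrete, here is a minimal sketch of the nightly scan under a few stated assumptions: transcripts have already been produced (by Whisper or anything else) and saved to a local folder, the deal-stage tag rides in the filename, and the project channel is a plain incoming webhook. `TRANSCRIPT_DIR`, `WEBHOOK_URL`, and the prompt are placeholders, not a reference implementation.

```python
# Nightly agent scan: read yesterday's call transcripts, pull action items
# with a small model, and post them to the project channel.
# Assumes transcripts are already saved as "<deal_stage>__<call_id>.txt"
# in TRANSCRIPT_DIR. All names and URLs here are illustrative placeholders.
from pathlib import Path

import requests
from openai import OpenAI

TRANSCRIPT_DIR = Path("transcripts/yesterday")             # assumption: transcripts land here
WEBHOOK_URL = "https://example.com/hooks/project-channel"  # placeholder webhook

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_action_items(transcript: str) -> str:
    """Ask a small model for action items only; no summary, no debrief."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # narrow, deterministic task -> small/cheap model
        messages=[
            {"role": "system",
             "content": "List concrete action items from this sales call. One per line. Nothing else."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


for path in sorted(TRANSCRIPT_DIR.glob("*.txt")):
    deal_stage, _, call_id = path.stem.partition("__")  # tag carried in the filename
    items = extract_action_items(path.read_text())
    requests.post(WEBHOOK_URL, json={"text": f"[{deal_stage}] {call_id}\n{items}"})
```

Nothing in this loop is clever. The point is that the pipeline is a cron job, not a meeting.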
The system requires more structural clarity than the human system it replaces. An ambiguous human role produces confusion. An ambiguous agent role produces confident wrong answers at scale.
Each agent gets an identity and a set of SOPs to operate against. Model choice follows from what the task actually requires:
| Task type | Model | Why |
|---|---|---|
| Data extraction, formatting, status checks | Small / cheap | Narrow and deterministic |
| Customer-facing communication | Frontier | Judgment required |
| Contract review, escalations | Frontier | Judgment required |
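One way to pin that down is a declarative agent registry: each agent is a small record carrying an identity, the SOP it operates against, a pinned SOP version, and a model tier chosen per the table above. This is a hedged sketch; the field names and model identifiers are invented for illustration, not a standard.

```python
# Illustrative agent registry: identity + SOP version + model tier.
# Field names and model strings are placeholders.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentSpec:
    agent_id: str      # stable identity, referenced in every trace
    sop: str           # which SOP document the agent operates against
    sop_version: str   # pinned so a bad call can be traced to a revision
    model: str         # chosen per the task table above


REGISTRY = {
    "status-checker": AgentSpec(
        agent_id="status-checker",
        sop="sops/status_checks.md",
        sop_version="2025-06-01",
        model="small-cheap-model",   # narrow and deterministic
    ),
    "customer-replies": AgentSpec(
        agent_id="customer-replies",
        sop="sops/customer_comms.md",
        sop_version="2025-05-12",
        model="frontier-model",      # judgment required
    ),
    "contract-review": AgentSpec(
        agent_id="contract-review",
        sop="sops/contract_review.md",
        sop_version="2025-04-03",
        model="frontier-model",      # judgment required
    ),
}
```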
The most common failure I see is a team that knows what the agent should do but hasn't written it down in any form a system can read. The process lives in someone's judgment. That judgment doesn't transfer. When an agent makes a bad call, you need to trace which agent, which model version, which SOP version, and what context it had. Without that, debugging is archaeology.
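The trace can be as plain as an append-only JSONL file: one record per agent decision, carrying the agent id, model, SOP version, and a pointer to the context it saw. A sketch under those assumptions; the schema and file path are invented for illustration.

```python
# Append-only decision trace: enough to answer "which agent, which model,
# which SOP version, what context" without archaeology.
# Schema and file path are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

TRACE_FILE = "traces/decisions.jsonl"  # placeholder location


def log_decision(agent_id: str, model: str, sop_version: str,
                 context: str, output: str) -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "model": model,
        "sop_version": sop_version,
        # Hash the context rather than copying it; the raw context lives
        # wherever the company's readable state already lives.
        "context_sha256": hashlib.sha256(context.encode()).hexdigest(),
        "output": output,
    }
    with open(TRACE_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
```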
Going AI-native forces you to make explicit everything the carrying layer kept implicit. The messiness that made the old system feel human is exactly what makes the new one fail. The domain expert understands the process but can't always articulate it in a form a system can run. The engineer can build anything but doesn't know what to build. Getting them aligned on a working SOP is where most of these projects either happen or stall.
Altlast
Altlast is the German word for a contaminated legacy: an industrial site where the soil is poisoned by decades of prior use. It describes what large organizations carry better than "technical debt" does.
AI-native companies don't carry Altlasten. They design from the foundation. Incumbents have access to the same models. That's not the constraint. What they can't do is stop running the live operation long enough to rewire it.
I see this on both ends. A founder-led team can rewire a core process in a sprint. An established company with the same goal spends months in stakeholder alignment before a single system changes. The compensation layer is structural. Changing it means negotiating with everyone whose role depends on it.
This is why the gap compounds rather than closes. An AI-native company gets a structurally leaner operation each year, which lets it move faster the year after. The AI-adopted company gets faster summaries and better search. These feel good for a quarter. Then someone enters the market doing in four people what you do in forty.
Structure is the one thing most companies will do anything to avoid touching.