The shift underway is no longer a productivity upgrade. It’s a reordering of how work gets done. And the clearest place to see that is software engineering, where humans and agents have moved through four distinct patterns of collaboration in roughly four years. The same progression is now compressing into months across knowledge work.
Software development got there first because its feedback loops are tight enough to make each transition visible. And what those transitions revealed applies far beyond any single discipline. That makes it a map for every function to follow.
Mapping the four patterns of human-agent collaboration
Coding didn’t jump from autocomplete to autonomous agents. Each pattern only became possible because the previous one existed, and at each stage, two things changed together: what the agent was capable of doing, and what the human was actually responsible for.
1. Author. With GitHub Copilot, AI suggests the next line of code while the developer authors everything. The unit of work is a single line of code or a function. The AI proposes; the human’s job is unchanged: creating.
2. Editor. Tools like Cursor Composer introduced a new pattern, where developers can use natural language to describe intent and the agent produces a full draft. The unit of work moves from a single line of code to a complete feature. And the human’s job moves with it—from producing code to evaluating it: reviewing, editing, deciding what ships.
3. Director. Claude Code shows what this pattern looks like at scale. The human writes a spec and hands off an entire task. The agent works autonomously to plan across the codebase, execute, run tests, iterate, and troubleshoot when something doesn’t go as planned. The unit of work is a task or pull request. The human’s role has changed again: they’re no longer reviewing every step of the process. Instead, they set intent, guardrails, and policies, then evaluate a final product. Anthropic illustrates this in its own operations: nearly all of its internal code is written by autonomous agents.
4. Orchestrator. With tools like GitHub Mission Control and Anthropic Agent Teams, one person runs multiple agents concurrently against a shared backlog. The agents execute specialized tasks in parallel, collaborate with each other, and surface exceptions for human review. The unit of work is the backlog itself. The human is no longer directing individual tasks. They’re designing the system, setting policy, and deciding where to intervene. The leverage is no longer in production itself; it’s in judgment—knowing which work is worth doing, in what order, and to what standard.
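The core of the Orchestrator pattern, stripped to its logic, is exception-based review: agents work a shared backlog, passing work is accepted automatically, and only failures reach the human. The sketch below is a hypothetical illustration of that flow; the task names and the `passes_checks` flag are stand-ins for real automated evaluation, not any particular product’s API.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    passes_checks: bool  # stand-in for a real automated evaluation of agent output


def run_agent(task: Task) -> dict:
    """Pretend an agent executed the task; report whether it met the standard."""
    return {"task": task.name, "ok": task.passes_checks}


def orchestrate(backlog: list[Task]) -> list[dict]:
    """Run every task, auto-accept passing work, queue only failures for review."""
    results = [run_agent(t) for t in backlog]
    return [r for r in results if not r["ok"]]  # only exceptions reach the human


backlog = [
    Task("update-docs", True),
    Task("migrate-db", False),
    Task("fix-flaky-test", True),
]
needs_review = orchestrate(backlog)
print(needs_review)  # only migrate-db surfaces for human attention
```

The design point is the return value of `orchestrate`: the human sees a short exception queue, not three work products, which is what lets one person stay accountable across parallel agents.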
How to spot the human+agent work pattern you’re in
Map what the human needs to do, what the agent can do, and the smallest unit of work the human must sign off on.
The same transition is now underway in every function
Software development was already rigorously managed work. Code either compiles or it doesn’t. Tests either pass or they fail. That meant agents could check their own work and learn from the signal without a human in the loop, and that’s the capability that unlocked Director and Orchestrator. Without it, delegation stalls.
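The mechanics of that claim can be shown in a few lines: when a pass/fail signal exists, an agent can loop on its own work until the signal goes green. Everything here is a toy stand-in (the "test suite" just checks for a marker string, and `agent_revise` is a placeholder for a real model), but the loop structure is the point.

```python
def run_tests(code: str) -> bool:
    """Stand-in for a real test suite: 'passing' means the fix is present."""
    return "sorted" in code


def agent_revise(code: str) -> str:
    """Stand-in for an agent producing its next attempt at the task."""
    return code + " sorted"


def work_until_green(code: str, max_attempts: int = 5) -> tuple[str, bool]:
    """Iterate autonomously: run the checks, revise on failure, stop on success."""
    for _ in range(max_attempts):
        if run_tests(code):
            return code, True
        code = agent_revise(code)
    return code, run_tests(code)


result, passed = work_until_green("def f(xs): return xs")
print(passed)  # True: the loop converged with no human in it
```

Remove `run_tests` from this loop and the only remaining stopping condition is a human review of every attempt, which is exactly why delegation stalls without a measurable standard.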
Every other knowledge-work function must build its equivalent. Whether it’s legal, finance, or marketing, someone needs to define what good looks like, make it measurable, and create the mechanism that lets agents assess their own output against that standard.
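For non-code work, "make it measurable" can start as simply as encoding the standard as checks an agent runs against its own draft. The rubric below is a hypothetical example for a marketing brief; the minimum length and required sections are invented for illustration, and a real function would define its own.

```python
def evaluate_brief(draft: str) -> list[str]:
    """Return the list of failed checks; an empty list means the draft meets the bar."""
    failures = []
    if len(draft.split()) < 50:
        failures.append("too short: briefs need at least 50 words")
    for section in ("Audience:", "Goal:", "Call to action:"):
        if section not in draft:
            failures.append(f"missing required section {section!r}")
    return failures


draft = "Audience: developers. Goal: adoption."
print(evaluate_brief(draft))  # the agent revises until this list is empty
```

Crude as it is, a rubric like this is the legal, finance, or marketing equivalent of a failing test: a signal the agent can act on without waiting for a human.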
Our 2026 Work Trend Index Annual Report makes the stakes concrete. Among AI users, 58% say they’re producing work they couldn’t have done a year ago, rising to 80% among Frontier Professionals, the most advanced AI users in our research. The constraint isn’t what people can do; it’s whether the work around them is structured to let them do it.
The goal is knowing which pattern fits the work
The four patterns are a diagnostic, not a progression. The decision about where work belongs depends less on importance and more on how clearly it’s defined. What is the smallest unit the human needs to sign off on? And what would a human never need to review? Work ready for Director or Orchestrator has a clear spec, a measurable definition of good, and a feedback loop that doesn’t require a human at every step. Work that belongs in Author or Editor doesn’t have those conditions yet. Moving it before those conditions exist is how quality problems compound silently across every output the system touches.
The answer to which pattern fits isn’t fixed either. As the technology matures and organizations build better mechanisms for evaluating agent work, the line will move. What belongs in Editor today may belong in Director a year from now. As more work moves, what’s left for humans changes with it. Tactical execution recedes; setting direction, defining standards, and deciding what to delegate expand. Organizations will need to evolve how they measure and develop their employees. The engineer prized for shipping code fast isn’t necessarily the one you want directing a fleet of agents.
Our research found 65% of AI users fear falling behind if they don’t adapt quickly, yet only 13% say they’re rewarded for reinvention when it doesn’t immediately produce results. The same forces accelerating adoption are also holding it back.
What it all means for leaders
Every function that goes through this transition runs into the same two realities.
First, humans need to teach agents what good looks like. In code, tests are the teacher. Everywhere else, humans will be. The first agents your team deploys should be the ones where your corrections and feedback are clear and frequent. That’s how the system learns your standard.
Second, accountability doesn’t scale with delegation, so review infrastructure must. As agents produce more, the most valuable thing a team can build is a great review system—the checks, dashboards, and feedback signals that let one person stay confident across a lot of parallel work.
Match the work to the wrong pattern and there’s no feedback loop to build on and no review process that can hold. Match it correctly and both scale naturally as the system matures.
Leaders have spent years thinking about AI adoption as a progression—how far along the organization is, how quickly it can advance. The more useful frame is a map: which pattern does this specific kind of work belong in right now, given what we know, given what we’ve built, given what we can actually evaluate? That question, asked consistently across functions, is what separates organizations that are getting faster from organizations that are getting better.
What software development has given every other function is a head start, not a finished answer. The patterns are visible, but what they look like inside legal work, inside marketing, inside finance—that work is still being done. The leaders who define it won’t just be running better organizations. They’ll be setting the standard for how their entire field works with AI.
For more insights on AI and the future of work, subscribe to this newsletter.