Building the hybrid workforce: what changes when AI agents join the team

When we look back at the automation programmes that have delivered the most sustained value, there is a pattern that holds across sectors, organisation sizes, and technology choices. The programmes that worked were the ones where the people doing the work were treated as co-designers, not recipients.

The ones that struggled, despite having excellent technology and genuine executive sponsorship, were the ones that announced the automation and expected adoption to follow.

The question nobody asks

In the early stages of an automation programme, leadership conversations tend to focus on the technology, the business case, and the risk framework. These are important conversations. But the conversation that is most often skipped, or held too late, or with too small an audience, is the one with the people whose work is about to change.

In our experience, the most common anxiety is not about job security. When you ask people directly and create a genuine space for honesty, job security is rarely the first thing they raise. The first thing, almost universally, is this: will I have any say in how this works?

That question matters more than most organisations realise. Because the people you are asking to work alongside AI agents are also the people who hold the process knowledge that makes good automation possible. They know where the exceptions are. They know what the system shows and what the reality is. They know which parts of the process are genuinely rule-based and which parts require judgement that is not yet captured anywhere.

If you involve them, you get better automation and you get a team that owns the outcome. If you don't, you get automation that misses the edge cases and a team that works around it.

What the hybrid workforce actually looks like

Let’s be precise about what we mean by a hybrid workforce, because the term is often used loosely in ways that obscure what is actually happening.

A hybrid workforce is not a workforce where some people have AI tools on their desks. It is a workforce where human roles have been deliberately redesigned around the capabilities of the AI agents working alongside them. The agent handles the volume, the consistency, the repetitive application of rules at scale. The human handles the judgement, the relationship, the exception, the situation that requires context that is not in the system.

That redesign is not trivial. It requires a clear view of which tasks are genuinely agent-appropriate and which require human involvement. It requires new skills: not necessarily technical skills, but the ability to manage, interpret, and override AI outputs. It requires performance frameworks that reflect the new division of labour. And it requires ongoing communication as the agent's capabilities evolve and the division of labour shifts.
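One way to picture that division of labour is as a simple triage step: routine, rule-based work stays with the agent, and anything requiring judgement or context goes to a person. The sketch below is illustrative only; the field names, the confidence score, and the threshold are assumptions, not something the article prescribes.

```python
from dataclasses import dataclass

@dataclass
class Task:
    id: str
    rule_based: bool      # is this task fully covered by a documented rule?
    confidence: float     # agent's self-reported confidence, 0.0 to 1.0
    has_exception: bool   # flagged as falling outside the normal pattern?

def route(task: Task, confidence_floor: float = 0.9) -> str:
    """Decide whether the agent handles a task or a human does.

    A hedged sketch: the 0.9 floor is an assumed tuning parameter,
    not a recommendation from the article.
    """
    if task.has_exception or not task.rule_based:
        return "human"                      # judgement or context needed
    if task.confidence < confidence_floor:
        return "human"                      # agent is unsure: escalate
    return "agent"                          # routine, rule-based, confident

# A routine task stays with the agent; an exception goes to a person.
print(route(Task("T-001", rule_based=True, confidence=0.97, has_exception=False)))
print(route(Task("T-002", rule_based=True, confidence=0.97, has_exception=True)))
```

The point of keeping the routing rule this explicit is that it can be reviewed and challenged by the frontline staff who know where the real exceptions are.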

The skills gap nobody talks about

There is a skills gap in the hybrid workforce that receives surprisingly little attention. It is not the gap between people who understand AI and people who don't. It is the gap between people who know how to supervise an AI agent effectively and people who don't.

Supervising an AI agent is a specific skill. It means knowing what normal output looks like so you can recognise when something is wrong. It means understanding the decision boundary — what the agent can do, and what it can't — well enough to catch edge cases that fall through. It means being comfortable with the idea that most of the time the agent is right, but not assuming that means it always is.

This is different from either trusting the automation blindly or overriding it reflexively. Both of those behaviours destroy the value of the automation programme. Effective human-agent collaboration requires a nuanced, calibrated relationship.
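That calibrated middle ground can be made concrete as a review policy: always inspect outputs that fall outside the expected range, and spot-check a small random sample of the rest. This is a minimal sketch; the expected range, sample rate, and field names are all illustrative assumptions.

```python
import random

def select_for_review(outputs, expected_range=(0.0, 1000.0),
                      sample_rate=0.05, seed=None):
    """Pick which agent outputs a human supervisor should look at.

    Anything outside the expected range is always reviewed (catching
    the abnormal); a random sample of the rest is reviewed too
    (guarding against silent drift). All values here are assumptions.
    """
    rng = random.Random(seed)
    low, high = expected_range
    review = []
    for out in outputs:
        if not (low <= out["value"] <= high):
            review.append((out["id"], "out of range"))
        elif rng.random() < sample_rate:
            review.append((out["id"], "spot check"))
    return review

outputs = [{"id": "A", "value": 120.0}, {"id": "B", "value": -5.0}]
print(select_for_review(outputs, seed=1))
```

The design choice worth noting: the spot-check sample is what stops "the agent is usually right" from hardening into "the agent is never checked".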

The organisations that invest in this, treating 'how to work with AI agents' as a real capability that needs to be built and maintained, consistently outperform those that assume people will figure it out. The figuring-it-out approach works for some people. It fails for enough others to materially affect the programme's outcomes.

Change management is not a communications exercise

There is a version of change management for automation programmes that consists of: send an email from the CEO, run a town hall, publish some FAQs, declare the change managed.

This approach is common and it does not work. Not because communication is unimportant, but because the hard work of change management is not telling people something is happening. It is creating genuine participation in how it happens, and genuine mechanisms for raising concerns that are actually heard and acted on.

The specific things that make a difference, based on programmes we have been part of: process discovery workshops where frontline staff are the experts in the room, not the technology team. Pilot designs that include the people who will use the automation, not just the people who built it. Clear escalation mechanisms that are genuinely accessible when something doesn't look right. And visible follow-through: when a concern is raised and acted on, making sure the person who raised it knows it was heard.

None of this is complicated.

The transition is already underway

The hybrid workforce is not a future state that organisations are preparing for. For the organisations we work with, it is already the operational reality. Agents are processing referrals, validating transactions, managing outreach, running quality checks, and handling reauthorisation workflows, alongside human teams doing the work that requires judgement, relationship, and context.

The question for most organisations is not whether to build this model. It is how well they are going to build it. And the answer to that question depends, more than anything else, on whether they involve their people in the design: not as a courtesy, but as the subject matter experts they are.

The best automation programme we have ever been involved in was one where the person who knew the process best, a team leader who had been running the same workflow for eleven years, ended up essentially redesigning her own role. She did not lose her job. She became the most effective human overseer of the agent that replaced the repetitive parts of it. That did not happen by accident. It happened because someone asked her what she thought.

Next

Governing AI agents in the enterprise: the 5 controls that actually matter