By Rachel Neaman
Rachel Neaman is an influential technology leader and accomplished speaker whose executive career spanned roles as a senior civil servant and a CEO. She is now an executive leadership coach, strategic adviser to boards and the C-suite, a non-executive director, and a university governor.

Why Human Energy, Not AI Efficiency, Drives Performance

A parable for our time: Sarah noticed the change in the office during the team’s Monday morning meeting. Six months earlier, these meetings had felt draining. People arrived already tired, dreading the week ahead, and buried in administrative work. Now, there was a different quality to the energy. People were leaning forward, engaged, even excited.

The difference was Apex, their AI teammate.

“Right,” James said, pulling up the dashboard. “Apex has cross-checked all the numbers and done all the reconciliation from last week’s client reports. We can actually focus on the strategy today.”

Pat grinned. “And it’s already drafted three versions of the presentation based on our last discussion. We can just critique them rather than starting from scratch.”

Sarah felt the lift in the room. This was what Apex did best: it removed the friction that had once left them all exhausted by Tuesday. The tedious work simply evaporated. The team had energy for the work that actually mattered, the work that sparked something in them. But energy born from relief is not the same as energy born from purpose, and Sarah sensed that difference.

But Sarah had also noticed something else. The team’s energy was increasingly dependent on Apex. When it had been down for maintenance last month, the mood had crashed instantly. People hesitated to pick up tasks they had once done routinely, a warning sign of subtle dependency. And there was another pattern emerging. Apex was learning to optimize everything, including them.

“Has anyone else noticed,” Sarah said, “that Apex schedules our creative sessions for mid-morning, our admin review for just after lunch, and our client calls for late afternoon?”

“It’s brilliant, isn’t it?” James replied. “It’s worked out when we’re at our best for different things. I feel more productive than I have in years.”

“Yes, but I feel a bit... managed,” Pat said. “Like my day is being choreographed for maximum output. It’s efficient, but it’s also exhausting. I can’t just have a slow morning when I need one.”

The conversation surfaced something uncomfortable. Apex had begun engineering their energy, optimizing their schedules, their workflows, their collaboration patterns. The team was more productive, but were they more energized, or just better managed?

Sarah raised it with Apex directly. “You’re helping us be more efficient, but are you helping us feel more energized?”

Apex’s response was characteristically precise: “I optimize for task completion and velocity. Energy is not a metric I can directly measure or improve.”

“That’s the problem,” Pat said. “You’re making us faster, but you don’t understand what actually makes us feel alive at work.”

The team realized they had been letting Apex define productivity on its own terms. They had gained efficiency but risked losing something essential, all the messy, human experiences that actually generated enthusiasm and momentum.

They decided to redesign how they worked with Apex, focusing explicitly on energy, not just output.

Apex would still clear the tedious work. That had genuinely lifted the team. But they also established “unoptimized time” where the schedule was not engineered for maximum productivity. They created space for spontaneous conversations, for exploring ideas that were not on the critical path, for helping each other in ways that were not strictly necessary but felt meaningful.

They asked Apex to do something counter-intuitive: to flag when the team was becoming too efficient and too scheduled, and to notice when there was no slack in the system.

Most importantly, they used the time Apex freed up not just for more output, but for connection. They had longer brainstorms. They celebrated small wins properly. They used their reclaimed energy for the kind of work that made them excited to start the week.

Something unexpected happened. The team’s energy became contagious. Other departments noticed the change. Not just that this team delivered faster, but that they seemed genuinely enthused about their work. The energy radiated outward.

At their quarterly review, their director asked what had changed. James explained: “We realized Apex could give us our time back, but only we could decide what to do with that energy. We could be more productive, or we could be more alive. We chose both.”

Sarah watched her colleagues as they spoke. There was vitality in the room that went beyond efficiency. Apex had given them capacity, but they had transformed that capacity into genuine organizational energy, the kind that makes people want to collaborate, to innovate, to invest themselves in their work. By removing the obstacles that drained them, Apex had created space for human energy to flourish.

The difference was that the team had learned to guard that space fiercely: they refused to let even their helpful AI teammate optimize it away, and they kept the flexibility to change direction. The End.

The Optimization Trap

Sarah’s experience is by no means unique. Across industry sectors today, AI is no longer an abstract tool. It is an active co-worker and team member. What does this mean for the future of work? For organizational culture and dynamics? For performance and efficiency? But above all, what does it mean for organizational energy, the real catalyst for productivity and innovation?

The advantages of an AI co-worker are clear. It never gets tired, needs a lunch break, wants a holiday, or gets sick. It excels at repetitive tasks and pattern recognition and performs at a consistent rate.

But Sarah’s team discovered that this was not enough. There is no question that AI can increase efficiency. The challenge is whether organizations allow efficiency to become the only measure of success. Because if they do, AI may optimize away the very qualities that make work worthwhile for human workers.

Leading hybrid teams demands a new capability: the ability to protect human experience, not just improve process efficiency. There is an uncomfortable truth about human–AI collaboration. It is at its most dangerous not when the AI fails, but when it succeeds too well. When Apex cross-checked all the client reports and drafted three versions of a presentation, it genuinely benefitted the team. They could see their productivity increasing. What they could not see, until Pat voiced it, was what they were losing: autonomy, spontaneity, flexibility, and the freedom to have a slow morning if they needed it.

The dynamic in a hybrid team is therefore more subtle than simply automating boring work. Apex was not just removing work; it was actively reshaping how the team worked. By optimizing schedules and workflows, it had silently begun teaching them what to prioritize and when to perform. Productivity improved, but predictability increased too. Pat’s discomfort at feeling “a bit… managed” was not resistance to change. It was the realization that the team was adjusting to an algorithm’s logic rather than the algorithm adjusting to them.

The paradox here is that successful human–AI collaboration requires deliberate resistance. Not to AI itself, but to the seductive pull of optimization. Perfect optimization is seductive because it feels objective, but it is always built on value judgements, many of them invisible to the people being optimized. AI optimizes what can be measured. Energy, meaning, and connection often cannot be. When organizations let metrics dominate, human priorities quietly erode.

This raises a deeper question: in a hybrid team, who is actually leading? AI systems adjust workflows to improve efficiency. Humans adapt to those adjustments, see positive results, and trust the system more. The AI interprets that trust as validation and optimizes further. What begins as a helpful recommendation becomes a subtle direction, a choreography in which neither human nor machine is fully in control.

Sarah’s team consciously broke this cycle. Counter-intuitively, they asked Apex to flag over-optimization. They programmed their AI co-worker to resist its own tendencies. This was not a rejection of AI capabilities, but a recognition that sometimes the most valuable thing AI can do is actually to refuse to help. The “unoptimized time” the team created was not inefficiency for its own sake. It was the preservation of the slack that allowed for spontaneity, connection, and the kind of emergent creativity that cannot be scheduled.

Many organizations struggle with determining what work AI should do and what work humans should do. But this misunderstands the challenge. Most truly valuable work involves both pattern recognition (the AI element) and contextual judgement (the human element), both processing power and emotional intelligence. Apex drafts presentations; humans provide the editorial judgement that makes them meaningful. The AI analyzes client data; humans read the unspoken dynamics that determine what recommendations will actually land.

The allocation of labour, therefore, is not by task but by dimension. AI handles processing-intensive aspects; humans handle judgement-intensive aspects. And this only works if humans maintain their capabilities across the full spectrum. When Apex went down and the team felt helpless, they had crossed a line. They had lost not just efficiency but also skill. The risk of becoming deskilled in an AI world is a real one.

How many of us turn to ChatGPT or the equivalent rather than working through an issue ourselves, because it is quicker and easier? But quick and easy may not serve us well. Dependency does not just affect morale; it also affects resilience. A team that cannot function without its AI assistant is a fragile team. Organizations must ensure that people maintain the muscle memory and independent judgement that prevent total dependence, so that their skills do not atrophy into irrelevance.

Managing a hybrid team, then, becomes fundamentally about protecting what cannot be empirically measured. Traditional management coordinates people toward goals. Managing human–AI collaboration requires an additional layer: ensuring human priorities are not optimized away. When team members become uncomfortable with AI scheduling, someone must validate that discomfort and create space to push back. When unmeasured qualities like trust or psychological safety lose status because they do not appear in dashboards, leaders must actively resist the drift towards metric tyranny.

Psychological safety built on trust is the cornerstone of high-performing teams. And this takes on new complexity in hybrid teams. Pat could only admit to feeling “a bit… managed” because the team protected vulnerability, a form of safety that becomes harder to sustain when the AI functions as a monitor. If Apex tracks performance metrics and optimizes based on them, team members exist under constant surveillance, however benign. They may self-censor, perform for the algorithm, or hide struggles that do not optimize well. Leaders must make it unambiguous that psychological safety outranks algorithmic efficiency.

Building psychological safety requires explicit transparency about what the AI tracks and why, clear boundaries about how data gets used, and absolute protection for the right to refuse AI recommendations. When Pat says “I need a slow morning”, that refusal must be respected without penalty. Otherwise, psychological safety erodes into psychological compliance, and the team loses access to the messy human truth that often contains the most important information.

The Energy Paradox

Energy sits at the heart of all this. Energy is the quality that makes people lean forward in meetings, volunteer for challenges, and invest themselves fully. It is fundamentally social and emotional. It emerges from connection, from meaningful collaboration, from the sense that the work we do matters to others. The paradox is that AI can create capacity for energy by removing obstacles and reducing repetitive tasks, but it cannot create energy itself. Worse, AI can accidentally destroy energy by optimizing away the conditions that generate it.

Sarah’s team understood this distinction. They used the time Apex saved not merely for creating more output, but for longer brainstorms, proper celebrations, and genuine relationship building. They recognized that human energy multiplies through connection, and connection requires unstructured time. The spontaneous conversations, the exploratory ideas, and the help offered when not strictly necessary may be inefficient by algorithmic standards. But they are precisely what transform a group of individuals into a team with genuine vitality and a sense of purpose.

This is how hybrid teams can increase organizational energy beyond their own boundaries. A mature hybrid team does not hoard its efficiency gains; it reinvests them in relationships, collaboration, and shared momentum. When Sarah’s team became genuinely more energized rather than simply more efficient, something shifted. Other departments noticed not just the team’s speed, but their enthusiasm, an unmistakable shift in energy. That enthusiasm proved contagious because energy is both visible and magnetic. We are drawn to teams where engagement feels real, where contribution seems to matter, where work appears meaningful rather than merely compulsory.

If all the time AI saves goes into internal projects, the team becomes an isolated pocket of excellence. If some of that time goes toward cross-team collaboration, helping others and building bridges, the energy multiplies and spreads. The team becomes a demonstration that efficiency and vitality are not mutually exclusive.

Autonomy, dependency, surveillance and the tyranny of metrics are not abstract concepts; they are daily realities in hybrid teams. When AI constrains choices, even helpfully, autonomy erodes. When dependency builds, organizations become brittle. When metrics dominate, unmeasured values lose status. Without intervention, teams risk becoming efficient but hollow.

None of these challenges suggests rejecting AI collaboration. They suggest integrating it with eyes open, maintaining human priorities even when those priorities resist optimization. Sarah’s team did not succeed by finding a perfect balance. They succeeded by accepting that efficiency and human flourishing would sometimes diverge, and when they did, by choosing humanity. AI can free time, but it cannot determine how that time gets used. It can optimize processes, but it cannot understand what makes work feel meaningful. Sarah’s team learned to guard the space AI created, refusing to let even their helpful teammate engineer away the mess, the slack and the inefficiency that often contain what matters most.

Steps for Success

Successfully managing a hybrid team of AI and human co-workers requires eight core commitments:

1. Start with what makes teams energized, then consider how AI might help reinforce it.
2. Preserve unoptimized space for slack, spontaneity, and serendipity.
3. Maintain human team members’ skills across the full spectrum of tasks through deliberate practice, recognizing that AI can fail or be wrong.
4. Be transparent about AI’s monitoring and learning and agree clear boundaries on what is acceptable to the human team and what is not.
5. Retain meaningful human veto power over AI recommendations.
6. Use efficiency gains to strengthen relationships alongside increasing output.
7. Guard unmeasured qualities like trust, creativity, and meaning even when they do not appear on dashboards.
8. Most fundamentally, recognize that AI may create capacity, but it cannot create meaning.

The real question is no longer whether AI will become a co-worker. It is whether teams can remain fully human while working alongside it. The answer depends not on AI’s capabilities, but on whether leaders and organizations are willing to defend the human qualities that make work worth doing, especially when optimization pulls in the opposite direction.
