I spent a couple of days at the Converge Africa conference recently, and candidly, it was a great excuse to get out of my home office and spend time having real interactions with real people. Because good lord, when you spend most of your week flying solo at a desk with nobody but Claude, ChatGPT, Replit and whichever other AI platform you’ve dragged into your digital sweatshop, you can become wildly productive. Frighteningly productive. The sort of productive that makes you briefly believe you are a strategic genius, when in reality you are a sleep-deprived meat puppet conducting six artificial intelligences like a caffeinated wizard in Crocs.
And it works. That is the slightly annoying part.
With enough prompting, challenging, cross-platform interrogation and the occasional emotional breakdown disguised as “one more iteration,” you can get to where you need to be. You can build websites, write strategies, research unfamiliar industries, redesign workflows, test product ideas, create content, analyse data and generally accelerate what used to take weeks into something that can sometimes happen before lunch. In those moments, it feels extraordinary. You feel warm and fuzzy and quite proud of what you and half a dozen AI platforms have managed to achieve together.
And yet, by the end of the week, you can feel completely exhausted.
Not intellectually empty exactly. More relationally underfed. Like you have been sitting in a room with the world’s most capable intern, except the intern has no face, no soul, no sense of timing, and keeps telling you that your half-formed thought is “a strong and compelling direction” when what you actually need is someone to say, “Murray, that paragraph is wearing a fake moustache and pretending to be insight.”
That, for me, is the tension at the heart of AI and human capacity. AI is an incredible work partner. It is also a terrible colleague.
And that distinction matters, because the question is not whether AI can help us produce more. It clearly can. It is whether we will use that extra capacity to strengthen human capacity, or whether we will simply use it to turn already exhausted people into higher-output machinery.
From capability to capacity
At Converge Africa, that distinction became very real for me. At one point, I found myself throwing ideas and theories around for two and a half hours with a delegate I had not known until the day before. There was no neat deliverable. No action item. No automated summary politely pretending to understand the emotional architecture of the conversation. Just two people thinking out loud, challenging each other, laughing, disagreeing, and slowly finding the shape of something useful together. And being able to do that with someone else was exceptional. It was also something I realised I had desperately missed.
I went into the week thinking AI was mainly a capability question: what can it help us do? I came out thinking it is a capacity question: what does it give back, and what do we do with that space?
That is where the conversation about AI and human capacity becomes more important than the conversation about AI and productivity alone.
During the conference, two ideas kept coming up around AI. The first was the one most of us have now heard several hundred times: AI will not take your job; someone using AI will take your job. It is usually delivered with the smug certainty of a man wearing a headset microphone and chinos so tight they may legally count as compression therapy. But the second idea interested me far more: AI needs to do more of the work so humans can get back to being human.
That one stuck.
Because AI has very much arrived in all its flamboyant, occasionally brilliant, occasionally deranged glory. It is here. It is useful. It is messy. It can be magnificent. It can also confidently invent nonsense with the calm authority of a substitute geography teacher explaining blockchain to a confused otter. But despite the rough edges, it has already changed what is possible for individuals and organisations.
If you have an idea, some strategic instinct, and a rough sense of what needs to happen, AI can dramatically expand your capacity even if you lack the technical ability to do it all yourself. Personally, I have used it to build websites, shape a business, develop strategies for industries I knew very little about, support work with a major telco, redesign workflows, research more deeply than I could before, and connect ideas across books, reports and lived experience at a speed that would previously have required either a team of analysts or a suspiciously well-funded cult.
At the moment, I have more than 15 active projects running through ChatGPT alone: resilience research, developer projects, finance management, holiday planning, content development, product thinking, business design and the occasional deeply important domestic question like “how much wood can I put in this fireplace before I accidentally recreate the final act of Backdraft?”
So I am not anti-AI. Not even slightly.
I am deeply, enthusiastically, sometimes problematically pro-AI. But I am also increasingly convinced that the conversation around AI is still too obsessed with output and not nearly interested enough in human capacity.
Because AI does not automatically make us more resilient. In badly designed systems, AI may simply help us burn out faster, with better formatting.
The double-edged sword of AI and human capacity
This is where the business conversation gets important. The reflexive response to AI in many organisations will be: “Wonderful, this saves people time. Therefore, they can now do more.” That sounds logical until you remember that most people were already drowning in email, meetings, Slack, Teams, WhatsApp, admin, reporting, dashboard archaeology, performance reviews, project updates, stakeholder wrangling and the daily corporate ritual of pretending the phrase “just circling back” has not caused a small part of their spirit to leave through the nearest air vent.
If AI saves someone four hours, the opportunity is not automatically to give them six more hours of work.
That is not innovation. That is taking a system that is already gasping for air, stapling a jet engine to its back, and congratulating yourself on the increased velocity.
This is the double-edged sword of AI and human capacity. On one side, AI can give people back capacity for the work only humans can do: thinking, judging, connecting, creating, questioning, caring and making sense of complexity. But if every bit of capacity created by AI must be immediately filled with additional productivity, then the strain does not disappear. It simply moves. People remain overloaded, only now with better tools and less permission to admit they are at capacity.
And when that happens, they will have no choice but to use AI for the human work too, not because they are lazy, but because they no longer have the energy for it.
That is where the real danger sits. No new thinking. No deeper connection. No better judgement. Just more AI slop, fired into the world at industrial speed by exhausted people with inboxes full of “quick asks.”
The real opportunity is to ask what recovered human capacity should be used for. Some of it should absolutely go into better output. Faster workflows. Cleaner analysis. Stronger execution. Better service. More useful products. Fewer hours sacrificed to the kind of admin that makes intelligent adults stare into the middle distance like a Victorian ghost trapped in a filing cabinet. But not all of it.
Some of that human capacity has to be protected for the work that only humans can do: better thinking, better conversations, better decisions, better leadership, better creativity, better recovery, better judgement and better human connection. That is where resilience comes in.
Resilience is not simply the ability to take on more. It is the ability to keep functioning, adapting and making good decisions under pressure without slowly turning into a haunted spreadsheet with a blood pressure problem. It requires cognitive capacity, emotional capacity and relational capacity. It requires the ability to pause, notice, challenge, connect, recover and make sense of what is actually happening.
AI can support human capacity, but only if we design work differently around it.
The companies that get real value from AI will not simply be the ones that hand everyone a premium subscription and tell them to “go innovate.” That is not a strategy. That is a corporate trust fall into a ball pit full of compliance risks. The stronger organisations will be the ones that redesign workflows, build clear guardrails, train people properly, track what is actually improving, and decide where human judgement must stay firmly in the loop.
Because AI is not magic. It is leverage. And leverage cuts both ways.
Used well, AI can remove unnecessary drag, accelerate the work that should be accelerated, support better thinking and reduce the amount of time humans spend doing work that slowly turns their brains into microwaved porridge. Used badly, it creates more noise, more slop, more shallow thinking, more generic content, more false confidence and more pressure to keep producing because now production has become easier.
That is one of the strange risks of this moment. We are overloaded with information already. We are drowning in misinformation, disinformation, content, commentary, hot takes, automated outreach, synthetic thought leadership and posts that sound like they were written by a leadership coach trapped inside a hotel conference lanyard. AI is going to increase the volume, speed and polish of all of it.
Which means the human skill is no longer just producing. It is discerning.
It is knowing what matters. It is questioning what is true. It is understanding context. It is spotting when something feels emotionally persuasive but intellectually weak. It is knowing when the machine has produced something useful, and when it has produced a confident little lasagne of nonsense.
The more artificial intelligence we bring into work, the more valuable real humanity becomes.
That sounds soft, but it is not. It is commercial. It is strategic. It is operational.
If everyone has access to similar tools, the advantage does not come from having AI. The advantage comes from how well your people use it, what they use it for, what your systems allow them to do with the capacity it creates, and whether your culture is mature enough not to immediately punish efficiency with more workload.
The balance is the point
This is especially important for leaders. Giving people AI without redesigning the system around them is not empowerment. It is a trap with a better user interface. If your organisation is already a flaming shopping trolley of unclear priorities, weak accountability and meeting addiction, AI will not save you. It will simply make the trolley faster.
If the workflow is broken, AI accelerates the broken workflow. If accountability is unclear, AI produces faster confusion. If decision-making is slow, AI generates more material for people to ignore while they wait for permission. If trust is low, AI becomes another surveillance-shaped thing people quietly resent. If people are already overloaded, AI may simply disguise the overload for a while before the system cracks somewhere more expensive.
That is why the question should not only be, “How do we use AI to increase output?” but also, “How do we use AI to increase useful human capacity?”
That changes the conversation. It asks where people are wasting time. It asks where cognitive load is unnecessarily high. It asks which tasks should be automated, which should be augmented, and which should remain deeply human. It asks where decision-making is stuck, where people need clearer boundaries, and whether AI is helping the system become healthier or merely helping it run hotter.
Because hotter is not always better.
Sometimes hotter is just the final warning light before the engine turns itself into a very expensive soup.
For individuals, the same principle applies. Use AI. Play with it. Throw ideas at it. Ask it to challenge you, structure your thinking, summarise, compare, simplify, expand, interrogate and improve. Use it to get moving when you are stuck. Use it to lower the activation energy of difficult work. Use it to take the awful blank-page goblin and punt it into the nearest ravine.
But do not outsource the human work.
Do not let it do all your thinking. Do not let it flatten your voice. Do not let it replace your judgement. Do not let it remove the discomfort that comes with learning, wrestling, shaping, editing and actually becoming better. There is growth in that friction. There is resilience in that struggle. If AI removes every difficult edge, we may become more efficient while quietly becoming less capable.
This is why I still write my first drafts myself. Often in Microsoft Word, which now feels less like software and more like a heritage site. I use AI heavily afterwards. I use it to challenge, sharpen, structure, review and improve. I even created a kind of “Board of Directors” to give candid feedback on drafts from different perspectives. But the base layer still needs to be mine. The first pass needs to come from my own thinking, my own experience, my own irritation, my own jokes, my own odd little pile of metaphors involving Land Rovers, otters and corporate lasagne.
Because that is the human layer. And the human layer is the point.
AI can help us build faster, but it cannot tell us what is worth building. It can help us communicate faster, but it cannot care on our behalf. It can help us produce more, but it cannot decide whether more is actually what the system needs. It can simulate empathy, but it cannot feel concern. It can generate stories, but it has not lived one.
So the resilient path forward is not to reject AI. That would be ridiculous. It would be like refusing to use electricity because candles have “better energy.” The resilient path is to integrate AI deliberately, intelligently and humanely.
For businesses, this means taking a hard look at how AI budgets are being allocated. Throwing everything at technology while underinvesting in people is shortsighted. People need training, trust, governance, clear use cases, ethical boundaries, workflow redesign and the psychological safety to experiment without feeling like they are one missed prompt away from being replaced by a chatbot called Gavin.
For individuals, it means learning the tools while protecting the parts of yourself that make the tools valuable in the first place: your judgement, curiosity, empathy, creativity, humour, lived experience, relationships and ability to make meaning from the mess. This, ultimately, is the balance at the centre of AI and human capacity.
AI is here. It is powerful. It is useful. It is not going away. But we now have a choice in how we use the capacity it creates.
Some of that capacity should absolutely go into better output: faster workflows, cleaner analysis, stronger execution, better service, more useful products, less time lost to soul-sapping admin. But not all of it.
If every saved hour becomes another task, another deliverable, another metric, another little productivity brick added to the backpack of already tired people, then AI has not created resilience. It has simply increased the load-bearing expectations of the human system.
The balance is the point.
Use AI to produce more where more genuinely matters. But protect enough human capacity for the work only humans can do: thinking, caring, connecting, judging, creating, questioning, recovering and making sense of complexity.
One path treats AI as a way to make the machine louder, faster and hungrier.
The other treats AI as a way to make the whole system stronger.
That is where the resilience sits.
Visit my new product to find out how to better manage this: Antistable – Sustain performance in a world that doesn’t slow down.
