The Question We Keep Getting Wrong About AI
Blog by Gary Lancina
Every conversation about AI eventually lands in the same place. Someone poses a version of the question: where do humans still matter? Where do we still win?
The interest is understandable. The technology is accelerating fast, and folks want to know where we stand. The framing of these questions limits our thinking, though. It treats the topic as a competition. A displacement event. A contest with a winner and a loser.
This is the wrong frame.
The better question isn't where or how humans survive AI. It's how humans and AI best leverage one another. Addressing this question with clarity will separate the organizations that thrive in the next decade from those that stumble through it.
The 80% Problem
On a recent episode of The Fault Line, Pat Fitzgerald made a point that deserves further reflection. AI raises the waterline. It gets almost everyone to roughly 80% of their goals faster, with less effort, and with fewer domain-specific barriers than before.
A deck that once took a week can be assembled in hours. An analysis that required a specialist can be drafted by a generalist. Content, code, summaries, frameworks: the 80% solution is increasingly available to everyone.
This all sounds like good news. And on one level, it is. But here's the catch: if everyone has access to the 80%, then the 80% is no longer your edge. It's table stakes. The baseline has risen for you and for every one of your competitors simultaneously.
This is a critical strategic reality, and most organizations haven't yet reckoned with it. Efficiency gains from AI are real and worth pursuing, but efficiency is not differentiation. Speed is not advantage when everyone is going faster.
The advantage in our AI world lives in the last 20% of our efforts. This last 20% is irreducibly human.
Where the Failure Modes Diverge
To understand why, it helps to look at how humans and AI fail. They fail in very different ways.
AI is, in certain respects, remarkably capable. Pattern recognition at scale. Processing and synthesis across enormous datasets. Tireless iteration. Speed that humans simply cannot match.
However, AI is also persistently overconfident. It will provide a well-structured, plausible-sounding answer and have no awareness that the answer is wrong. It cannot read the room. It doesn't notice that the CFO just tensed up, that the energy shifted when a certain topic came up, or that the timing on a recommendation is politically poisonous even if it's analytically correct. It doesn't carry the weight of lived experience that tells you to pause before you speak, or to ask a question instead of offering a solution.
When AI makes a mistake in a generative context, you see it. You catch it. You move on. When AI makes a mistake inside an agentic workflow managing, say, an email campaign to 500,000 customers, the error compounds before anyone notices. It's a meaningful difference. One is a typo. The other is a crisis.
Humans fail differently. We get bored with repetition. We carry bias from yesterday's experience. We can only hold so much in working memory at once. We tire, we rationalize, we miss patterns in large datasets that a machine would catch in seconds.
The point isn't that one is better. The point is that the failure modes don't overlap much. Which means the combination can be more capable than either alone.
What the Research Confirms
MIT's Center for Collective Intelligence reviewed more than 100 studies on human-AI collaboration and published findings in Nature Human Behaviour. One of the illustrative data points involved classifying bird species from images, a task requiring genuine specialized expertise. Humans alone achieved 81% accuracy. AI alone hit 73%. The human-AI combination reached 90%.
That's not a marginal improvement. That's a different category of result. It was achieved specifically because the human experts and AI brought different capabilities to the task, covering each other's blind spots, and triangulating toward a better answer.
The same research found that synergy between humans and AI isn't automatic. Working together doesn't guarantee a better outcome. The combination outperforms when humans understand what AI is good at and where it runs off the rails, and when they bring genuine judgment to the collaboration rather than deferring to whatever the machine produces.
Which brings us back to the real risk.
The Inversion Problem
In working with clients and leaders across industries, one failure mode concerns me more than any other in this AI moment. People can and do inadvertently flip their relationship with ChatGPT or Claude or Gemini. They outsource their thinking to AI and then apply their own energy to the tasks that AI could handle better.
These individuals ask the machine what strategy they should pursue and then go do their own expense reports. They let AI generate the insight and settle for curating the output. They skip the hard cognitive work of forming a point of view and trust the machine to do it for them.
That's the inversion. It's dangerous not because AI's answer is necessarily wrong, but because it trades away the things that make a human-AI combination so valuable: our judgment, our context, our taste, our understanding of the people in the room.
Pat put it plainly in the Fault Line conversation. AI can generate a lot of good-looking things. It can get us to 80%. But the last 20%—the part that involves understanding what customers actually need, reading the organization, or making the call that isn't in the data—that's where people not only hold value, but where we can up our game.
AI won’t counsel, "don't say that thing you're about to say, because the CFO is going to blow a gasket." It can't look at the body language across the table and adjust. Kim Scribner noted in the same Fault Line episode that her job as an observational coach feels safe from AI disruption for exactly this reason. The real-time human dimension of organizational life is one place AI isn't going.
A Better Framework for the Work
So what does this mean in practical terms?
Think of complementary roles for AI and people rather than competitive ones. Assign AI the work it's genuinely better at: synthesis at scale, pattern recognition across large datasets, rapid iteration, handling the repetitive and complex tasks that don't require human judgment in the moment. Free team members to do the work only they can do: connecting with customers, making calls that require organizational wisdom, exercising the taste and creativity that creates genuine differentiation.
When we build strategy, let AI assemble the analytical scaffolding. Then bring the judgment. Interrogate the output. Ask what the machine missed. Use AI to red-team your own thinking before you take it into a boardroom.
Kim made a further point: don't just measure efficiency. Measure effectiveness. Doing more things faster creates more noise if you aren't pointed at the right things. AI can help you execute faster. Whether you're executing toward the right outcome remains a human call.
And when it's time to understand customers, engage with them directly. AI can aggregate and analyze what's already been said. The novel, the emerging, the unarticulated need: that isn't sitting in an AI training set. It surfaces in new, exploratory conversations.
The Frame Worth Keeping
The organizations that will pull away in this era aren't the ones that use AI the most. They're the ones that understand what their people bring to the collaboration that AI genuinely cannot replicate.
Judgment developed over years of hard-won experience. The ability to read a room and adjust in real time. Creativity that generates something genuinely new rather than a synthesis of what's come before. The savvy to recognize when 80% isn't good enough and the discernment to push through to something better.
These aren't soft skills. They're competitive assets.
The question was never where humans “still win.” The question is how humans activate the full value of AI collaboration. Answer this one, and the outcome improves for everyone in the room.
Gary Lancina is a Principal at CMG Consulting, where he advises senior organizational leaders on strategy, leadership alignment, and go-to-market excellence. He co-hosts The Fault Line podcast with Kim Scribner. Listen to the full conversation that inspired this article here.