AI-First or Customer-First?
The Gap Nobody Wants to Admit
I want to start with a number I read this morning, because this one deserves to land before anything else.
Sixty-six percent of CX practitioners believe their customer experience improved last year.
Seventeen percent of customers agree.
Read that again.
Nearly two-thirds of the people building and managing customer experience believe it’s getting better.
Less than one in five of the people actually living it feel the same way.
This data comes from Medallia’s 2026 State of Customer Experience report, and I haven’t been able to stop thinking about what it really means. Not the technology implications. Not the strategic implications. The cultural ones.
We have created organizations that are extraordinarily good at convincing themselves they’re doing well.
The Comfort Program Problem
Here’s what the data also shows: organizations are pouring resources into AI and CX technology, yet 30 to 40 percent of departments take no meaningful action on the critical customer insights they receive. The insights arrive. Someone presents them. They sit in a silo. Customers quietly leave.
Only one in three CX teams has visibility into the full customer journey. The rest are optimizing pieces of an experience they cannot actually see.
What we’ve built, in many cases, is not a customer experience program. It’s a comfort program — a system designed to generate enough internal metrics to feel good about work that customers aren’t experiencing as better.
I’ve been in the business advisory and CX space long enough to know how this happens. It’s not cynicism. It’s not malice. It’s the organizational tendency — one I write about in my new book, “Beyond Distinction” — to measure what’s easy to measure and then mistake measurement for improvement.
The dashboard gets greener.
The presentation gets applause.
The customer gets the same broken experience, and now with a new chatbot in front of it.
The AI Acceleration
Now layer in the current AI investment cycle.
More than 80% of CX practitioners in the Medallia study report positive returns from AI. Most see AI-equipped frontline employees as essential to their 2026 success. The investment is real. The enthusiasm is genuine.
And the customer signal is getting clearer too: automation works for simple, transactional interactions. But when something goes wrong — when money is involved, when emotion is involved, when a customer is vulnerable or frustrated — they want a human.
Not a better bot. A person.
They’re also notably less forgiving when AI gets it wrong than when a human does. We extend grace to people. We assign blame to systems.
Yet the industry trend is unmistakably toward heavier automation, often regardless of what customers have said they want.
I want to be precise here, because the technology itself is not inherently the problem. There are platforms genuinely architected to expand human capacity — where AI handles volume so that people can handle meaning. That’s a legitimate vision, and some organizations are executing it well.
The problem is the translation error that happens in the boardroom.
Leaders hear “AI-first” and unconsciously process it as “customer-first.” And those are not the same strategy. They can align. They often don’t.
What the Gap Is Really Telling Us
That 49-point perception gap — 66% versus 17% — is not primarily a measurement problem or a communication problem. It’s a humility problem.
Organizations have become very good at building the case for their own progress. Less good at staying genuinely curious about whether customers feel it.
The most distinctive organizations I’ve encountered share a discomfort with their own success stories. They treat internal metrics as hypotheses, not verdicts. They go back to the customer to find out if the hypothesis holds. And when it doesn’t, they say so internally before a competitor says it publicly.
That posture — honest, a little unsettled, genuinely curious — is increasingly rare. And increasingly valuable.
Three Questions for Leaders Who Want to Close the Gap
Before the next AI initiative gets funded. Before the next CX roadmap gets locked. Three questions worth genuine reflection:
Are we listening to our customers or to our own confidence?
When your team is four times more likely than customers to believe the experience improved, which reality is actually driving your decisions?
Is our AI serving the customer, or is the customer serving our AI?
There’s a real difference between automation that removes effort the customer feels and automation that removes cost the company sees. One builds loyalty. The other erodes it — slowly, then quickly.
Can we prove the experience is better on the customer’s terms?
Not in internal metrics. In outcomes customers actually recognize: time saved, problems solved on first contact, moments of genuine human care when it mattered most.
The Simple Test
I’ve started suggesting this to every leader I work with.
Before you greenlight the next initiative — AI or otherwise — ask one question and answer it honestly:
Would I want to be the customer for this experience?
Not as a thought experiment. As a real answer. If you encountered your own company’s process on the worst day of your month — frustrated, short on time, something actually at stake — would you feel taken care of?
If the honest answer is no, you don’t face a technology problem. You face a priorities problem. And the most sophisticated AI stack available won’t fix a priorities problem. It will only execute it faster and at greater scale.
Your 2026 mandate isn’t “add more AI.”
It’s this: stop letting internal confidence substitute for customer truth.