It’s 9:40am in the office and someone just called out a “winner” from across the room.
Not in a big, dramatic way—more like a casual, “hey, this one’s up ~18% CVR.”
A few people glance over. Quick nods. Back to work.
Because if you’ve been doing this long enough, you know that number… doesn’t mean much yet.
We’ve seen too many “winning” experiments that look great in a dashboard and do absolutely nothing for the business once you zoom out.
That’s usually the moment where someone pulls up the session-level data, and the tone shifts a bit.
Different channels. Overlapping experiments. Returning users behaving nothing like first-time visitors.
The story gets messier. More real.
And more often than not, that “winner” starts to look a lot less like a win.
Let’s start with a slightly uncomfortable truth:
Most CRO teams are celebrating wins that never hit the P&L.
Variant B beats control.
Stats say it’s significant.
Dashboard turns green.
Slack message goes out.
And then… revenue barely moves.
Not because testing doesn’t work—but because most teams are measuring the wrong thing entirely.
A/B testing, as it’s typically practiced, assumes something very clean: one change, shown to a stable audience, measured in isolation from everything else you’re running.
Reality is messier.
In a live environment, users arrive from different channels, get caught in overlapping experiments, and come back behaving nothing like they did on their first visit.
So what actually happens?
Your “winning” experiment is often a product of that mess: lifted by a channel shift, entangled with another live test, or carried by returning users rather than by the change itself.
But your reporting doesn’t show that.
Because it can’t.
Most tools—and most teams—treat experiments like they exist in a vacuum.
They don’t.
If a user touches three live experiments across two sessions before converting, which experiment gets credit?
Most setups credit all of them, so the same conversion gets counted once per test.
So you end up with a portfolio of “winners”…
that don’t stack into real revenue.
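Here’s a minimal sketch of why those wins don’t add up, using made-up order data: when a converting user sat in several live tests at once, a standard per-experiment readout books the same order for each of them, so the claimed revenue across “winners” exceeds the revenue the business actually collected. The experiment names and figures below are purely illustrative.

```python
from collections import defaultdict

# Hypothetical orders: each converting user may have been exposed to
# several live experiments before buying. Names and numbers are made up.
orders = [
    {"user": "u1", "revenue": 90.0,  "exposed_to": ["exp_checkout", "exp_pdp"]},
    {"user": "u2", "revenue": 120.0, "exposed_to": ["exp_pdp"]},
    {"user": "u3", "revenue": 60.0,  "exposed_to": ["exp_checkout", "exp_pdp", "exp_banner"]},
]

# How most per-experiment readouts work: every experiment the user touched
# books the full order as "its" conversion revenue.
credited = defaultdict(float)
for order in orders:
    for experiment in order["exposed_to"]:
        credited[experiment] += order["revenue"]

claimed = sum(credited.values())                    # what the "winners" add up to
actual = sum(order["revenue"] for order in orders)  # what the business actually took in

print(dict(credited))   # {'exp_checkout': 150.0, 'exp_pdp': 270.0, 'exp_banner': 60.0}
print(claimed, actual)  # 480.0 vs 270.0: the wins sum to more money than exists
```

Each per-experiment number is defensible on its own; they just can’t be summed, which is exactly how a wall of green dashboards fails to show up in the P&L.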
Across ClickMint experiments, the pattern is consistent: there’s a gap between “test win” and “actual revenue impact,” and that gap is where most CRO programs quietly break.
This is also why most brands don’t want to run testing internally.
Not because they don’t believe in it, but because measuring it properly is genuinely hard, and measuring it badly produces wins that don’t hold up.
So testing becomes the thing that keeps getting deprioritized, even though everyone agrees it’s “important.”
The real shift isn’t better ideas.
It’s better measurement.
At ClickMint, we look at session-level behavior, channel mix, overlap between live experiments, and how returning users differ from first-time visitors.
Because the only number that matters is:
Did this generate incremental revenue when everything else was happening at the same time?
Not in isolation.
Not in theory.
In reality.
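One way to make that question concrete, as a rough sketch rather than ClickMint’s actual methodology: hold back a slice of traffic that never sees the change, let everything else run as usual, and compare revenue per user between the two groups over the same window. Field names and figures below are placeholders.

```python
from statistics import mean

# Placeholder per-user revenue over the same window, with every other
# experiment, channel push, and promotion running as normal.
exposed_revenue = [0.0, 0.0, 42.0, 0.0, 78.0, 0.0, 55.0, 0.0]  # saw the change
holdout_revenue = [0.0, 0.0, 40.0, 0.0, 0.0, 61.0, 0.0, 0.0]   # never saw it

# Incremental lift per user: revenue with the change minus revenue without it.
lift_per_user = mean(exposed_revenue) - mean(holdout_revenue)

# Scale to the audience that would see the change at full rollout (made-up figure).
rollout_users = 50_000
incremental_revenue = lift_per_user * rollout_users

print(f"lift per user: ${lift_per_user:.2f}")
print(f"projected incremental revenue: ${incremental_revenue:,.0f}")
```

In practice you’d also want enough sample and a confidence interval around that lift before calling it incremental, but the shape of the question stays the same: revenue with the change, minus revenue without it, measured while everything else was live.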
Most CRO teams don’t have a testing problem.
They have a measurement problem.
And until that’s fixed, you’ll keep shipping “winning” experiments…
that never actually make you money.