The metric that's lying to your face

Why conversion rate alone is a dangerous north star

I was on a call last month with a brand doing ~$20M in revenue. Smart team. Good instincts. They'd been running their own testing program for two years.

They pulled up their dashboard and showed me 50+ tests they'd shipped. A good number of them were "winners." Conversion rate had climbed from 2.8% to 3.4%.

I asked one question: "What happened to your average order value during that same period?"

“Uh, yeah…”

AOV had dropped 12%.

All those "wins"? They'd been training their site to convert more people at lower cart values. A discount banner here. A simplified page that removed comparison features there. More conversions, sure. But less money per conversion.

Their revenue was basically flat. Two years of "winning" tests and they had nothing to show for it.

The conversion rate trap

Here's what happens. You run a test. Conversion rate goes up 5%. Everyone high-fives. You ship it.

Two months later, finance notices revenue is flat. Or worse, it's down.

The test drove more purchases, but at a lower average order value. The urgency messaging attracted deal-seekers. The simplified flow removed an upsell step that was actually working.

I see this constantly.

Conversion rate tells you one thing: did more people buy? It doesn't tell you if those purchases were worth having. It doesn't tell you if AOV went up or down. It doesn't tell you if people bought more items or fewer.

It's one number doing the job of three.

Why we use Revenue Per Visitor

At Surefoot, the one metric we’re always watching is Revenue Per Visitor (RPV). Not conversion rate. Not AOV.

The math is simple: total revenue ÷ total visitors.

But it captures something conversion rate can't: the actual economic value of each person who hits your site.

RPV accounts for three things at once:

  • Whether visitors are converting (conversion rate)

  • How much they're spending (average order value)

  • How many items they're buying (units per transaction)

When RPV goes up, you know you're making more money per person who walks through the door. Not just getting more people to buy something.
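If it helps to see the relationship, here's a quick sketch with made-up numbers. RPV is just conversion rate times AOV, which is exactly why a test can nudge conversion rate up while quietly dragging RPV down:

```python
# Back-of-the-napkin example with made-up numbers.
visitors = 100_000
orders = 3_000            # 3.0% conversion rate
revenue = 240_000         # $80 average order value

conversion_rate = orders / visitors   # 0.03
aov = revenue / orders                # 80.0
rpv = revenue / visitors              # 2.40

# RPV is just the two multiplied together:
assert abs(rpv - conversion_rate * aov) < 1e-9
```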

When a test lifts conversion rate but RPV is flat or negative? It may not be the winner you need.

That one rule will save you from shipping dozens of bad decisions.

How to actually use this

Here's how we implement it. Steal this.

Break RPV into its components to understand the "why."

When a test moves RPV, decompose it. Did conversion rate go up? Did AOV go up? Both? Understanding the driver tells you what's happening with user behavior, not just that something changed.
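Here's a rough sketch of that decomposition, assuming you can pull visitors, orders, and revenue per variation out of your testing tool or analytics export. The numbers are invented to show the pattern:

```python
# Hypothetical test readout: visitors, orders, revenue per variation.
variations = {
    "control": {"visitors": 50_000, "orders": 1_400, "revenue": 126_000},
    "variant": {"visitors": 50_000, "orders": 1_550, "revenue": 124_000},
}

for name, v in variations.items():
    cr = v["orders"] / v["visitors"]
    aov = v["revenue"] / v["orders"]
    rpv = v["revenue"] / v["visitors"]
    print(f"{name}: CR {cr:.2%}  AOV ${aov:.2f}  RPV ${rpv:.2f}")

# control: CR 2.80%  AOV $90.00  RPV $2.52
# variant: CR 3.10%  AOV $80.00  RPV $2.48
# Conversion rate is up ~11%, but AOV fell enough that RPV went backwards.
```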

Segment RPV by traffic source and device.

Your paid search visitors have completely different intent than your organic social visitors. Blending them into one RPV number hides what's really going on. We cut by:

  • Device (mobile vs. desktop — these are basically different websites)

  • Traffic source (paid, organic, email, direct)

  • New vs. returning visitors

This is where real insights can live. You might find a test lifts RPV for returning visitors by 8% but hurts new visitors by 3%. That changes your entire rollout strategy.

(One note here: don't get caught in the trap of slicing down to an audience that shows a “win.” You can always find one, and it will almost always be false.)
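If you have a visitor-level export (one row per visitor, revenue of zero for non-buyers), the cut itself is trivial. This is a minimal sketch; the column names and toy rows are placeholders for whatever your stack produces:

```python
import pandas as pd

# Illustrative visitor-level rows: one row per visitor, revenue 0 for non-buyers.
# In practice this would come from your analytics / testing-tool export.
visits = pd.DataFrame({
    "variation": ["control", "control", "variant", "variant", "variant", "control"],
    "device":    ["mobile",  "desktop", "mobile",  "desktop", "mobile",  "mobile"],
    "source":    ["paid",    "email",   "paid",    "email",   "organic", "organic"],
    "revenue":   [0.0,       95.0,      82.0,      0.0,       0.0,       110.0],
})

segments = (
    visits.groupby(["variation", "device", "source"])["revenue"]
          .agg(visitors="count", rpv="mean")   # mean revenue per visitor == RPV
          .reset_index()
)
print(segments.sort_values(["device", "source", "variation"]))
```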

Set RPV benchmarks by page type.

Your PDP has a different RPV profile than your collection page, which is different from your homepage. Track RPV by page type so you know where the biggest opportunities are.

If your collection page RPV is 40% lower than your PDP RPV? That tells you there's a discovery problem. People are landing on collections and not finding what they want.

Use RPV to prioritize your roadmap.

A 5% RPV lift on a page getting 200K monthly visitors is worth way more than a 15% lift on a page getting 10K. This math should drive your roadmap. Not gut instinct. Not whoever yells loudest in the meeting.
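The napkin math, assuming (purely for illustration) a $2.50 baseline RPV on both pages:

```python
# Rough annualized impact of the two hypothetical lifts above,
# assuming a $2.50 baseline RPV on both pages (made-up number).
baseline_rpv = 2.50

big_page   = 200_000 * 12 * baseline_rpv * 0.05   # 5% lift, 200K visitors/month
small_page =  10_000 * 12 * baseline_rpv * 0.15   # 15% lift, 10K visitors/month

print(f"${big_page:,.0f} vs ${small_page:,.0f}")  # $300,000 vs $45,000
```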

The objections I always hear

"But our goal is to increase the conversion rate."

Sure, but what’s at the root of that goal? I ask teams to look at RPV alongside conversion rate and let the data make the argument. Once you agree that revenue, with sustained or improved profitability, is the real goal, the metrics to track become an easy discussion.

"RPV takes longer to reach significance."

True. Per-visitor revenue varies far more than a binary outcome (converted/didn't convert), so you need more traffic to detect the same lift. This is a feature, not a bug. It means you're being more careful about what you ship.

If you can't run a test long enough to get RPV significance, that tells you something about your traffic and testing velocity, and that's a problem worth solving separately.
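If you want to sanity-check the traffic point yourself, one common approach is to compare per-visitor revenue (zeros included) between variations with a Welch's t-test. The sketch below simulates data so it runs standalone; you'd swap in your own arrays, and with heavily skewed revenue you may prefer a bootstrap, but the idea is the same:

```python
import numpy as np
from scipy import stats

# Simulated per-visitor revenue, zeros included for non-buyers.
# In practice rev_control / rev_variant come from your own export.
rng = np.random.default_rng(7)
rev_control = np.where(rng.random(50_000) < 0.028, rng.gamma(2, 45, 50_000), 0.0)
rev_variant = np.where(rng.random(50_000) < 0.031, rng.gamma(2, 40, 50_000), 0.0)

t, p = stats.ttest_ind(rev_variant, rev_control, equal_var=False)  # Welch's t-test
print(f"RPV control ${rev_control.mean():.2f}, variant ${rev_variant.mean():.2f}, p = {p:.3f}")
```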

"We don't have enough traffic."

If you're doing $10M+ in annual revenue, you almost certainly do. You might need to run tests for 4-6 weeks instead of 2. You might need to be more selective about what you test. Both of those are good disciplines anyway.

The bottom line

Conversion rate is a component of performance. RPV is the performance.

If you're making decisions based on conversion rate alone, you're flying with one eye closed. Start tracking RPV. Segment it. Use it to evaluate your tests. I promise you'll find that some of your "winners" aren't adding to the business like you expect.

Not sure if your testing program is measuring the right things? ... you already know. Want to face reality and make a change?

We have two slots for March. Book a call and we’ll talk about real numbers.

Looking forward,

Brian Schmitt

Quote of the week:

Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.

H. James Harrington
