The case for slowing down your optimization program
Why the best optimization programs build in deliberate friction.
There's a story from WWII that Neal Stephenson fictionalized in a great book called Cryptonomicon.
The core of his story is real, and it just popped up on my radar again in a recent Concord article.
The gist: At Bletchley Park, England's secret codebreaking headquarters, they had machines cycling through Enigma combinations at speeds no human could match. But the thing that actually broke Enigma wasn't the machines.
It was the people who noticed German weather reports always arrived at the same time. That certain phrases appeared in predictable formats. That some radio operators got sloppy.
Behavioral patterns. Human ones. No algorithm would have flagged them.
The Bletchley team used both. Humans spotted the pattern, fed it to the machine, and the machine cycled through possibilities at scale.
Humans produce discovery. Machines reproduce the rules.
That's still true in CRO.
The automation trade-off nobody talks about
Most CRO teams are over-automated. Not because they've run too many tests, but because the more you automate, the further you drift from the data. Further from the visitor on your site.
A conversion rate drops 8% on mobile. Automated alerts fire. The team checks the dashboard, runs one or two tests, finds nothing, marks the tests inconclusive.
But nobody watches the session recordings. Nobody notices that on mobile, collection pages render out-of-stock items ahead of in-stock ones.
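To make that bug concrete: once a human spots it, the fix is often a one-line sort. A minimal sketch in Python, assuming a hypothetical `Product` shape and `collection_order` helper (not taken from any real storefront platform):

```python
from dataclasses import dataclass

@dataclass
class Product:
    title: str
    in_stock: bool
    position: int  # the merchandiser's intended ordering

def collection_order(products: list[Product]) -> list[Product]:
    # Stable sort: in-stock items first; within each group,
    # preserve the original merchandised position.
    return sorted(products, key=lambda p: (not p.in_stock, p.position))
```

The point isn't the sort itself. It's that no dashboard would have surfaced it; a person watching recordings did.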
Automation creates distance between your team and your customers. And that distance leads to assumptions.
Surefoot has been our A/B testing partner for more than five years, and their experimentation program has been a meaningful driver of incremental revenue for Peak Design. They excel at designing, running, and measuring iterative tests that compound, turning small, well-reasoned changes into sustained performance gains. Their strength isn’t just finding wins, but building a disciplined testing system that continuously generates learnings we can act on with confidence.
Where to stay hands-on
Three questions to find your leverage points:
1. Where am I most likely to miss the context?
2. Where would a mistake be most costly?
3. Where is my domain knowledge most valuable?
For us, that means manual data exploration during client onboarding. Reviewing session recordings at the start of an engagement and after each test. And staying in the flow of the data, not letting AI tell us what it means. It might summarize it for us, but at the end of the day we're looking at the raw data ourselves.
The flywheel most teams miss
The Bletchley codebreakers didn't keep their insights to themselves. They encoded them. Fed them back into the machines. Turned one-time observations into scalable rules.
Most CRO teams don't do this. They investigate a data anomaly, fix it, and move on. The next person starts from scratch.
We’re working to automate the right 80%, stay hands-on with the 20% that requires judgment, and then turn what we learn back into better automation.
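Here's a hedged sketch of what "turning what we learn back into automation" can look like: a one-time finding from a session-recording review becomes a reusable automated check, so the next person doesn't start from scratch. Every name here (`Check`, `run_checks`, the page-snapshot fields) is an illustrative assumption, not a real monitoring library:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    origin: str  # the one-time investigation that produced this rule
    failed: Callable[[dict], bool]

# The earlier observation, encoded as a rule: flag any mobile collection
# page where an out-of-stock item renders ahead of an in-stock one.
CHECKS = [
    Check(
        name="oos-before-in-stock-on-mobile",
        origin="session-recording review after an inconclusive mobile test",
        failed=lambda page: page["device"] == "mobile"
        and any(
            not earlier["in_stock"] and later["in_stock"]
            for earlier, later in zip(page["products"], page["products"][1:])
        ),
    ),
]

def run_checks(page: dict) -> list[str]:
    """Return the names of every encoded check that fires for a page snapshot."""
    return [c.name for c in CHECKS if c.failed(page)]
```

That's the Bletchley pattern in miniature: a human makes the discovery once, and the machine reproduces the rule at scale from then on.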
You're invited to see this in action
If your ecommerce brand is doing $10M+ and you've ever looked at your testing program and thought "we should be learning more from this," I'd love to show you what a compounding optimization system looks like.
Reply to this email and I'll send you a few examples from real engagements.
Quote of the week:
The most successful data teams aren't those who only automate; they're the ones that combine key principles and insights with scalable systems, melding human and artificial intelligence to create powerful feedback loops.
Looking forward,