A/B Testing Everything Except Your Strategy

You have tested the green button against the blue button. You have tested “Buy Now” against “Get Started” against “Claim Your Spot.” You have tested hero images, subject lines, headline fonts, CTA placement, and whether the word “free” in an email subject line increases or decreases open rates (the answer changes every six months, depending on who you ask). You have optimized every pixel of the customer journey. You have never once questioned whether the destination is worth reaching.

Welcome to the golden age of A/B testing, where we measure everything and understand nothing, where data replaces judgment, and where the appearance of scientific rigor substitutes for actual strategic thinking. It’s a beautiful system, really — endlessly productive, perpetually inconclusive, and completely immune to the uncomfortable question of whether you’re optimizing the right thing at all.

How Testing Became a Religion

The democratization of A/B testing tools was supposed to bring empiricism to marketing. And for a while, it did. Instead of arguing about opinions in a conference room, teams could run experiments. Instead of HiPPO decisions (Highest Paid Person’s Opinion), data would rule. The shift was genuine and valuable. Then, like most genuinely useful things, it was taken approximately forty steps too far.

The problem began when testing stopped being a tool and became an identity. “We’re a data-driven culture” is now the marketing equivalent of “we move fast and break things” — a slogan that sounds rigorous and signals virtue without committing to anything specific. In practice, data-driven culture often means: we run tests on things that are easy to test, measure things that are easy to measure, and declare victory when a metric goes up, even if nobody can explain why or whether it matters.

The result is teams that have run 200 experiments on their homepage and cannot tell you what their brand stands for. Teams that can produce a confidence interval for a subject line variant but have no idea why customers churn after 90 days. Teams that treat statistical significance as a moral category — as if a p-value of 0.04 is not just a number but a verdict from the universe that green buttons are objectively better than blue ones.
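For readers who want to see the verdict machine up close: the ritual usually reduces to a two-proportion z-test on conversion counts. A minimal sketch, with made-up numbers (the conversion figures are hypothetical, and the normal approximation used here is standard for samples of this size):

```python
# Illustrative only: how a "winning" variant gets declared.
# Conversion counts below are invented for the example.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Green button: 510 conversions out of 10,000; blue: 448 out of 10,000.
z, p = two_proportion_z(510, 10_000, 448, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p lands under 0.05: green "wins"
```

Note what the number does and does not say: it estimates how surprising the gap would be if the buttons were identical. It says nothing about whether button color is the question worth asking.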

The Local Maximum Trap

Here is what nobody puts in the A/B testing case study: optimization without strategic direction leads you, very efficiently, somewhere you don’t want to be. Every incremental test improves what exists. None of them ask whether what exists should continue to exist.

This is the local maximum problem. You can optimize a mediocre product page to be the best possible version of a mediocre product page. Your conversion rate will climb 0.3% per experiment until it can climb no further. You will have squeezed every drop from a lemon that, strategically speaking, you should have replaced with a different fruit two years ago. But the dashboard looks great. The quarterly report is full of winning experiments. Everyone involved gets a mention in the retrospective under “wins.”
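The trap above is easy to demonstrate in miniature. A toy sketch of greedy hill climbing, with an invented payoff landscape that has a small peak and a much larger one: started near the small peak, the optimizer dutifully climbs it and stops, never seeing the bigger hill.

```python
# Toy illustration of the local-maximum trap. The landscape is made up:
# a small peak near x = 2 and a much larger peak near x = 8.
from math import exp

def payoff(x):
    return 3 * exp(-(x - 2) ** 2) + 10 * exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Move in whichever direction improves payoff; stop when neither does."""
    for _ in range(iters):
        if payoff(x + step) > payoff(x):
            x += step
        elif payoff(x - step) > payoff(x):
            x -= step
        else:
            break  # no neighboring move improves: a local maximum
    return x

best = hill_climb(1.0)  # start near the small peak
print(f"converged at x = {best:.1f}, payoff = {payoff(best):.2f}")
# Stops at x = 2.0 with payoff ~3, while the global peak at x = 8 pays ~10.
```

Every step is a "winning experiment." The destination is still the wrong hill.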

The deeper issue is that A/B testing is structurally incapable of questioning its own premises. You can test two versions of a landing page. You cannot test whether the landing page is the right mechanism for what you’re trying to achieve. You can test two subject lines. You cannot test whether email is the right channel. Those questions require judgment, context, and the willingness to consider that your current approach might be fundamentally wrong — qualities that do not produce clean dashboards and cannot be automated.

Data as Alibi

There is a more cynical function that testing serves in large organizations, and it’s worth naming honestly: it provides cover. When a decision goes wrong, “we had the data” is the modern version of “I was just following orders.” It distributes responsibility so thoroughly that nobody is accountable for anything. The test said to do it. The algorithm recommended it. The confidence interval supported it. Who could argue with that?

This is why genuinely bold creative decisions almost never come out of A/B testing. Testing can tell you which version of an existing concept performs marginally better. It cannot tell you to do something completely different. It cannot tell you to run the campaign that makes your legal team nervous and your competitors jealous. It cannot tell you to make the thing that nobody has made before because, by definition, you can’t test something against itself before it exists.

The great brand-building moments in marketing history were not A/B tested into existence. They were made by people with strong points of view who were willing to be wrong in an interesting way rather than right in a boring one. Testing would have optimized those ideas into something safe and forgettable.

What Good Testing Actually Looks Like

None of this means testing is bad. It means testing is a tool, not a philosophy. Used well, it answers tactical questions quickly and cheaply: which headline communicates the value proposition more clearly? Which onboarding flow reduces drop-off? These are real questions with measurable answers, and testing is exactly the right instrument for them.

The problems start when testing substitutes for strategy, when the question “what should we be doing?” is replaced by “which version of what we’re already doing performs better?” The first question is hard, uncomfortable, and requires people in a room with different opinions and no easy answers. The second question produces a dashboard. Guess which one gets more organizational energy.

If you’re the kind of marketer who has sat in the meeting where someone says “let’s test it” as a way of avoiding a decision, you know exactly what this piece is about. And if you want a daily reminder that your job is more than optimizing button colors, the NoBriefs shop has what you need — starting with the Fuck The Brief collection, for when the brief itself is the thing that’s failing the test.
