The Annual Brand Survey: A Beautiful Ritual of Quantified Irrelevance

Once a year, with the seasonal reliability of a migratory bird, the annual brand tracker arrives. It comes in the form of a research presentation, usually delivered by a research firm that has been running this study since before some of the attendees were in secondary school. The deck is thick. The methodology is sound. The sample size is robust. The findings are, depending on your charitable disposition, either reassuring confirmation of existing intuitions or an expensive restatement of things that were already known, formatted as discoveries.

Brand awareness: 67%. Up two points year-on-year. Brand consideration: 34%. Flat. Net Promoter Score: 41. Slightly down but “within margin of error.” Top-of-mind awareness among the 25-34 demographic: “we’ll look at the cross-tabs.” The room nods. Someone asks about the competitor data. The competitor data is shown. Everyone notes that Competitor A has gained three points of consideration and spends forty minutes discussing whether this is methodological noise or a real signal. The meeting ends. The deck is shared. The findings are cited in the annual report. Nothing changes, and next year, the same research firm will return with a new wave of data showing movement within margin of error in every direction.
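For the curious: the forty-minute noise-or-signal debate is answerable in a few lines of arithmetic. Here is a minimal sketch, assuming a hypothetical sample of 1,000 respondents per wave (the deck never says), of the standard two-proportion check a researcher would run on that three-point competitor shift.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a single proportion estimate."""
    return z * math.sqrt(p * (1 - p) / n)

def shift_is_signal(p1: float, p2: float, n1: int, n2: int,
                    z: float = 1.96) -> bool:
    """Two-proportion z-test: is the wave-on-wave shift larger than
    sampling noise at the 95% confidence level?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) > z * se

# Hypothetical: competitor consideration 31% last wave, 34% this wave,
# n = 1,000 respondents per wave.
print(margin_of_error(0.34, 1000))            # roughly ±0.03, i.e. ±3 points
print(shift_is_signal(0.31, 0.34, 1000, 1000))  # False: noise, not signal
```

Under these assumed sample sizes, a three-point move sits inside the margin of error, which the room could have established before the coffee arrived.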

The Value of Knowing What You Already Thought

Brand tracking studies were designed to answer a legitimate question: is our brand getting stronger or weaker in the minds of the people we want to reach, and how does this compare to competitors over time? This is a real question with real business implications. Brand health does predict future revenue in ways that are sometimes invisible in short-term performance data. The investment in longitudinal tracking is, in principle, sensible.

The problem is what happens to the data. Tracking data is, by design, slow-moving. Brand metrics change over months and years, not weeks. They are resistant to short-term campaign activity in ways that quarterly reporting cycles cannot accommodate. This creates a structural mismatch: the data exists on a timeline that the organization doesn’t have the patience for, and the organization exists on a timeline (quarterly, annual) that the data doesn’t have the resolution to illuminate.

The result is a peculiar use of research: the brand tracker is consulted not to make decisions but to defend them. If awareness went up, the campaign worked. If awareness went down, it was “external factors” or “the competitive environment” or “a methodology note in appendix C.” The data is treated as confirmation when it confirms, and as noise when it doesn’t. The tracker is not a decision-making tool so much as a document of record — a regularly updated archive of things that happened to brand sentiment, filed under “things we measured.”

The Action That Never Follows the Insight

Brand tracking studies have a section, usually near the end of the presentation, called “Implications” or “Recommendations.” This section suggests what the brand should do differently based on the findings. The suggestions are typically: “strengthen emotional connection with the 35-44 segment,” “increase salience in the premium consideration set,” “address the perception gap on quality attributes.” These recommendations have appeared in brand tracking presentations for as long as brand tracking has existed. They are structurally incapable of being specific, because the data cannot be specific — it can tell you that emotional connection is lower than it should be, but not what creative execution would raise it, by how much, by when, or at what cost.

The implications slide is therefore a bridge to nowhere: it generates the appearance of actionability without the content of it. The next steps are to “develop a plan to address these findings,” which generates a workstream, which generates a workshop (see: the discovery phase), which generates a strategy deck (see: the insight that isn’t), which circles back, eventually, to the next annual brand tracker that will measure whether any of this had any effect. It’s a beautiful system if you appreciate circularity.

The Competitor You Can’t Stop Looking At

The most emotionally intense section of any brand tracker presentation is the competitive data. Your own brand numbers are processed with professional equanimity. Competitor numbers are treated with the scrutiny of a forensic accountant reviewing a suspicious receipt. If a competitor’s awareness is up, there is a twenty-minute discussion of why, which methodological factors might explain it, whether the sample was properly weighted, and whether the shift represents a genuine change or a statistical artifact. The possibility that the competitor ran a better campaign and more people now know about them is considered, then reframed as “an opportunity for differentiation.”

The competitive obsession in brand tracking is revealing. It suggests that the primary use of the data is not “are we building the brand we want?” but “are we ahead of the people we’re afraid of?” These are related but different questions. The first question is strategic. The second is anxious. Most brand tracking presentations answer the second question while pretending to answer the first.

The Use It Could Have

Good brand research is genuinely useful when it’s designed to answer specific questions, when the methodology is matched to the decision being made, and when the findings can actually change what happens next. Tracking studies, done well, can reveal shifts in brand health before they show up in revenue — a form of early warning that is worth the investment if the organization is actually willing to act on warnings.

The prerequisite is an organization willing to be told uncomfortable things and do something about them. Not willing to note them in the appendix, or explain them away with margin-of-error arguments, or add them to the list of things that will be addressed in Phase Two. Actually willing to change course. That organization is rarer than the research investment it would justify, but it exists, and when it does, the brand tracker earns its budget many times over.

For everyone else: the tracker arrives, the deck is presented, the findings are filed, and next year the same firm returns. It’s not fraudulent. It’s just an expensive ritual. The NoBriefs shop runs its own kind of brand research: watching what resonates with people who are fed up, making more of that, and not commissioning a tracker to tell us what we already know from talking to our community. The KPI Shark has seen the competitive data. He is not impressed. He suggests you stop watching the competitor’s numbers and start making something worth watching.
