The Case for Creating Custom Benchmarks

Joe Mineo
October 7, 2020

In the early days of social media and performance analysis, marketers rushed to make sense of it all.

There was a lot of publicly available information at the time, and vanity metrics like follower count and engagements were at the forefront of analysis. People analyzed social media much like radio and TV, focusing on how many people were in the audience and how many could be reached per post. To media companies, this made sense and mapped directly onto advertising buys: if someone spent X to capture Y users, they could see how cheap that was compared to buying the same number of users on other media.

This sparked tremendous growth that Facebook, Twitter, Google and other platforms took advantage of, which led to the development of self-service ad platforms and a shift to holistic, business-focused digital marketing and conversion attribution.

Naturally, upstart advertising and sales companies wanted to capitalize on the wave and invent new ways to sell their services. Since they now could accept contracts from a vast array of industries in the blink of an eye, they could amass client bases that provided them with a wealth of data.

Whether their clients signed off on it or not, these companies began churning out “benchmark data reports” and “industry benchmarks” that promised specific measurements that could crown them as thought leaders, and with good enough SEO, the outright leaders in their space.

The problem with using external “benchmark” reports

There are a lot of different brands and reports out there, but they all tend to fall into the same traps: mismatched sample sizes, arbitrary date ranges, limited access to underlying data, blindness to strategic vision, seasonality and platform instability, and fundamental attribution error.

Let’s take a look at exactly why these flaws matter, and why they ultimately lead to misdirection and unsustainable performance evaluation over time.

Sample sizes and accounts managed

When it comes to these “benchmark” reports, the first things we were able to single out were sample sizes and accounts managed. They’re always vastly different: some brands claim hundreds of clients, some twenty or fewer, and some thousands. Some had big spends, some had small spends, and some were in the middle. There was no way to tell if Brand A got X results simply because it spent Y.

In truth, some platforms work better with less spend, some with more, and some actually get you results because they work hard to help you. 

Date ranges

Date ranges in benchmark reports have always been amusing, because some refer to the current year and some to the past year. WordStream’s report, still circulating today, covers only a few months spanning 2016 and 2017, yet it passes off its results as benchmarks attainable in 2020. In truth, most ad campaigns can show actionable results within two weeks, but organic campaigns often take much longer to evaluate as awareness builds.

Accessibility of data

For something to qualify as an industry standard, one would need comparable insights into comparable companies in a vertical, tracked as trends over time. Unfortunately, it’s difficult to draw insights when there isn’t enough data to support a claim. We can’t look at a company’s follower count and see how many people it reaches per post (a major flaw in RivalIQ’s benchmark report). We can see total engagements, but we can’t tell whether they’re all comments, or simply shares, or something else entirely.

Just because a company is getting a ton of reactions on a post doesn’t mean they’re performing any better than your company. They might be driving eyeballs to their content, but is it translating to sales, talent acquisition or public perception of their goals?

Strategic vision

Company A might focus on sales, Company B might focus on environmental sustainability, Company C might focus on employee culture and talent acquisition, and Company D might focus on getting eyeballs to their video content. At face value, each of these companies could have similar business models, but they are using social media for very, very different purposes.

Something like Dash Hudson’s Industry Benchmark Report doesn’t look at these strategies, all of which can greatly affect vanity-metric performance. This also circles back to date ranges, as an annual date range doesn’t account for cross-quarter or cross-month content campaigns that could explain variations in performance.

Seasonality and platform stability

Before COVID-19, companies could look back at 2018 or 2019 and think circumstances were generally the same. However, each year has a holiday period where traffic and results are affected. Something like Sprout Social’s benchmark report works well in overall methodology, but still falls short in this category. Most years have down periods that could be different for each company and industry.

Facebook, Twitter, LinkedIn and Google constantly update their algorithms to keep up with demand changes, content volume and other factors. All of this affects how organic and paid content is delivered and received on both ends, which ultimately undermines the reliability of “benchmarked” data.

Attribution error

Finally, these benchmark reports fail to mention one huge factor that sets them apart from each other, and from any company or brand that might use them as a benchmark: fundamental attribution error. This is the tendency to credit outcomes to internal factors (a tool, a team, a strategy) while underweighting the external circumstances that actually drove them.

Benchmark companies that work closely with brands are using their tools, their strategies and their optimization methods to generate “benchmark data” that could either be really awesome if they’re successful, or really bad if they’re mediocre.

This last factor is the prime reason why we steer clients away from comparing themselves, from a data analysis perspective, against other companies or industries. How ChatterBlast operates and runs your ads will look and feel different compared to Agency X or Agency Y, and it’s not very productive to attempt that analysis.

Our solution

Each and every business has access to a wealth of data for each platform, and that data can be used to craft customized performance analyses and benchmarks to measure success.

For example, if we see that Company A used to get a CTR of 1.90% in Q1 and is now seeing a CTR of 0.91%, a thoughtful analysis of external factors combined with historical performance data can help us understand why things shifted and whether content is still performing well, despite what might appear to be a drop. It’s entirely possible that 0.91% is the highest achievable result for Company A right now, and the target may be to continually beat that with each new campaign, rather than chase the old 1.90% benchmark.
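To make this concrete, here is a minimal sketch, in Python with made-up numbers, of one way an internal benchmark could be built from a company’s own historical CTRs. The rolling-average approach and the internal_benchmark helper are illustrative assumptions for this post, not ChatterBlast’s actual methodology.

```python
from statistics import mean, stdev

# Hypothetical quarterly CTRs (%) pulled from one company's own ad-platform exports.
# Illustrative numbers only, not real client data.
historical_ctr = {
    "2019-Q1": 1.90, "2019-Q2": 1.72, "2019-Q3": 1.41, "2019-Q4": 1.65,
    "2020-Q1": 1.10, "2020-Q2": 0.91,
}

def internal_benchmark(history, window=4):
    """Return a moving internal benchmark: the mean and spread of the most
    recent `window` periods before the latest one, so the target tracks the
    company's own trajectory instead of an external report's static number."""
    prior = list(history.values())[:-1][-window:]
    return mean(prior), stdev(prior)

baseline, spread = internal_benchmark(historical_ctr)
latest_period, latest_ctr = list(historical_ctr.items())[-1]

print(f"Internal benchmark CTR: {baseline:.2f}% (+/- {spread:.2f})")
if latest_ctr < baseline - spread:
    print(f"{latest_period} CTR of {latest_ctr:.2f}% is below the internal range; "
          "look at external factors before calling it a failure.")
else:
    print(f"{latest_period} CTR of {latest_ctr:.2f}% is within the company's own expected range.")
```

The point of the sketch is simply that the benchmark moves with the company’s own history: a 0.91% quarter is flagged against the company’s recent baseline, not against a static number from someone else’s report.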

Social media, and media in general, is constantly evolving. Trying to claim set-in-stone benchmarks would be an exercise in futility, one that can drive executives to make brash, uninformed decisions despite all the data in the world telling them otherwise. Focusing marketing data views inward helps keep us accountable in responsible ways, while still allowing for inspiration from external sources without muddying the waters.