Market Research

One Size Fits None: The Myth of ‘The Average Startup’

February 12, 2016

Have you heard of The Average Startup?

The Average Startup, statistics tell us, receives $1.5M in funding and gets itself in trouble by trying to scale too quickly. The Average Startup has just over 5 employees. Ultimately, The Average Startup fails before it reaches maturity.

In fact, when you get down to it, almost every last detail about The Average Startup is public knowledge at this point. In many ways, The Average Startup is probably the most transparent and understood company in the world.


We learned this firsthand in late 2015 when we set out to conduct an extensive benchmarking analysis of the tech industry. Everywhere we looked, we were confronted with data about The Average Startup: its perfectly average Sales Cycle and its smack-dab-in-the-middle-of-the-road Win Rate. After a few weeks, we learned more than anyone should know about The Average Startup.

What we learned most of all, though, is that The Average Startup had very little to tell us about how real-world tech startups actually function.

So, early in the process of analyzing tons of real-life data, we decided to do something pretty drastic: we decided to ignore The Average Startup.

One Size Fits None

Here’s how the story went.

We started our 2016 Tech Benchmarking Analysis with a single, clear goal: To provide reliable, pertinent benchmarks for real-world tech companies. To do this, we studied almost 200 of our customers and analyzed their actual operational data on everything from sales performance to lead generation to rep execution.

At first, we went about this process in a pretty conventional way. We analyzed the hundreds of companies in our dataset and started to come up with “average benchmarks” for all of the golden oldies: sales cycle, quota attainment, annual bookings, etc.

But it was at about this time that an old joke kept popping into my head, and I couldn’t seem to shake it.

It ended up being pretty important, so bear with me while I relay it quickly.

3 Statisticians Go Duck Hunting

Three statisticians are out duck hunting, waiting patiently behind their stand, when they spot the day’s first duck.


Excitedly, the first statistician takes a quick shot and misses the duck 6 inches high.

As he’s reloading, the second statistician takes aim and fires; he misses the duck 6 inches low.

The third statistician drops his gun, leaps in the air and yells “We got him, boys!”

The Problem with the Average

This joke may not be laugh-out-loud funny, but it does illustrate a common problem with relying on averages: if you’re using averages to make specific decisions or determine particular actions, you’re likely to miss the mark.

This joke was especially pertinent in our case because we had a large, varied dataset but we wanted to provide narrow, targeted benchmarks. In other words, we didn’t want to provide benchmarks for the mythical “Average Company”; we wanted to provide reliable, targeted benchmarks that could actually help real-world tech companies evaluate and contextualize their own performance.

In short, the joke helped us realize that, in an effort to provide usable benchmarks to everybody, we would end up providing watered-down benchmarks that applied to nobody.
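To see the problem in miniature, take the sales-cycle numbers we’ll get to later in this post: roughly 41 days for smaller companies and 80 days for larger ones. A blended average splits the difference and describes neither group. Here’s a quick sketch in Python (the individual company numbers are made up for illustration; only the rough 41-vs-80 split comes from our findings):

    # Illustrative only: hypothetical companies, with the rough 41-day vs.
    # 80-day split mirroring the findings later in this post.
    small_company_cycles = [38, 41, 44]   # companies under $1M in annual bookings
    large_company_cycles = [75, 80, 85]   # companies over $10M in annual bookings

    all_cycles = small_company_cycles + large_company_cycles
    blended_average = sum(all_cycles) / len(all_cycles)

    print(f"Blended 'industry average' sales cycle: {blended_average:.1f} days")
    # Prints 60.5 days -- nearly 50% too high for the small cohort and
    # about 25% too low for the large one. It describes neither group.

The point isn’t the exact numbers; it’s that a single blended figure lands in the gap where no real company actually sits.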

Customizable Benchmarks

So what did we do?

We decided that if we wanted to help companies better understand themselves by comparing their performance to similar companies, we needed to break our dataset into smaller, more meaningful chunks. Not “tech company,” but “tech company with an average deal size between $5k and $15k and more than $10M in annual bookings.”

We knew that by going through the effort of sorting and cohorting the companies in our dataset, we would be able to give much, much more meaningful and tailored benchmarks to companies.
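For the technically inclined, here’s a minimal sketch of what that kind of cohorting looks like in practice. The data, column names, and pandas code below are purely illustrative, not our actual analysis pipeline:

    import pandas as pd

    # Hypothetical data -- not the actual dataset behind this analysis.
    companies = pd.DataFrame({
        "name": ["Acme", "Globex", "Initech", "Umbrella"],
        "avg_deal_size": [8_000, 40_000, 12_000, 900],            # dollars
        "annual_bookings": [12_000_000, 3_000_000, 15_000_000, 800_000],
        "sales_cycle_days": [55, 120, 70, 30],
    })

    # "Tech company with an average deal size between $5k and $15k
    # and more than $10M in annual bookings"
    cohort = companies[
        companies["avg_deal_size"].between(5_000, 15_000)
        & (companies["annual_bookings"] > 10_000_000)
    ]

    # Benchmark against the cohort, not the blended whole.
    print("Cohort median sales cycle:", cohort["sales_cycle_days"].median(), "days")

Benchmarking a company against its cohort’s 62.5-day median tells it far more than comparing it to a blended average that mixes in a 30-day transactional cycle and a 120-day enterprise one.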

That’s how we eventually came up with our new interactive Sales Performance Benchmarks page.


Unlike so many benchmarks out there, these are fully filterable and customizable, so you know you’re not just looking at overly general “average tech company” benchmarks but at actual, apples-to-apples benchmarks you can use to see how your company stacks up against others like it.

Instead of taking a wildly diverse dataset (filled with big companies and tiny companies, high-growth and steady-state companies, and everything in between) and boiling it down to meaningless averages, we wanted to hit the duck square on. We wanted to provide targeted benchmarks that companies could actually use to measure themselves against similar companies, not just average companies.

In the end, this led to some incredibly interesting conclusions that we never would have come to with blanket “one size fits none” industry benchmarks.

Here are just a few examples:

  • Companies with Average Sales Prices (ASPs) above $30k had win rates more than one-third lower than those of companies with ASPs under $1k (23% vs. 37%).
  • Companies with less than $1M in Annual Bookings had Sales Cycles half as long as those of companies bringing in more than $10M (41 days vs. 80 days).
  • Companies with ASPs above $5k brought in more than 3x as much in Annual Bookings as companies with ASPs below $5k ($7.2M vs. $2.2M).

And there’s a whole lot more. InsightSquared will be releasing more benchmarking content (articles, infographics, key findings) over the next several months. You can sign up to receive updates as they are released here.

Senior Manager, Global Content Strategy

Mike Baker is the Senior Manager, Global Content Strategy at Crimson Hexagon (http://www.crimsonhexagon.com/). Previously he was in charge of Content Strategy at InsightSquared. He has covered business and the tech industry for a number of publications and is Co-Executive Director of Boston Content.