
Great Marketers Are Great Skeptics: 3 Assumptions You Need to Stop Making Now

April 28, 2014

You know what they say about assumptions. I’ve been guilty of making my own marketing assumptions once or twice in my time, and if there’s one thing I’ve learned from getting proven wrong, it’s that great marketers need to be great skeptics.

Earlier this month, I attended the always insightful SearchLove Conference hosted by Distilled. One of my favorite presentations (as always) came from none other than Moz’s Rand Fishkin.

Rand opened up day two with a talk on how we as marketers need to position ourselves in order to be successful. Plainly stated: we need to be more skeptical.

During his presentation (see below), Rand outlined how the “crap” skeptic, the “good” skeptic, and the “great” skeptic are all set apart by a few defining qualities. In our industry, it’s really easy to follow the latest and greatest revelations posted on places like Moz or KISSmetrics (or OpenView Labs!). But while having data, benchmarks, and the best real-world advice can be extremely helpful, as we all know, every business is different. Just because this information exists out there on the Internet doesn’t mean we should hold ourselves to another company’s results, or worse, go chasing after the next shiny object. We have to do what makes sense for our business and, more importantly, we must always be skeptical of making assumptions — even if those assumptions are partially based on hard evidence.

[SlideShare embed: Rand Fishkin's "Great Marketers Are Great Skeptics" deck]

3 Marketing Assumptions You Need to Stop Making Now

1) The more a piece of content is shared, the more engaging it must be

Is this universally true? Nope. Rand presented stats from a test he ran on one of his own posts. He found that one post, which earned more than 701 social signals, ended up getting only 306 page views. Say what? From a marketing standpoint, I'm always harping on the importance of measuring both traffic and social signals — we all should, right? And yet I've tended to place a lot of weight on the number of social shares as the measure of engagement with the actual post. That's the engagement metric we use, after all, right?

Well, wrong. According to Rand's results, those are assumptions that may seem logical, but the metrics show otherwise. Ultimately, he and others have found that the amount of time someone is willing to dedicate to a particular post depends on a variety of factors: timing, source, formatting, and more. So, if you're really serious about measuring smart metrics, you need to ask some questions.

A great skeptic needs to ask: What other factors are at work? How can we test this by keeping as many variables as possible the same?
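
As a starting point, here's a minimal sketch (in Python, with made-up numbers) of the kind of sanity check I mean: put shares, page views, and time on page side by side for each post, and flag the posts where sharing outruns reading.

```python
# A minimal sketch (hypothetical numbers) for sanity-checking the
# "shares = engagement" assumption: compare social signals against
# actual page views and time on page for each post.

posts = [
    # (title, social_shares, page_views, avg_seconds_on_page)
    ("Post A", 701, 306, 95),
    ("Post B", 120, 2400, 210),
    ("Post C", 340, 900, 45),
]

for title, shares, views, seconds in posts:
    views_per_share = views / shares
    print(f"{title}: {shares} shares, {views} views "
          f"({views_per_share:.1f} views/share, {seconds}s on page)")
    # If shares outpace views, people are sharing without reading --
    # shares are telling you about reach, not engagement.
    if views < shares:
        print(f"  -> {title}: more shares than views; "
              "don't treat shares as engagement")
```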

2) More testing means better performance

Not always. Rand pulled from a WordStream study that dove into landing page performance metrics. My assumption would be that if I'm testing a ton of different elements on a landing page, for example, I'll then be able to optimize my landing pages better than any marketer in the world. No? No. The study found that more testing on a landing page did not always yield a better conversion rate. In other words, sometimes tests become a waste of your time because the element you're testing (say, the color of a subscribe button) isn't going to make as big an impact as you think it will. At the very least, it may not make enough of an impact to justify the time and resources you spent simply testing it!

A great skeptic needs to ask: Is this the most meaningful test we can run right now?
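
To make that question concrete, here's a minimal back-of-the-envelope sketch (in Python, with hypothetical numbers): before committing to a test, estimate how much traffic you'd need to detect the lift you actually expect. A tiny expected lift, like the one a button-color change might produce, can demand far more visitors than the test is worth.

```python
# A rough sketch (hypothetical numbers) of asking "is this test worth
# running?" before you run it: estimate the sample size needed to
# detect the lift you expect from the change.
import math

def visitors_per_variant(baseline_rate, expected_lift,
                         z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion
    test at ~95% confidence and ~80% power."""
    p1 = baseline_rate
    p2 = baseline_rate + expected_lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Button color: maybe a 0.2-point lift on a 3% conversion rate
print(visitors_per_variant(0.03, 0.002))  # roughly 118,000 visitors per variant
# Rewriting the offer itself: maybe a 1-point lift
print(visitors_per_variant(0.03, 0.01))   # roughly 5,300 visitors per variant
```

If the meaningful test needs a fraction of the traffic the trivial one does, that's your answer about where to spend your time.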

3) Once you get conclusive data on a test, you’re done

Wrong again. Like many people, when I’m running a test, I’m looking for results that I can really feel good about. I want something definitive. Most good marketers aren’t going to go all-in on new findings unless they are statistically meaningful, but even some of the more experienced marketers do tend to commit this one testing sin — taking meaningful data from one test and stopping there.

Rand presented his findings from the following test on the performance of anchor text vs. standard link text:

[Image: Rand's test conditions]

Ultimately, what he found after a few tries was that anchor text drove a more powerful jump in SERPs for the target keyword and URL. From there, a crap skeptic might say: good enough for me! The good skeptic will say: let's try one more test. And the great skeptic? They'll push until they get definitive results from at least three different tests.

This may sound like a lot of work (because it is), but with the number of variables at play in the marketing space, we can't afford to act on assumptions without testing the heck out of them — even if that means spending time and resources just to make sure.

A great skeptic needs to ask: Have we run enough tests to be sure?
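
If you want to see what that bar looks like in practice, here's a minimal sketch (in Python, with made-up numbers, using a generic conversion-style comparison as a stand-in for whatever metric you're actually testing): score each repeat of the test the same way, and only act when every run points in the same direction with a meaningful gap.

```python
# A minimal sketch (hypothetical data) of the "at least three tests"
# discipline: run the same comparison on each repeat and only act
# when every run clears the bar.
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score for variant B vs. variant A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Three hypothetical repeats of the same A/B comparison:
# (conversions_A, visitors_A, conversions_B, visitors_B)
runs = [
    (120, 4000, 155, 4000),
    (98,  3500, 131, 3500),
    (130, 4200, 175, 4200),
]

z_scores = [z_test(*run) for run in runs]
print(z_scores)
# Roughly, z > 1.96 corresponds to ~95% confidence on a single run.
if all(z > 1.96 for z in z_scores):
    print("All three runs agree -- now it's worth acting on.")
else:
    print("Results don't replicate; keep testing before you commit.")
```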

So, What Marketing Assumptions Are You Making?

Following Rand's talk, there was no question that I felt a little stressed about the need to challenge each and every assumption I might have as a marketer. But the reality is this: you need to question everything, and it's your job as a marketer to choose which tests, which projects, and what amount of work will be the MOST meaningful to your overall strategy. You can't take on everything, and you surely can't rely on every assumption, but what you can do is pose a question to yourself with each and every marketing move.

Tell me, do you think you’re skeptical enough?

Image by Duncan Hall

Chief Marketing Officer

Morgan Burke is Chief Marketing Officer at Green Pinata (https://greenpinatatoys.com/). She previously worked on OpenView's marketing team.