It’s not shocking that analytics has become one of the sexiest buzzwords in digital marketing. The ability to collect data, crunch numbers and draw actionable insights from them is highly valued by smart businesses. Geeking out with stats is the new black, and a lot of marketers are hopping on that bandwagon en route to cashing in and enhancing their profiles.
As analytics becomes more mainstream, more people start preaching about it even if they’re not exactly qualified to do so. Non-experts start writing inaccurate insight pieces and misconceptions start to proliferate. Novices who don’t know better eventually absorb incorrect notions, preventing them from seeing the big picture that most KPIs are trying to convey.
I’ve had my share of mistakes in the six years that I’ve been in this industry. Throughout that period, I’ve learned that the last thing a marketer should do is take numbers at face value and start basing campaign decisions on them. For all the hype on data-driven marketing, the case can be made that analytics is still best used as a conversation starter. It’s a foundational piece of every smart digital marketer’s strategy, but it also needs to supplement your first-hand knowledge of your site, its content and its audience.
In this post, I’m listing seven of the most dangerous analytics misconceptions that can set your digital marketing strategy off course. Avoiding them will help you make better use of your data en route to more potent campaigns:
1. Each bounce is a bad thing
In web analytics, a bounce happens when a user accesses a page of a site, doesn’t click on anything and leaves after viewing only that page. The bounce rate, therefore, is the percentage of sessions that bounce versus the ones where users stay and click on internal links. It’s the most popular KPI for engagement and it’s reputed to be the measure of how satisfying a page is to users. Popular notions suggest that each bounce represents an unhappy visitor and that pages with high bounce rates are failing to respond to the intent of the people who visit them.
For the most part, this is a valid assumption, but it’s not always correct. Bounces happen for all sorts of reasons and not all of them are necessarily negative. For instance, if a user is looking for quick, bite-sized information on Google such as a word’s meaning or a pizza delivery number, the normal tendency is to view the page and hit the browser’s Back button. It doesn’t mean that the user went away unhappy with the experience. It simply means that the intent warranted only a one-page visit.
When you consider that the web is infested with bots, spammers and link prospectors who visit webpages not to consume content but for other motives, you’ll realize that being paranoid about your bounce rate is an exercise in futility.
That’s not to say that you shouldn’t improve your content and your site’s usability features – because there’s always room for that. What I’m saying is that you should take your site’s bounce rate with a grain of salt. It’s not a 100% precise measure of the overall satisfaction of your true audience. It’s a stat that needs to be put in perspective with the page’s intended use.
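To make this concrete, here’s a minimal Python sketch of how a bounce rate is computed from raw session data. The session records and field names are hypothetical, not those of any particular analytics tool:

```python
# Hypothetical session records: a bounce is a session with exactly one pageview.

def bounce_rate(sessions):
    """Percentage of sessions that viewed exactly one page."""
    if not sessions:
        return 0.0
    bounces = sum(1 for s in sessions if s["pageviews"] == 1)
    return 100.0 * bounces / len(sessions)

sessions = [
    {"pageviews": 1},  # quick lookup, then the Back button -- not necessarily bad
    {"pageviews": 1},  # bot hit
    {"pageviews": 4},  # engaged reader
    {"pageviews": 2},
]
print(bounce_rate(sessions))  # 50.0
```

Note that the two bounced sessions above have very different stories behind them, which is exactly why the raw percentage needs context.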
2. Longer times on site always indicate better engagement
The average time on site is the approximate duration of an average visitor’s stay within your site’s domain. In theory, it should reflect how engaged users are in the experience that your site offers. In reality, this is a decent estimate at best and a shot in the dark at worst. The average time on site is one of the KPIs in web analytics that’s most prone to skewing, so try not to base important decisions on this raw stat alone.
Case in point: suppose that half of your site’s visitors are legitimate users who spend about four minutes in your domain per session. Now suppose that the other half of your visitors are bots, spammers and link prospectors who spend an average of one minute in your domain. When you combine the two groups, the average time on site comes out at two minutes and 30 seconds. Can we say, then, that this average tells an accurate story of how long the typical user stays on your pages?
The funny thing about the average in this scenario is that no user from either group actually stays for two minutes and 30 seconds. Either they stay long or they exit quickly. The middle ground doesn’t really exist, and what analytics provides in this instance isn’t useful data.
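The skew is easy to reproduce. Here’s a quick Python sketch using the hypothetical numbers from the example above (durations in seconds):

```python
# Made-up traffic mirroring the example: half real users, half junk traffic.
legit = [240] * 50   # 50 real users staying 4 minutes each
junk = [60] * 50     # 50 bots/spammers staying 1 minute each

all_sessions = legit + junk
avg = sum(all_sessions) / len(all_sessions)
print(avg / 60)  # 2.5 minutes -- a duration no actual visitor spent

# Segmenting first tells the real story:
print(sum(legit) / len(legit) / 60)  # 4.0 minutes for the users who matter
```

The blended number describes nobody; the segmented number describes your actual audience.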
A good way to measure the average time on site for the users who really matter to you is by setting up a custom report that measures the behavior of your loyal readers. By determining what human users who are interested in your content respond best to, you can make better strategic decisions about your content and user experience.
From a user intent perspective, there are cases when longer session durations on pages are bad and shorter sessions are good. An example would be if you’re selling computer hardware and your site offers support software downloads. A shorter time on your product support pages would suggest that the user found what he was looking for immediately, yielding a positive experience. Conversely, longer session durations on these kinds of pages could indicate confusion on the part of your users, which is always a bad thing.
3. The more page views, the better
In general, having more page views is a good thing. It usually indicates a higher rate of content consumption from your users. However, it can also indicate a bad navigation scheme that confuses visitors who are looking for specific content in your domain. In a scenario like that, more page views is a bad thing unless you’re generating revenue with CPM banner ads.
Large sites with thousands of pages and deep navigation trees are particularly susceptible to the high page view illusion. Users tend to click on links and hit the Back button a lot, jacking up the number of low-quality page views. To get a better grasp of whether your page view status is healthy or not, cross reference your views with the average time per page. Shorter average session durations on content-intensive pages may indicate negative user experiences.
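As a rough illustration, here’s one way to cross-reference the two stats in Python. The pages, thresholds and field names are made up for the sake of the example:

```python
# Hypothetical per-page stats: lots of views but very short stays can signal
# "navigation churn" -- users clicking through pages they didn't want.
pages = {
    "/docs/setup":   {"views": 9000, "avg_seconds": 8},    # long article, barely read
    "/blog/post-1":  {"views": 1200, "avg_seconds": 180},
    "/category/all": {"views": 5000, "avg_seconds": 5},    # hub page; short stays may be fine
}

def suspicious(pages, min_views=1000, max_seconds=15):
    """Pages with high view counts but very short average stays -- worth a closer look."""
    return [path for path, m in pages.items()
            if m["views"] >= min_views and m["avg_seconds"] <= max_seconds]

print(suspicious(pages))  # ['/docs/setup', '/category/all']
```

Note that the flagged hub page may be perfectly healthy; as with bounces, the flag is a prompt to investigate, not a verdict.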
4. Unique visitors are the real deal
Far too often, I’ve been in board meetings and conferences where presenters talk about unique visitors as if it were the exact number of real people who accessed a site during a given period of time. This stat’s name is a little misleading because what it really measures is the number of distinct browsers that received the site’s persistent cookie. That means a computer shared by five people counts as one unique visitor, while one person who uses two computers counts as two, because a cookie fires on each of the two separate browsers even though it’s the same person.
Bottom line: it’s almost impossible to tell how many real users accessed a site, so take the number of unique visitors that your analytics system gives you as a rough estimate and not an exact figure.
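A toy example makes the cookie-counting quirk obvious. The visits below are invented purely for illustration:

```python
# Each visit records the browser cookie that fired, not the person behind it.
visits = [
    # five different people sharing one family computer and browser:
    ("cookie-A", "alice"), ("cookie-A", "bob"), ("cookie-A", "carol"),
    ("cookie-A", "dave"), ("cookie-A", "erin"),
    # one person browsing from both a laptop and a phone:
    ("cookie-B", "frank"), ("cookie-C", "frank"),
]

unique_cookies = {cookie for cookie, _ in visits}
real_people = {person for _, person in visits}

print(len(unique_cookies))  # 3 "unique visitors" reported by analytics
print(len(real_people))     # 6 actual people
```

Depending on how devices are shared, the cookie count can land above or below the true head count, which is why it’s only ever an estimate.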
5. Exit pages are weak pages
Most marketers use a funnel model to visualize the user conversion process in their websites. A lot of us follow the Awareness-Interest-Desire-Action (AIDA) school of thought when classifying pages in our sites to see where in the funnel visitors tend to drop off. These pages are called exit pages and it’s natural for us marketers to be interested in lowering the drop-off rate for each one of them.
While that’s generally a good practice, we have to come to terms with the fact that some pages are inherently more prone to exits than others. For instance, squeeze pages and sales pages tend to have higher exit rates because most visitors who get to them will either accept or decline calls to action. In both scenarios, the most likely next action is to leave the site. If the user isn’t interested in an offer, he’ll leave. If a user accepts the offer, he’ll be done with the transaction and head out as well.
Conversely, some pages innately have low exit rates. The homepages of blogs, category pages in wikis and product class pages in ecommerce sites branch down into different paths. This makes the user more likely to go to another part of the website, averting exits and making these pages look good.
Before gathering every page with a high exit rate in your analytics system and applying usability enhancements, examine the nature of the page and the intent of the users who are likely to go there. Never assume that a page needs tweaks just because of its exit rate. Factor in the context of these pages before making any changes because you might end up doing more harm than good.
6. The best traffic source is my site’s BFF
Common business sense dictates that we should invest more in things that pay off. In digital marketing, that can’t be more true. It’s crucial for every site’s success to know which channels are driving the most business and which ones aren’t getting much traction. A proper assessment of channels has to be done on a regular basis and this is an area where novices stumble.
Some marketers make the mistake of assuming that the best traffic referrers are a site’s best channels. That’s a myopic way of seeing things because traffic is just a means to an end. The most definitive KPI for any channel is the quality and quantity of conversions that it pushes to the site. Traffic is essentially useless if it doesn’t move the needle on your business goals.
When trying to assess your site’s best traffic sources, go beyond visits and page views. Get conversion attribution data and see which traffic sources bring in visitors who opt in, download or buy from you. Invest more resources in getting more traffic from these sources and investigate why other channels aren’t driving as much success.
7. It correlates, therefore it causes
Confusing correlation with causation is a common mistake that marketers make when they try to figure out the relationships between KPIs. Correlation refers to the strength of a relationship between two or more variables. Causation, on the other hand, indicates that the fluctuations of one variable are the reason for the movement in another.
It’s a bit of a cliché but it’s a relevant adage to this day: correlation does not imply causation. When writing reports, formulating strategies and making decisions based on numbers, we should always draw a line between correlative and causative tendencies between values.
Here’s an example: just because publishing more content correlates with earning more backlinks doesn’t mean that the former is causing the latter. The average marketer will see the correlation and produce more content to earn more links. A smarter marketer will investigate which content assets are attracting the most links and focus on producing more of those assets for the best results.
By the same token, there’s usually a correlation between traffic and sales in ecommerce sites. The more visits these sites receive, the better they tend to perform from a conversion standpoint. However, it wouldn’t be accurate to say that having more visitors is the reason for better sales. Other factors such as traffic quality, usability and copywriting effectiveness also have direct effects on a site’s ability to convert.
Recognizing the difference between correlation and causation will help you set more realistic expectations with your clients or bosses. It also keeps everyone level-headed and prevents people from jumping to unreasonable conclusions that might become the basis for flawed strategies.
In the end, effective web analytics isn’t just about the numbers – it’s about our knowledge of our own content and our understanding of the audience that it attracts. Not looking beyond the numbers and not thinking critically about best practices are easy ways to make costly mistakes.