Playing Truth or Dare With Traffic Sources
For adult website admins and owners, traffic is a constant preoccupation, with a lot of resources spent on acquiring visitors to a website, and then analyzing the metrics of that traffic flow. Sometimes it is a simple analysis that focuses mainly on the quantity and not the quality of this traffic, i.e., asking, “How many visitors does my site get?” rather than “How much money is each visitor making (or costing) me?”
This quality level is not merely a matter of "conversion ratios," but of the underlying factors that affect those ratios. One easy example is "bot" hits: no matter how many you have, or how good your offer is, they won't generate a single sale, simply because this traffic doesn't represent human visitors.
One problem is that a site’s stats do not always accurately reflect its traffic, and more problematically, many of those responsible for interpreting those statistics fail to understand what they are looking at — and as a result, may make bad decisions based on faulty data and/or misinterpretations.
A recent example occurred while examining the traffic stats of two totally different sites: one a white-label adult cam site, the other a WordPress-powered, non-adult mainstream blog. Beyond having the same owner and using Google Analytics to monitor visitors, these sites share no content, links, underlying code or publishing platform.
What they do share is a similar negative-value traffic profile, evident in short order after these sites' launches, and before any links were added to them. This includes everything from unsolicited offers by "SEO experts" who claim they can quickly boost a site to page one of Google's search rankings, to referrer spam from confusingly similar, typo-squatted domain names.
With no actual value to this traffic, the decision was made to block it using traditional .htaccess methods, which did not work. The reason is that these referrals are not the result of actual visits to these sites, but of an automated script that randomizes Google Analytics ID codes to create the impression of visits to your site (and the sites of every other GA user). Unlike legitimate (but unwanted) visits from other entities, these phantom hits never touch your server, so they cannot be blocked through traditional means, and they serve as an example of the complexity of today's traffic game.
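The mechanics can be sketched as follows. This spam works by posting fake hits straight to Google Analytics' public Measurement Protocol endpoint against guessed tracking IDs, so the targeted web server never receives a request. The snippet below is a minimal, hedged illustration of what such a payload looks like; it builds the hit URL but does not send anything, and the spam domain is a placeholder.

```python
# Illustrative sketch of a "ghost" referral hit: a pageview posted
# directly to Google Analytics' Measurement Protocol with a randomized
# tracking ID. Because the hit goes to GA, not to your server,
# .htaccess rules never see it. (Nothing is actually sent here.)
from urllib.parse import urlencode
import random

GA_ENDPOINT = "https://www.google-analytics.com/collect"

def build_ghost_hit(spam_referrer):
    """Build (but do not send) a fake pageview against a random GA property ID."""
    tracking_id = "UA-%d-1" % random.randint(1, 99999999)  # randomized guess
    payload = {
        "v": "1",                     # Measurement Protocol version
        "t": "pageview",              # hit type
        "tid": tracking_id,           # the "victim" property ID
        "cid": str(random.random()),  # arbitrary client ID
        "dr": spam_referrer,          # the spam referrer to display in reports
        "dp": "/",                    # claimed page path
    }
    return GA_ENDPOINT + "?" + urlencode(payload)

url = build_ghost_hit("http://spam-example.test/")
print(url)
```

Because these hits carry arbitrary (or empty) hostnames, the usual countermeasure is an analytics-side filter that keeps only hits reporting your own valid hostname, rather than a server-side block.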
Other bot hits and potentially toxic referrals may be characterized by visits where the browser language is not set, as well as by a higher-than-normal rate of new sessions and new users, an elevated bounce rate, far fewer page views per session and shorter session durations compared to other traffic sources hitting the same site. These engagement metrics can negatively influence search rankings, so in addition to being useless to your revenue stream, such traffic can actively harm it.
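Those signals can be combined into a rough screen. The sketch below is illustrative only, not a production bot detector, and the session field names are invented for the example rather than taken from any particular analytics export; requiring several signals at once avoids flagging ordinary first-time visitors.

```python
# Minimal sketch: flag sessions matching the suspect profile described
# above -- no browser language, a brand-new session, a bounce, and a
# near-zero duration. Field names are illustrative placeholders.
def looks_suspect(session):
    signals = [
        not session.get("language"),          # browser language not set
        session.get("new_session", False),    # first-ever visit
        session.get("pages_viewed", 0) <= 1,  # bounced after one page
        session.get("duration_sec", 0) < 2,   # almost no time on site
    ]
    return sum(signals) >= 3  # require several signals, never just one

sessions = [
    {"language": "", "new_session": True, "pages_viewed": 1, "duration_sec": 0},
    {"language": "en-us", "new_session": False, "pages_viewed": 4, "duration_sec": 180},
]
flags = [looks_suspect(s) for s in sessions]
print(flags)  # the first session trips all four signals, the second none
```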
Bots are not the only factors adding confusion to a website’s traffic analytics, however.
For example, the debate over the use of full vs. relative URLs on internal pages takes an additional twist when you consider their impact on analytics: full URLs can cause internal clicks to be recorded as referrals from an external website, rather than as navigation within your own site. As with referrer spam, this data may be more of a nuisance than anything else, because it clutters your view of the site's real traffic sources.
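One way to avoid that clutter at the source is to rewrite internal links to root-relative form. The helper below is a hedged sketch of that idea, with "example.com" standing in for your own domain; external links pass through untouched.

```python
# Illustrative helper: rewrite full internal URLs to root-relative ones
# so internal clicks are less likely to appear as external referrals.
# The OWN_HOSTS set is a placeholder for your actual domain(s).
from urllib.parse import urlparse

OWN_HOSTS = {"example.com", "www.example.com"}

def to_relative(url):
    parts = urlparse(url)
    if parts.netloc.lower() in OWN_HOSTS:
        # keep the path, query and fragment; drop the scheme and host
        rel = parts.path or "/"
        if parts.query:
            rel += "?" + parts.query
        if parts.fragment:
            rel += "#" + parts.fragment
        return rel
    return url  # external links are left as-is

print(to_relative("http://www.example.com/tour/?step=2"))  # -> /tour/?step=2
print(to_relative("http://partner-site.test/"))            # unchanged
```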
Another murky area for many traffic analysts is the study of direct traffic, which Google Analytics labels "(direct) / (none)" in its Source / Medium report. In the early days of the web, a high number here might be great news, as it typically indicated return visits from bookmarks, as well as visitors typing the domain name directly into the browser's address bar, an indication of branding success.
Today, however, there are numerous other instances where no referrer information is provided. These include bots visiting your site and visits arriving via links in apps, Word documents, emails, newsletters, PDF files and shortened URLs that lack an added tracking code, as well as cases where HTTPS or other technologies mask a visitor's actual source, such as when a visitor clicks from an HTTPS page to an HTTP one and the browser drops the referrer.
One solution is to identify the sources of your traffic via tracking codes and other means when possible, providing better intelligence on the value of a particular campaign — an especially important factor for time-consuming social media marketing. Additionally, do not underestimate the impact that you and your team have on your own stats — from setting the site as your home page, to maintenance, updates and regular use: Unless you filter out your own IP addresses, your site will seem busier than it actually is.
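Tagging works by appending standard campaign parameters to the links you distribute, so that visits which would otherwise land in "(direct) / (none)" are attributed to a source. The sketch below builds such a link; the utm_source, utm_medium and utm_campaign names are Google Analytics' standard campaign parameters, while the destination URL and campaign values are placeholders.

```python
# Sketch: tag an outbound campaign link with standard GA campaign
# parameters so the resulting visit is attributed instead of "direct."
from urllib.parse import urlencode

def tag_link(base_url, source, medium, campaign):
    params = urlencode({
        "utm_source": source,      # e.g. the newsletter or social network
        "utm_medium": medium,      # e.g. email, social, banner
        "utm_campaign": campaign,  # which promotion this link belongs to
    })
    sep = "&" if "?" in base_url else "?"
    return base_url + sep + params

link = tag_link("http://example.com/join/", "newsletter", "email", "spring-promo")
print(link)
```

A separate tag per placement (one for the newsletter, one for each social profile) is what makes it possible to judge whether time-consuming channels such as social media are paying for themselves.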
Another useful technique is to redirect proxy traffic, which bots often use, to a subtle "skill test" page that determines whether the visitor is a bot or an actual human being. Present a simple task, such as a Captcha-style box or, better, a question that a person can easily answer (and that proves useful to your content marketing efforts) but that might stump a bot, such as, "Do you prefer photos or videos?" These visitors will still show in your analytics, but separating good traffic from bad is the first step in improving your site's traffic quality.
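The routing half of that idea can be sketched in a few lines. The header names below are the common de-facto proxy headers, and the paths are placeholders; note that some legitimate visitors behind corporate proxies will carry these headers too, which is exactly why the challenge should be trivial for a human.

```python
# Rough sketch of the redirect logic described above: requests bearing
# proxy-revealing headers are routed to a challenge page instead of
# the content. Illustrative only, not a complete bot defense.
PROXY_HEADERS = ("Via", "X-Forwarded-For", "Forwarded")

def route_request(headers):
    """Return the path this visitor should be sent to."""
    if any(h in headers for h in PROXY_HEADERS):
        return "/challenge"  # the simple skill test, e.g. "photos or videos?"
    return "/content"

print(route_request({"User-Agent": "Mozilla/5.0"}))                   # -> /content
print(route_request({"Via": "1.1 some-proxy", "User-Agent": "bot"}))  # -> /challenge
```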
By eliminating unproductive traffic and as much unnecessary information as possible from your analytic reports, it becomes much easier to drill down to the data that makes a difference. Some of this clutter can be eliminated by using filters within your analytics software, while other times, .htaccess and other technologies can help by blocking access to your site from specific domains, IPs, languages, platforms, regions, and more.
When it comes to playing Truth or Dare with your website’s traffic sources, it’s important to face the truth that your site is not seeing the numbers you may believe it is, and to also dare yourself to cultivate new and more profitable traffic sources. It may be the best move you make for your site in 2015.