
Not all keywords are created equal: search engine rankings

By Max Brockbank

The Media Image | The Drum Network article

This content is produced by The Drum Network, a paid-for membership club for CEOs and their agencies who want to share their expertise and grow their business.


August 27, 2019 | 8 min read

Without access to a website’s Analytics data – or in fact any other real information – most SEOs turn to historic search engine results page (SERP) rankings as a proxy for performance.

The Media Image weigh up how brands can capitalise on search engine rankings.

It’s logical, isn’t it, that the more keywords a site appears for, the more successful it will be?

Well, there are issues. Not all keywords are created equal. Some are being searched for by everyone – “hotel”, “red shoes”, “kardashian”, etc. – and some are being searched for by no-one. “Oven-baked rocking horse”, anyone?

And your website might appear for just one keyword, but it might have a search volume of a million and one, with a conversion rate of 50%. On that basis, the standard SERPs tracking model would say your site is a total failure, but you’d be too busy counting your money to care.

So, in the absence of hard facts, and if the raw number of keywords isn’t a factor, how can you measure SEO success? How about counting position ones, “position zeros” or total keywords in the top three or top five? But being at position one for keywords with zero search volume is just as pointless.

SERP-tracking tools attempt to provide an answer with algorithms that combine lots of factors – search volumes, ranking positions, conversion rate, user intent, etc. – which is all well and good. But are SERPs worth anything at all?
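
As a rough illustration of what such a combined score might look like, here is a minimal sketch that weights each ranking keyword by its search volume and an assumed CTR for its position. Every figure, including the 1% long-tail default, is an illustrative assumption, not any vendor’s actual model.

```python
# Illustrative sketch of a combined "visibility" score: weight each ranking
# keyword by its search volume and an assumed CTR for its position.
# Every number here is a made-up assumption, not a vendor's real model.

ASSUMED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # position -> CTR

def visibility_score(rankings):
    """rankings: iterable of (keyword, position, monthly_search_volume)."""
    score = 0.0
    for _keyword, position, volume in rankings:
        ctr = ASSUMED_CTR.get(position, 0.01)  # assume ~1% beyond position 5
        score += volume * ctr
    return score

# One high-volume keyword dwarfs a position-one ranking with zero demand.
print(visibility_score([
    ("hotel", 4, 1_000_000),
    ("oven-baked rocking horse", 1, 0),
]))  # -> 70000.0
```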

For a start there are lots of different types of search result pages, ranging from straight lists of individual websites to pages with lots of distractions like maps, images, knowledge graphs, position zero elements and especially paid search units (compare “hotels in London”, “red court shoes” and “Arnold Schwarzenegger”).

Such complications change how users interact with search results, with an obvious knock-on effect on click-through rate (CTR).

Location, Location, Location

And there’s more. Search for something with a local dimension and you’ll be offered a tailored result based on your geographical location. You don’t even need to specify the place: “plumbers near me” will give matching results from the Google My Business index based on your IP address.

The search result for “plumbers near me”. TMI is based in Nine Elms, Vauxhall.

Mobile search results will show pretty good location specifics based on cell-tower data, but if you’re connecting via a cable provider you might find your estimated position is somewhere you aren’t expecting at all.

Logging in to your Google account draws on previous search data too, and will tailor your results for better “accuracy”.

Personalised results can cause problems for SEOs, because clients who search for their own websites may get different results to the “official” SERPs compiled by third-party rank-tracking programs, which sample data from many different “locations” and IPs and average out the results. The difference may flatter performance or underreport it.
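
A minimal sketch of that sampling-and-averaging step, assuming a tracker that records one rank per sampled location. The locations and figures are hypothetical, and real trackers are far more elaborate.

```python
from statistics import mean

# Hypothetical rank samples for one keyword, one per sampled location/IP.
# None means the site didn't appear in the sampled results at all.
samples = {"London": 3, "Manchester": 5, "New York": None, "Sydney": 4}

ranked = [pos for pos in samples.values() if pos is not None]
average_rank = mean(ranked) if ranked else None        # 4 here
coverage = len(ranked) / len(samples)                  # appears in 75% of locations

print(f"average rank: {average_rank}, coverage: {coverage:.0%}")
```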

But you don’t even need to be logged in – or have a search history – for Google to try to tailor your search results.

Google uses “Metadata Search” as part of its information gathering: all the stuff that the simple act of looking for something reveals without you even knowing about it – including your operating system, screen resolution, browser type and plug-ins. Even network providers can be factored in.

Taken together, all of this gives a fairly detailed “profile” which Google can use – checked against all the other known profiles it holds – to make a fair “guess-timation” of who you are, where you are, and what you want to see.
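
Together those signals behave like a lightweight fingerprint. A minimal sketch of the idea follows; the signals listed and the hashing step are illustrative assumptions about profile-building in general, not Google’s actual method.

```python
import hashlib

# Hypothetical signals of the kind any web server can observe on a request,
# even with no login and no cookies. The hashing step is an illustrative
# assumption about profile-building, not Google's actual method.
signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",  # OS + browser
    "accept_language": "en-GB,en;q=0.9",
    "client_ip": "203.0.113.7",      # approximate location via IP geolocation
    "screen": "2560x1440",
    "plugins": ("pdf-viewer",),
    "network": "ExampleISP",         # hypothetical network provider
}

profile_id = hashlib.sha256(repr(sorted(signals.items())).encode()).hexdigest()[:16]
print("profile:", profile_id)  # stable across sessions, so searches can be linked
```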

I tested this once using anonymous proxies and different geographical locations – at the beginning and end of a flight out of London Heathrow – using a search term I had calculated to be broad enough to be ambiguous.

The result was something which reflected my geography, but also included results which were closely tied to my line of work. In other words, despite all my best attempts to conceal my identity, Google seemed to “know” who I was and where I was.

The Bigger Picture

So, taking all of this on board, what good are SERP-trackers if every search result is tending to the bespoke and personal?

My answer is that they’re good enough. Medical treatments and financial planning are based on population averages for the same reason.

SERPs data – based on big data averages – give at least an indication of what the “facts” are, and form a starting point for any SEO programme. Ultimately, the right actions can only be taken with actual hard data from Analytics or Search Console.

June’s sharp upturn in “Visibility”, as shown by SearchMetrics, should have indicated a massive rise in traffic – but exactly the opposite happened.

Recently, we were called in by a US website to explain why their revenue from organic search had fallen steeply, even though they were seemingly appearing for thousands of new keywords. The rise in keyword numbers was there in all the trackers, and all predicted a jump in traffic – the sort of result that gets a prospecting agency whooping with joy.

Yet Analytics showed a drop in revenue, even though Search Console reported an increase in search impressions.

But Search Console also provided the reason for this contradiction: the site was appearing for more keywords on desktop and mobile, but the average position on desktop was the bottom of page 2, compared to the middle of page 1 for mobile.
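
This sort of device split can be pulled straight from the Search Console Search Analytics API. A minimal sketch using google-api-python-client – the property URL and dates are placeholders, and the OAuth setup is omitted.

```python
from googleapiclient.discovery import build

# Assumes OAuth credentials have already been obtained via google-auth;
# the auth flow is omitted, and the property URL and dates are placeholders.
credentials = ...  # e.g. a google.oauth2.credentials.Credentials object

service = build("webmasters", "v3", credentials=credentials)
response = service.searchanalytics().query(
    siteUrl="https://example.com/",
    body={
        "startDate": "2019-06-01",
        "endDate": "2019-06-30",
        "dimensions": ["device"],  # splits rows into DESKTOP / MOBILE / TABLET
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0],
          "impressions:", row["impressions"],
          "avg position:", round(row["position"], 1))
```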

Looking at when the traffic was arriving, hits on the mobile site peaked on Saturday and Sunday, while desktop’s peak was Monday. It looks like people were researching a potential purchase at the weekend, but when they went to buy it on their desktop they weren’t finding it.
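
That weekday pattern is easy to check from an Analytics export. A minimal sketch, assuming a hypothetical sessions.csv with date, device and sessions columns:

```python
import pandas as pd

# Hypothetical Analytics export: one row per day per device category,
# with columns date, device, sessions.
df = pd.read_csv("sessions.csv", parse_dates=["date"])

df["weekday"] = df["date"].dt.day_name()
pivot = df.pivot_table(index="weekday", columns="device",
                       values="sessions", aggfunc="sum")

# Reorder the rows so a weekend-vs-Monday pattern is easy to spot.
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
print(pivot.reindex(order))
```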

So if SERP trackers – which use bulk data on millions of selected keywords – give unreliable results, and your own personal results may be completely different again, should we be thinking in another direction?

How about this? We accept the client’s contention that their rankings are better or worse than the “official” SERPs because of – not in spite of – their search history.

Their potential customers are likely to have a similar search history, so they could also be seeing a similar set of results. Perhaps we should care more about what searchers actually see, rather than what we think they see?

Does it Click?

Recent tests with virtual machines, anonymous proxies and a range of search profiles were aimed at seeing what the difference actually is; while the initial results are “interesting” and unexpected, they are by no means conclusive.

However, there is a real prospect that the client has the right idea about how they are appearing.

But are SERPs any use at all? We quote them as an indication of the CTRs at different positions, on the basis that more people click on result number 1 than number 2, and more people click on result number 2 than number 3, and so on.

Anecdotally, more people click on result number 10 than result number 9, and going through to the next page, more people click on result number 11 than number 10.

But does one profile of click-through rates really cover all searches? Is that even possible, given all the variations in search result pages? Back in 2011, SEO Scientist called the SERPs CTR model “a waste of time”. Since then, a lot of data has passed under the bridge and it’s pretty much accepted that there isn’t a one-size-fits-all model.

Both SearchMetrics and SEMrush quote variable CTRs in their results but, taken into consideration alongside personalised search, this raises a further question: are click-through rates personal too?

For now, we’re probably left with a variable CTR model based on the results page, landing page and market. It may not be perfect, but it’s good enough.
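
A minimal sketch of what “variable” might mean in practice – choosing a CTR curve per market and results-page layout rather than one global table. Every figure and category here is an illustrative assumption.

```python
# Illustrative CTR curves keyed by (market, results-page layout).
# Every figure and category here is a made-up assumption for the sketch.
CTR_CURVES = {
    ("UK", "organic_only"): {1: 0.30, 2: 0.16, 3: 0.11},
    ("UK", "ads_and_map"):  {1: 0.12, 2: 0.08, 3: 0.06},  # features depress organic CTR
    ("US", "organic_only"): {1: 0.27, 2: 0.15, 3: 0.10},
}

def estimated_clicks(market, layout, position, impressions):
    curve = CTR_CURVES.get((market, layout), {})
    return impressions * curve.get(position, 0.01)  # assume ~1% elsewhere

print(estimated_clicks("UK", "ads_and_map", 1, 10_000))  # -> 1200.0
```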

If something can give you 70-80% accuracy, do you reject it because it’s not 100%?

Max Brockbank is head of SEO at The Media Image.
