Google Knowledge Graph: Answers Not Pages

By Kathy Heslop

May 8, 2013 | 4 min read

In May 2012, Google announced a new feature for its search product: a knowledge base containing verified information that could answer search queries directly. They called it Knowledge Graph.

The logic was sound. When people search for Benedict Cumberbatch, they most likely aren't looking for a page about the Sherlock Holmes star; they're just looking for information about him. The movies he's acted in, the show you saw him in last night, and whether Alan Rickman really is his dad.

By providing users with that relevant information immediately, Google could tap into the value offered by answer engines of old, like Ask Jeeves, or more pertinently, that friend of yours who always seems to know way too much about celebrities.

But how relevant is the information turning up in Knowledge Graph?

While bigger stars like Cumberbatch have accurate Knowledge Graph entries, some comparatively smaller stars who've earned entries still have to contend with occasional misinformation.

World-renowned drummer Gary Husband, for instance, has a vast list of Level 42 songs attributed to him, despite the fact that he wasn't involved in writing, producing or even performing most of them during his tenure with the band. For an independent, far-reaching and original artist like him, this significant mischaracterisation of his temporary relationship with Level 42 could have a big impact.

He explains: “Google's Knowledge Graph is gravely misrepresenting a large number of artists that I know, myself included, by frequently presenting misinformation as 'factual' information (more akin to Wikipedia). Since Google is widely perceived as a trusted source of intelligence, such widespread errors have the potential to be highly damaging to artists' careers and of course wholly misleading to anyone searching for accurate facts - journalists, booking agents, producers, promoters or fans. But for me, the most alarming part of all this is that we seem powerless in our efforts to get Google to rectify the errors. This is totally unacceptable. The lengths I have gone to over the past nine months to set my own 'record' straight with Google have been in vain: it would seem that nobody there is remotely interested.”

So where exactly is Google getting the entry wrong? It would appear they're misinterpreting some of the information they collect, and the strength of the relationships between the entities it describes.

All the data in Knowledge Graph comes from four sources: Wikipedia, Freebase, the CIA World Factbook and, interestingly enough for brands, Google Plus. All 570 million objects, 18 billion facts and the relationships between these different entries are based on the information available in those four sources.

The algorithms traversing this massive amount of data appear to occasionally misinterpret the quality of some of this information. In Husband's case, it might be that Wikipedia's separate section about his relationship with Level 42 is being misconstrued as greater involvement.
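To see how this kind of misattribution can happen, consider a toy sketch of an entity graph with weighted, qualified relationships. This is purely illustrative and not Google's actual data model; the class, song titles and years below are all invented. The point is that a naive traversal which ignores edge qualifiers (such as when a musician was actually in a band) will happily attribute the band's entire catalogue to a former member:

```python
# Hypothetical entity-graph sketch -- NOT Google's actual system.
# Edges are (relation, object, qualifiers); qualifiers carry context
# such as years, which a careless traversal can simply drop.

from collections import defaultdict

class EntityGraph:
    def __init__(self):
        # subject -> list of (relation, object, qualifiers)
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj, **qualifiers):
        self.edges[subject].append((relation, obj, qualifiers))

    def naive_songs_for(self, artist):
        """Attribute every song by any band the artist was ever a member of."""
        songs = []
        for relation, band, _ in self.edges[artist]:
            if relation == "member_of":
                for rel, song, _ in self.edges[band]:
                    if rel == "recorded":
                        songs.append(song)
        return songs

    def qualified_songs_for(self, artist):
        """Only attribute songs recorded during the artist's tenure."""
        songs = []
        for relation, band, q in self.edges[artist]:
            if relation != "member_of":
                continue
            joined, left = q.get("joined", 0), q.get("left", 9999)
            for rel, song, sq in self.edges[band]:
                if rel == "recorded" and joined <= sq.get("year", -1) <= left:
                    songs.append(song)
        return songs

g = EntityGraph()
# Tenure years and song data here are illustrative placeholders,
# not verified discography facts.
g.add_fact("Gary Husband", "member_of", "Level 42", joined=1987, left=1991)
g.add_fact("Level 42", "recorded", "Song A", year=1984)
g.add_fact("Level 42", "recorded", "Song B", year=1989)

print(g.naive_songs_for("Gary Husband"))      # ['Song A', 'Song B'] -- the misattribution
print(g.qualified_songs_for("Gary Husband"))  # ['Song B'] -- tenure respected
```

Both traversals walk the same edges; the only difference is whether the qualifiers on the membership edge are honoured, which is exactly the sort of relationship-quality judgement the article suggests the algorithms are getting wrong.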

Exactly where the algorithms are misfiring is something that will only ever be clear to Google.

What is clear, though, is that Google are aware their algorithms aren't perfect just yet - hence the repeated removal and reinstatement of different Knowledge Graph sections from time to time.

With some interesting competition from Bing's knowledge base, Satori, Google are certainly being kept on their toes when it comes to making sure their Knowledge Graph entries are always relevant and useful.

More importantly, if Google are to keep up the momentum they've built in their march towards semantically indexed, socially relevant search results, they'll be keen to make sure the entries for Husband are as accurate as the entries for Cumberbatch.
