Google now understands human language with 95% accuracy. Its ability to decipher the intent behind our words has made it the market leader in search. Job done? Not quite.
With reports that Google image searches now account for almost 30% of all US queries, the industry giant is looking to capitalise on this by turning its attention to non-text searches.
But it’s not just Google that’s branching out from word-based searches. Bing is sitting up and taking notice too, having invested heavily in visual search with new features such as its deep image tool. And with over 175m monthly users, it makes sense that Pinterest is looking to capitalise on image-based search as well. As chief executive Ben Silbermann explained, “a lot of the future of search is going to be about pictures instead of keywords”.
Unsurprisingly, image search is highly relevant to sectors like fashion and beauty, where visuals sell. But these aren’t the only verticals that stand to benefit; homeware, food, travel and retail also have the potential to harness the power of image search to strengthen their brands. Here we’ll explore the different types of visual searches, their benefits and, most importantly, how you can optimise for them. But first, what do we mean by visual search and image search?
Image search vs. visual search
Put simply, image search is the act of retrieving images from a search engine based on a user’s input. As computers become better at understanding visual information, these inputs have become more varied. For example, developments in technology mean search engines can now interpret colour, texture and shapes within an image in ways that weren’t previously possible.
And it’s not just searches with image-based results that are on the rise. We’re now seeing the emergence of searches using image inputs, too. In other words, users can now search with images, not just for them.
This has led to the coining of the umbrella term ‘visual search’ to refer to all visual data inputs and retrieval – including new reverse image search technology and the traditional keyword-in, image-out model.
Types of visual search
There are many different types of visual search – here’s a breakdown of what each means:
Traditional image search
In the early days, the only information search engines could see was metadata – keyword-rich elements within the code. So searches for ‘red trench coat’ would return images that included this keyword in the file name or alt text tag. These meta elements are still important, but developments in image recognition technology mean search engines can now understand the images themselves, too.
Reverse image search
Pioneered by Google, the image-in, similar-images-out (or reverse image search) model initially became popular with brands trying to identify uncredited uses of their images. But marketers are now adopting the technology in consumer-facing ways too.
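To make the idea concrete, here is a minimal sketch of how an image-in, similar-images-out comparison can work, using a toy ‘average hash’: downscale each image to a small greyscale grid, turn each pixel into a bit depending on whether it is brighter than the grid’s mean, and compare the resulting fingerprints by Hamming distance. This is a simplified stand-in – production systems like Google’s use learned visual features, not simple hashes – and the 4x4 pixel grids below are made-up data.

```python
# Toy average-hash sketch of reverse image search.
# NOTE: real search engines use learned features; this is illustrative only.

def average_hash(pixels):
    """pixels: 2D list of greyscale values (0-255). Returns a bit string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    # Each pixel becomes 1 if brighter than the image's mean, else 0
    return ''.join('1' if v > mean else '0' for v in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 4x4 greyscale 'images': two similar coats and one sky scene
coat_a = [[200, 190, 60, 50], [210, 185, 55, 45],
          [205, 195, 58, 48], [198, 188, 62, 52]]
coat_b = [[195, 192, 63, 47], [208, 180, 50, 49],
          [200, 190, 60, 50], [199, 185, 61, 51]]
sky    = [[30, 40, 35, 32], [220, 230, 225, 228],
          [28, 38, 33, 31], [222, 232, 227, 229]]

h_a, h_b, h_s = (average_hash(p) for p in (coat_a, coat_b, sky))
print(hamming(h_a, h_b))  # small distance: visually similar
print(hamming(h_a, h_s))  # large distance: visually different
```

A reverse image search engine ranks its index by a distance like this, returning the closest matches to the uploaded image first.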
Related image search
Pinterest leads the way in related image searches, which follow a similar logic to the Google Suggest function in that they show users queries and common prepositions relating to what they’re looking for.
Guided search
Another Pinterest initiative, guided search builds on related search by suggesting filters – like colour and size – to help users focus their search. Its success has led most major search engines to adopt similar functionality.
Augmented reality search
Thanks to smartphone camera integration, users can now search using visual inputs from the palm of their hand. Google and Pinterest’s Lens apps allow users to capture objects in real life and return related images.
Image repositories
Pinterest was the first to realise that users may want to save their image search results, especially when looking for inspiration. This led to the creation of Pinterest boards and has paved the way for more repositories – i.e. places to save and collect images.
Deep image search
Pioneered by Bing, deep image search lets you select objects within an image using a crop tool and find related images.
The key thing to note here is that image search is no longer limited to Google (or even Bing and Yahoo). Just as YouTube started as a video-sharing site and evolved into a video search engine, we’re seeing a variety of platforms, like Pinterest, gaining image ‘search engine’ status.
What does the future hold for visual search?
The key to the success of visual search is the ability of search engines to attribute context to image-based content. Today, this is achieved by understanding shape, colour, texture, and data labels, but as understanding of visual inputs and outputs improves, so will the search experience for users.
A key driver of this innovation is the ImageNet large-scale visual recognition challenge (ILSVRC). Now in its eighth year, it’s the largest academic challenge in computer vision, putting leading image recognition algorithms to the test. One interesting criterion it assesses is an algorithm’s ability to describe a complex scene by accurately locating and identifying many objects within it.
This is the next big challenge for visual search: to not only recognise objects within visual content, but to fully understand context through its composition.
The search engine that achieves this will be able to connect us with a wealth of information based on our surroundings. It will be able to understand the interior design theme of your living room and recommend complementary products, or suggest recipes based on an image of ingredients, for example.
Beyond commercial applications, these algorithms will become more impactful once they begin to establish visual blueprints of the world around us. They will be able to compare visual inputs with these and identify subtle differences – like the faulty wiring in an image of a car engine, or a hairline fracture in a bone X-ray.
Search engines have already made a huge impact on the way we live, through their understanding of language, but that’s just the beginning. We’re on the verge of a visual revolution that will change the way we interact with the world for good.
Sam Colebrook is a content strategist at iCrossing