
At your service: Are virtual assistants just another way for big companies to harvest our data?

By Sean Hargrave

October 3, 2015 | 7 min read

Virtual assistants such as Apple's Siri are changing the way consumers organise their lives. But should we be concerned by the data and privacy implications of voice search?

It is the next big step in search and artificial intelligence – virtual personal assistants that can understand their owner’s voice and do their bidding, whether it is checking when anniversary flowers will be delivered or putting a conference call in the calendar.

The latest entrant is Facebook, which announced a trial of its M service in California in August. The more established players already up and running include Google’s Now and Ok Google services as well as Apple’s Siri and Microsoft’s Cortana.

The technology undoubtedly offers a useful service to consumers, but a question remains. Given that the big tech companies behind the assistants have a history of using popular free services to gather data on users which can be sold to advertisers to improve targeting, should the public be concerned? Are our conversations with personal devices being shared? If so, with whom and how secure is our information?

According to Jeremy Pounder, research director at Mindshare, there are some areas of concern that consumers are fully aware of, even though the technology is only in its infancy. He oversaw qualitative and then quantitative research of 1,000 people, which showed nearly two in three are interested or very interested in the technology, despite having concerns over what would happen to the data the services create.

“The issue with virtual assistants is that for them to work effectively you have to hand your life over to them,” he says.

“We found that there was a lot of interest in the services they can offer and we’re predicting there are going to be three main mindsets. There are those who just won’t touch them because they’re concerned they’re a risk to their privacy, others who will accept any potential risk for an easier life, and some who just won’t even think about it.”

Creepy or useful?

The proportions are uncertain while the services are still rolling out and remain largely the preserve of early adopters. However, for Tony Anscombe, senior security evangelist at AVG, there are two very real concerns as the services gain traction.

“There’s little doubt that the big consumer tech companies are not providing this technology for nothing. At some stage it’s bound to be used for better targeting of advertising,” he says.

“It depends how you feel about that, but many might find it a little creepy because, particularly with voice, you open up a lot more – you tend to be more verbose as you ‘chat’ to your device. The other issue people may not be thinking about is how voice authentication is growing and so hackers may one day listen in to someone’s voice and be able to mimic it to hijack their identity. The issue is, who’s all this data being shared with or who will it one day be shared with?”

Who’s sharing our data?

It is a question that Ken Munro, senior partner at ethical hackers Pen Test Partners, has been putting considerable research into. He believes the public would be concerned if they knew that it is not only their voice that is being sent to the service provider but also additional data, such as their contacts and appointments.

“We’ve been truly unsettled by some of the privacy arrangements surrounding virtual assistants,” he says. “Siri transmits a lot of extra data with voice requests, including contacts’ names and nicknames, to enable it to work better. Microsoft’s Cortana similarly sends contact and calendar data with voice requests which, it makes clear, can be stored and processed by itself, subsidiaries or affiliates, and service providers in accordance with US-EU Safe Harbour regulations.

“Google Now is not specifically mentioned by Google’s terms of service, but it specifies the right to share data with third parties and, as far as we’re concerned, the volume and type of data Google aggregates seems somewhat ‘stalkerish’.”

Privacy arrangements

For those who are concerned, Apple points Siri users to its privacy site, where it makes clear that data sent to its servers is used only to improve the technology. It insists that, although user data such as names, contacts and song choices are sent to Apple in encrypted form, this information is not linked to the owner’s Apple ID but rather to a randomised identity given to the device, and it is not sold on to marketers.

It is a similar position over at Facebook, whose virtual assistant M has attracted attention for its policy of using staff to help train the technology to work better.

A Facebook spokesperson pointed out to The Drum that requests made over M do not bring with them information from a user’s profile page, although common contact addresses are stored so it doesn’t have to ask for the same delivery information several times. In addition, insights from a request are not used to build a more detailed profile of a user for advertisers, and marketers are not permitted to buy any data gathered by the technology.

Google similarly suggests that anyone concerned check its privacy policy, but points out that Google Now and Ok Google are both opt-in products. It assures users that it does not sell any personally identifiable information to advertisers.

Future worries

This, of course, does not mean that a user’s data is not stored and processed by Google, only that it is not identifiable when used for more targeted marketing.

As far as Rafael Laguna, chief executive of Open-Xchange, is concerned, consumers should remember that the tech giants have traditionally made very good profits from harvesting user data, even if it is anonymised. In the long run, it would not be surprising for our conversations with our computers, desktops and tablets, and even our televisions, to be stored and sold on, just like search and social profile data is today.

“AI assistants are simply another avenue for these companies to harvest your data,” he suggests. “Users need to know they’re paying for these services with their data. Companies like Google, Facebook and Amazon are investing in new ways of harvesting and exploiting your data through devices that monitor your home, listen to your private conversations and map your movements. There is no reason to think they won’t do the same with AI personal assistants.”

While consumers might appear quite well protected right now, the concern is what happens once we become accustomed to virtual assistants and stop questioning them. The conversations we have with our personal devices may well end up being sold to third parties, whom we will have to trust to invest in top-level cyber security measures.

This feature is also published in The Drum's 30 September issue.
