Google on Cambridge Analytica scandal: 'Yes, we have brand safety issues, but we are not Facebook'

“We respect user consent and user privacy, and we also give users control over what they want to do with their data,” said Google

Google is confident it will never suffer a data breach on the scale of the one Facebook is currently facing over Cambridge Analytica, even though it has grappled with brand safety and advertising fraud issues of its own in the past.

Karim Temsamani, the tech giant’s president for Asia Pacific, and Arjun Narayan, its head of Trust and Safety for Southeast Asia, expressed this sentiment to the media during a panel at a ‘Growing with Google’ event in Singapore on 12 April, hours after Facebook co-founder Mark Zuckerberg testified to the United States Congress about the scandal.

Temsamani explained that the company’s confidence comes down to the fact that Google is used in a very different way from Facebook, as users turn to its technology to solve complex problems. Earlier in his presentation, he admitted the company does not always get it right, but said its commitment to keeping its platforms safe is ‘unwavering’.

“People use our products and platforms when they intend to do something, like search or get directions through Google Maps. So, the way in which the ecosystem works is incredibly different,” said the Frenchman. “We have had strong policies for a very long time and have not seen any of the issues that Zuckerberg has talked about on the platform, but that doesn’t mean we don’t have to pay incredible attention to it.”

Narayan added: “We respect user consent and user privacy, and we also give users control over what they want to do with their data. All of that makes me believe we are in a better place. Google has hired the best in data security to protect it against ‘bad actors’.”

Later, in a separate media briefing, Narayan acknowledged that Google has not always been able to stop pedophiles from commenting on ordinary children’s videos on YouTube, or ads from appearing on extremist websites, but argued that protecting the ecosystem is not a one-time effort and that the company is constantly adapting to the external environment.

“Our work is ongoing because abuse, like consumer trends, is shape-shifting and evolving. As the trends evolve, you must be ahead of the curve and keep one step ahead. That requires investment and commitment, both of which we have in plenty,” he explained.

“We have seen this (fraud) for the last 17 years and we have built capacity over time, and recent events have seen us invest even more in preventing these on our platforms.”

To combat this, he explained, Google uses a two-pronged approach of technology and human reviewers to weed out 98% of the egregious content it finds on its platforms. That means if a certain kind of abuse resurfaces, its technology can detect and remove it before it ever reaches a human reviewer.

“Obviously, we do need human reviewers because technology cannot catch everything, which is why there is that nuance and you always have human intelligence that complements technology, but it is also making our technology better because there is a feedback loop between human intelligence and our automated systems,” explained Narayan. “A word can be used in context and can mean totally different things, so making sure that machines understand that is not a trivial task. It is a constant effort to optimise technology.”

For advertisers, Narayan pointed to the policy and enforcement safeguards Google has in place, which can detect suspicious traffic and potential fraud before it manifests itself. “In some cases, we work with the advertisers to notify them about things like malware. When we do, we make sure we are transparent about it and make sure advertisers feel secure to advertise with us,” he said.

Narayan was ambiguous when asked to address calls for tech companies such as Facebook and Google to be regulated further, beyond GDPR, saying only that Google works very hard to comply with data protection laws, as is expected of any company doing business in Europe.

“We do have a very high standard which we place on ourselves in how we serve our users and we try to maintain those standards. When it comes to privacy and security, we want to make sure that we respect and are good custodians of user data because that is fundamental for our business model,” he explained.

The Drum recently spoke to GroupM’s brand safety chief John Montgomery, who warned that more regulation will only bolster walled gardens and called for self-regulation.
