The other side of the internet: Identifying cues for concern

By Dave Sanderson, Founder & CEO, Nugit

July 10, 2017 | 5 min read

Today’s competitive, highly digital landscape sees businesses scrambling to get control of their data; organisations know how important it is to leverage analytics to make better decisions and gain an edge over their competitors.

Human data

But commercialism and consumerism aside, why don’t we focus more attention on using data analytics to address other pressing issues, such as identifying people who need help?

Identifying cues for concern

Of course, some have already made a valiant effort on this front. For example, a project that ran on Facebook called myPersonality found that data mining Facebook messages can reveal substance abusers. The research found a strong correlation between the words used in messages and specific types of substance abuse, and some of these might surprise you:

  • ‘Girl’, ‘woman’, ‘up’ and ‘down’ are positively correlated with alcohol use
  • ‘Hate’, ‘kill,’ ‘clinic’ and ‘pill’ are positively correlated with drug use

Less surprisingly, swear words such as ‘fuck’ and ‘shit’, sexual words such as ‘horny’ and ‘sex’, and words related to biological processes such as ‘blood’ and ‘pain’ were found to be positively correlated with all three types of substance use disorder, i.e. tobacco, alcohol and drugs. Being able to identify likely substance abusers increases the chances of providing help; after all, prevention is better than cure.
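To make the idea of a word-use correlation concrete, here is a minimal, hypothetical sketch of how such a signal might be checked. It is not the myPersonality methodology; the messages, the self-reported alcohol-use labels and the choice of a point-biserial correlation are illustrative assumptions only.

    # A minimal, hypothetical sketch (not the myPersonality pipeline): does the
    # presence of a given word in a user's messages correlate with a
    # self-reported substance-use label?
    from scipy.stats import pointbiserialr

    # Toy data: one entry per user. In practice this would come from consented,
    # anonymised message corpora paired with survey responses.
    messages = [
        "going out with the girls tonight, drinks on me",
        "stayed up all night again, feeling down",
        "quiet weekend, just reading and walking the dog",
        "another round? why not, the night is young",
        "early gym session then straight to work",
    ]
    alcohol_use = [1, 1, 0, 1, 0]  # hypothetical self-reported labels

    word = "down"
    has_word = [1 if word in text.split() else 0 for text in messages]

    # Point-biserial correlation between word presence (binary) and the label.
    r, p_value = pointbiserialr(has_word, alcohol_use)
    print(f"'{word}': r = {r:.2f}, p = {p_value:.3f}")

A positive r for words such as ‘down’ or ‘girl’ would mirror the kind of association the myPersonality research reported, though a real study would control for message volume and correct for testing many words at once.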

Facebook isn’t the only social media platform that’s making an effort. Recently, Google announced four measures that it will deploy to help fight the spread of extremist content on YouTube:

  1. Utilising machine learning to train its automated systems to better identify terror-related (and even self-harm) videos on YouTube (a simple illustration of this kind of classifier follows this list)
  2. Doubling the size of its Trusted Flagger programme – a group of experts with special privileges who review content flagged for violating community guidelines – which Google will also support with additional operational grants
  3. Taking a tougher stance on videos that come close to violating YouTube policies. While these videos will not be removed entirely, they will be hidden, carry a warning and be unable to generate any advertising revenue
  4. Building on its Creators for Change programme, which will redirect users targeted by extremist groups such as ISIS to counter-extremist content. Working with Jigsaw to implement the Redirect Method, this approach harnesses targeted online advertising to reach potential ISIS recruits, then redirects them towards anti-terrorist videos that could change their minds about joining.
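To make point 1 concrete, here is a minimal, hypothetical sketch of automated content flagging: a small text classifier over video titles. This is not YouTube’s actual system; the toy data, labels, TF-IDF features and logistic regression model are illustrative assumptions only.

    # A minimal, hypothetical sketch of automated flagging (not YouTube's system):
    # a simple text classifier over video titles, trained on toy labelled data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled examples (1 = likely violates policy, 0 = benign), illustrative only.
    titles = [
        "join the fight, glory awaits new recruits",
        "how to bake sourdough bread at home",
        "a message to our brothers: take up arms",
        "my trip to the botanical gardens",
    ]
    labels = [1, 0, 1, 0]

    # TF-IDF features feeding a logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(titles, labels)

    # Score a new title; anything above a chosen threshold would go to human review.
    score = model.predict_proba(["weekend gardening tips for beginners"])[0][1]
    print(f"flag probability: {score:.2f}")

In practice a system like this would only surface candidates for human reviewers, such as those in the Trusted Flagger programme, rather than act on its own.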

It is hoped that these efforts will allow the company to draw on specialist groups to target these specific types of videos. The move comes on the heels of high-profile brand safety concerns raised when Fortune 500 companies found their ads running alongside (and funding) unsavoury content. Naturally, this has led to mounting pressure from both governments and brands to address the issue and stop enabling terrorist propaganda online.

Jumping hurdles

Despite encouraging signs, challenges are to be expected. Beyond the issue of scale (which technology can address), a major concern is where we draw the line between data collection and an invasion of personal privacy. In the Facebook research cited above, users had voluntarily signed up for the myPersonality project and agreed to allow their data to be used for research purposes only.

But imagine if they hadn’t done so and found themselves flagged as likely substance abusers. How receptive would they then be to a sudden message such as: “Hi there, we think you need help with substance abuse”? This presents a whole other set of problems because, ultimately, those who need help are often in denial and refuse assistance.

The same goes for Google’s counter-terrorism initiatives. How do you avoid profiling an entire group, and mistakenly identifying the wrong individuals based on poor stereotypes? Mistakes of this kind often have long-lasting negative effects on innocent lives. The media has shown us time and again how often such mistakes happen, further marginalising a group of people and causing unrest rather than helping.

Despite the hurdles, however, the benefits of using data for a good cause outweigh the negatives. Businesses have learnt over the years that having proper workflows and processes in place is the first step towards leveraging their data. The same approach should be applied to collecting data for the greater good: processes should be in place to protect people and their data. The first priority should always be safety; people should feel safe sharing details online, and data should always be used to help and save people, not to prosecute them.

Once that happens, imagine the impact data could have in preventing deaths from substance abuse, terrorism or even self-harm. Data really can help make the world a better place, and we have tonnes of it to work with.

David Sanderson is chief executive officer and founder at Nugit.
