
Swimming with the current: the next frontier of consumer expectation and data regulation

By Arielle Garcia, chief privacy officer at UM Worldwide

November 15, 2021 | 7 min read

As the landscape of data privacy grows ever more complex, the industry faces a slew of unprecedented challenges, writes UM Worldwide’s Arielle Garcia as part of The Drum’s Data Deep Dive.


UM Worldwide's chief privacy officer on navigating the shifting tides of data privacy

Amid a fragmented regulatory environment, participants in the vast digital advertising ecosystem have been catapulted into what can only be described as a hybrid game of Scrabble-meets-Battleship, requiring them to unscramble, parse and apply new rules while simultaneously evading cannonballs on a trajectory toward their commercial goals.

While the disruption has created cost and complexity in the short term, the industry has generally accepted that the temporary discomfort is a necessary price to pay for the opportunity to repair fractured trust with consumers. Yet despite well-intentioned efforts to adapt to evolving regulations and expectations, diverging requirements and the spectrum of interpretations have distracted from the core goal of increased transparency and choice for people, and a fairer, more trustworthy online experience.

The regulatory landscape will undoubtedly remain dynamic, made more complex still by changes spearheaded by big tech. We’ve already seen this with Apple’s iOS 14.5 App Tracking Transparency framework and iOS 15 enhancements, Google’s deprecation of third-party cookies and its Privacy Sandbox proposals and, most recently, Facebook’s decision to restrict advertisers’ ability to target based on sensitive topics and interests.

How does the industry break the costly cycle of reactivity? How do we stop backing into solutions and begin futureproofing our collective approach to data collection and use?

It begins with understanding the shifting tides of consumer sentiment, as, ultimately, people’s expectations serve as the current that propels regulatory scrutiny. To that end, there are two key areas at the intersection of the cookieless future, emerging regulations and shifting consumer expectations that the industry should proactively prioritize. By doing so, the industry can prepare for impending regulation, reduce commercial disruption and, most importantly, do right by people.

Algorithmic harms: the next frontier

As we look ahead to a cookieless future, adoption of AI and machine learning will undoubtedly accelerate – whether to power smarter, next-generation contextual targeting that leverages semantic intelligence via natural language processing, or to bridge addressability and measurement gaps by balancing reliance on deterministic attributes with probabilistic modeling.
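To make that concrete, here is a minimal, purely illustrative sketch of contextual classification: it scores a page against a few hypothetical advertiser categories using simple word-overlap similarity. Real systems rely on far richer natural language models; the category names, keyword lists and scoring below are assumptions for illustration only, written here in Python.

from collections import Counter
from math import sqrt

# Hypothetical advertiser categories and seed keywords (illustrative only)
CATEGORIES = {
    "travel": "flight hotel beach vacation itinerary airline resort",
    "finance": "loan mortgage interest savings credit investment rate",
    "fitness": "workout protein running marathon training gym recovery",
}

def vectorize(text):
    # Bag-of-words term counts; a stand-in for real semantic embeddings
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(page_text):
    # Return the category whose keywords best match the page content
    page_vec = vectorize(page_text)
    scores = {cat: cosine(page_vec, vectorize(kw)) for cat, kw in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(classify("Compare airline fares and find last-minute hotel deals for your next beach vacation"))
# prints: travel

The point is that the page itself, rather than any identifier attached to the person reading it, drives the ad decision.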

Perhaps the most widely discussed proposed solution is Google’s Federated Learning of Cohorts (FLoC), which uses on-device AI/machine learning to analyze web activity and assign users to cohorts based on that activity. While on the surface cohort-based solutions are intended to offer elevated privacy compared with one-to-one solutions, the reality is that one-to-many targeting still entails the processing of people’s personal information. Privacy and responsible data use considerations must be just as central to these solutions to ensure they withstand the test of time. Greater reliance on algorithmic decision-making renders transparency and human oversight even more critical, as the industry aims to avoid obscuring potential harms to individuals, such as those caused by algorithmic bias and other forms of discrimination.
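For a feel for the mechanics, the sketch below shows, in highly simplified form, how a browser could assign a user to a cohort entirely on-device by hashing recent browsing history. It is loosely modeled on the SimHash approach described in the FLoC proposal; the bit width, hashing choices and example history are assumptions for illustration, not Chrome’s actual implementation.

import hashlib

COHORT_BITS = 16  # assumed cohort ID width, for illustration

def domain_hash(domain):
    # Stable 64-bit hash of a visited domain
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")

def simhash_cohort(visited_domains, bits=COHORT_BITS):
    # For each bit position, tally whether the domain hashes tend to have that
    # bit set; the cohort ID keeps a 1 wherever the majority agree, so users
    # with similar histories land in the same or nearby cohorts.
    tally = [0] * bits
    for domain in visited_domains:
        h = domain_hash(domain)
        for i in range(bits):
            tally[i] += 1 if (h >> i) & 1 else -1
    cohort = 0
    for i, count in enumerate(tally):
        if count > 0:
            cohort |= 1 << i
    return cohort

history = ["news.example", "recipes.example", "cycling.example"]
print(f"cohort id: {simhash_cohort(history)}")

Even in this toy version, the cohort ID is derived from, and therefore reveals something about, a person’s browsing history, which is why the privacy questions do not disappear just because no individual identifier leaves the device.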

As a result, it is imperative that privacy-preserving solutions are not viewed as a silver bullet. As these solutions are developed, they should be built in contemplation of the impact they will have on individuals – looking beyond privacy of personal information to using data responsibly and being transparent and accountable for algorithmic decision-making.

In its September announcement, the Federal Trade Commission (FTC) identified algorithmic and biometric bias as priority areas given their potential for discriminatory outcomes – not just discriminatory intent. Specifically, the FTC highlighted that algorithms developed for benign purposes ’like advertising’ can result in racial bias and discrimination against other protected classes.

Concurrently, Europe looks to maintain its position as a leader in issues of responsible data and technology through its proposed AI regulations. While questions remain about the effective scope of the regulations, these debates will surely have a trickle-down effect, and we will continue to see ’profiling’ and ’automated decision-making’ make their way into new legislative proposals around the world. For example, the California Privacy Rights Act (CPRA) requires that an opt-out be offered for use of data in ’automated decision-making’ or profiling.

Of course, the potential for algorithmic harm in the context of advertising is neither new nor limited to cohort-based solutions. In recent years, a handful of cases have come to light in which platform configuration inadvertently enabled discriminatory effects, such as the suppression of certain audiences (based on gender classification or age) from receiving opportunity-related advertisements. Detecting these harmful outcomes can be more complex in the context of interest segments or cohorts, where less obvious correlations between interests or behaviors can enable sensitive inferences. Complexity is further magnified when the algorithms that power these cohorts lie within walled gardens.
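One simple, illustrative starting point for surfacing that kind of skew is to compare delivery rates across groups and flag large gaps. It is nowhere near a full algorithmic audit, but it shows the sort of measurement the industry can commit to; the group labels, counts and threshold below are hypothetical.

# Hypothetical delivered impressions vs. eligible audience size, by group
DELIVERY = {
    "group_a": {"delivered": 9_000, "eligible": 100_000},
    "group_b": {"delivered": 4_500, "eligible": 100_000},
}
THRESHOLD = 0.8  # assumed fairness threshold, echoing the 'four-fifths rule'

rates = {group: d["delivered"] / d["eligible"] for group, d in DELIVERY.items()}
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    status = "flag for review" if ratio < THRESHOLD else "ok"
    print(f"{group}: delivery rate {rate:.3f}, ratio to best-served group {ratio:.2f} ({status})")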

Data is an input. Privacy is a fundamental right and an expectation. Using data responsibly is the brief before our industry, and responsible algorithms are the next frontier therein. It is more important than ever to establish standards, implement controls and commit to transparency and accountability.

A spotlight on dark patterns

The second key area at the intersection of the cookieless future and the shifting regulatory dialogue is the use of what are commonly referred to as ’dark patterns’, or user interface manipulations that may be used to trick users into sharing personal information.

As first-party data becomes increasingly valuable and identity solutions tout their foundation in ’consented data’, the risk grows that privacy experiences will be designed to prioritize ’opt-ins’ at the expense of meaningful transparency and choice. As the FTC looks to curb ’deceptive and manipulative conduct’ online, it is evaluating the ways in which dark patterns subvert individual autonomy – a topic on which the French data protection authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), published a report in 2019. As it stands, both the CPRA and the Colorado Privacy Act provide that consent obtained through the use of dark patterns is not valid consent.

Solving the core issue that led to the demise of third-party cookies requires a fundamental change to give people more transparency and more choice, even where it may be at the expense of scale. This is especially true given growing criticism of e-mail-based identifiers.

What do the two imperatives of reducing algorithmic harm and restricting dark patterns have in common? They highlight the broader impacts that data collection and use can have on people and society.

Navigating these challenges will require a commitment to pursuing the harder right versus the easier wrong, as skirting around the edges will only breed more consumer mistrust and, in turn, prolong regulatory whack-a-mole. Ultimately, these themes are the latest symptoms of fractured trust, and the current era of disruption provides the perfect catalyst for the industry to partner on standards and solutions that sail with the tide of consumer sentiment toward a symbiotic outcome.

Arielle Garcia is chief privacy officer at UM Worldwide.
