
3 most common attribution challenges and how to solve them

Meta | Open Mic article

This content is produced by a publishing partner of Open Mic.

Open Mic is the self-publishing platform for the marketing industry, allowing members to publish news, opinion and insights on thedrum.com.

November 8, 2022 | 10 min read

Attribution refers to how we understand which marketing touchpoints work and, we hope, why.

If we know the contribution of each channel, how they interact, and which ads were most and least successful in driving outcomes, then we know how to allocate resources. It’s why 76% of marketers evaluate their digital marketing efforts by using attribution.

What makes attribution so ambiguous and difficult to understand is that appearances do not always match reality. Marketers rarely have a firm foothold on these points, or on related questions such as: how can we identify successful strategies and channels when the tools we rely on report different numbers of conversions? And how can we verify which model of performance is the most accurate?

The most definite thing you can say is that clicks and impressions played some role - but what role? And why do we choose some methodologies over others to decide these questions? Making decisions based on reporting tools alone isn't sufficient, because we cannot simply take their accuracy or significance on trust. A cliché that applies readily here is that some of our most important realities are the hardest to see. With that in mind, can you trust Meta's reporting tools, or any others?

Here we list marketers' three most common attribution challenges and how to solve them.

Challenge 1: Changes in data collection and measurement

The changes brought about by the various privacy initiatives all target the mechanisms involved in identity resolution and matching. These underpin every aspect of marketing: serving a relevant ad, reaching relevant customers, driving important business outcomes and reporting on those outcomes. For attribution, we can think about how these changes affect both the quality of the data feeding into a model and the model itself. On the former, reduced identity resolution means marketers have less data about buyer behavior online from touchpoint to touchpoint to inform their decisions, which ultimately impacts advertising performance. No matter how sophisticated a model is, it can't attribute credit to things it doesn't see. This is quite a change from a time not too long ago, when it was reasonable for advertisers to expect granular, log-level impression and click data with associated timestamps keyed to individual IDs. Despite these changes, marketers still need to optimize campaigns regularly.

Recommendation 1: Build a solid data infrastructure and explore modelling techniques for better attribution

A more complete path to conversion begins by building systems to maintain high-quality consumer data with privacy consent in place: systems which connect all levels of marketers' data - website data, offsite data, CRM data, customer interaction data, data partnerships and customer analytics. Consent is the thread that ties it all together, and it can be cultivated more easily via a centralized platform such as a server-to-server (S2S) solution like Meta's Conversions API, which carries consent signals through the various engagements. With these in place, it becomes easier to resolve users' identities across disparate contexts and join them with conversion data to infer actions. It's also worth keeping in mind that MTA vendors specialize in combining data from different publishers into their modelling.
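To make this concrete, here is a minimal sketch of what a consent-aware S2S event might look like via the Conversions API. The pixel ID, access token, API version and consent flag are placeholders and the field set is simplified - treat it as an illustration of the pattern rather than production code, and check Meta's current documentation for the authoritative schema.

```python
import hashlib
import time

import requests  # third-party: pip install requests

# Placeholders - substitute your own pixel ID, access token and API version.
PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ENDPOINT = f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events"


def sha256(value: str) -> str:
    """Normalize and hash an identifier before it leaves your server."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()


def send_purchase_event(email: str, order_value: float, currency: str, consented: bool):
    """Send one server-side Purchase event, but only for a consented user."""
    if not consented:
        return None  # honor the user's privacy choice: no consent, no data sent
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {"em": [sha256(email)]},  # hashed identifier for matching
            "custom_data": {"value": order_value, "currency": currency},
        }]
    }
    response = requests.post(
        ENDPOINT, params={"access_token": ACCESS_TOKEN}, json=payload, timeout=10
    )
    response.raise_for_status()
    return response.json()
```

The point of the `consented` gate is that consent travels with the event from your own server, rather than being inferred downstream.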

Once a more complete customer database is in place, marketers can switch focus to ways of modelling the data. Traditional multi-touch attribution (MTA) is heavily impacted by privacy changes because it builds a user-path analysis on cross-device tracking cookies. However, its demise has been greatly exaggerated. Like other techniques, MTA is undergoing a shift from cookie-based, log-level data to aggregated, cohort-level impressions and clicks - all facilitated by standardized S2S integration, reducing reliance on technologies like the Pixel.

This allows third-party providers to maintain users' privacy choices by obfuscating individuals within a group, while keeping the functional use case of measuring the value (i.e. incremental sales) generated by each touchpoint in the 'path to conversion'.
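As a toy illustration of the cohort idea, the sketch below rolls invented log-level events up to campaign-level cohorts and suppresses any cohort below an assumed minimum size. The threshold and column names are made up for the example; real platforms apply their own rules.

```python
import pandas as pd

# Invented log-level records for illustration; in practice these would come
# from an S2S pipeline rather than browser cookies.
events = pd.DataFrame({
    "campaign": ["A", "A", "A", "A", "B", "B", "C"],
    "date": ["2022-11-01"] * 7,
    "event": ["click", "click", "click", "impression", "click", "click", "click"],
})

MIN_COHORT_SIZE = 3  # assumed privacy threshold; real platforms set their own

# Roll user-level rows up to cohort level (campaign x date x event type)...
cohorts = events.groupby(["campaign", "date", "event"]).size().reset_index(name="count")

# ...and suppress cohorts too small to report without risking re-identification.
cohorts = cohorts[cohorts["count"] >= MIN_COHORT_SIZE]
print(cohorts)  # only the (A, 2022-11-01, click) cohort survives in this toy data
```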

Challenge 2: The need for a simple procedure to estimate true performance from observational (attribution) data

Attribution involves more than plugging tracking gaps in the data. That alone won't help distinguish the real from appearances, for the simple reason that you'd have nothing reliable to compare your numbers against to know whether something's wrong. Marketers also need a procedure for estimating what is really happening to their performance, by figuring out how best to (re)align observational (attribution) data with actual performance.

Data discrepancies arise not only from differences in platform technology but also from how those platforms are configured: conversion modelling, attribution models and windows, date ranges and campaign naming taxonomies. In all of these cases, reporting tools may not offer an accurate view of actual performance. To see why, it helps to recall the scholar Alfred Korzybski.

Korzybski's famous remark, 'the map is not the territory', means that all modelling is a representation of something, not the thing itself: models, no matter how sophisticated, still require interpretation and verification. Randomized control trial (RCT) experiments offer a way to distinguish the map from the territory - what's real from what merely appears so - an increasingly important distinction as our reliance on statistical modelling grows. With this in mind, what is a relatively simple procedure for improving reporting?

Recommendation 2: Improve model accuracy with verification

Verification involves evaluating how far the current model - whichever reporting system it lives in - is from experimental results. If it's far off, that's an argument for choosing a different model or calibrating your current one, i.e. bringing reporting into alignment with experimental results. This year, we conducted a meta-analysis of 17 EMEA businesses, comparing clients' internal attribution for Meta (a mixture of last-click, MTA and other attribution models) with RCT results (Meta's Conversion Lift). We found that, on average, uncalibrated attribution undervalues Meta by 56% - a clear example of how inaccurate measurement data can lead to suboptimal budget allocation decisions.
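The arithmetic behind these figures is straightforward. With hypothetical numbers chosen to mirror the averages above (100 attributed conversions against 230 incremental conversions measured by a lift test), the sketch below shows how the undervaluation share and the calibration multiplier relate:

```python
def calibration_multiplier(attributed: float, incremental: float) -> float:
    """Factor by which attributed results must be scaled to match the lift test."""
    return incremental / attributed


def undervaluation(attributed: float, incremental: float) -> float:
    """Share of true incremental conversions the attribution model missed."""
    return 1 - attributed / incremental


# Hypothetical numbers: a model attributes 100 conversions to a channel,
# while a conversion lift test measures 230 incremental conversions.
print(calibration_multiplier(100, 230))  # 2.3
print(undervaluation(100, 230))          # ~0.57, i.e. undervalued by roughly 56-57%
```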

Sample comparison between incremental results and attributed model results

Calibration

Calibration helps redistribute conversion credit to improve the budget allocation model. It does this by incorporating incremental numbers into the media mix: if one channel gets more credit, which channels should get less? Of course, the fairest advice is to recommend RCTs on all channels where possible, but that comes with nuances of its own. Returning to the meta-analysis, we estimated that results attributed to Meta ads had to be scaled by 2.3x to match incremental results.
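One simple way to picture the redistribution - not a description of any particular vendor's method, and with invented channel names and multipliers - is to scale each channel by its lift-test multiplier and then re-normalize so total credit still equals total observed conversions:

```python
# Invented attributed conversions per channel and per-channel calibration
# multipliers from lift tests (assume 1.0 where no test has been run).
attributed = {"meta": 100, "search": 300, "email": 100}
multipliers = {"meta": 2.3, "search": 0.8, "email": 1.0}

# Scale each channel by its multiplier...
calibrated = {ch: n * multipliers[ch] for ch, n in attributed.items()}

# ...then re-normalize so total credit still matches total observed conversions:
# if one channel gains credit, the others must give some up.
scale = sum(attributed.values()) / sum(calibrated.values())
reallocated = {ch: round(v * scale, 1) for ch, v in calibrated.items()}
print(reallocated)  # {'meta': 201.8, 'search': 210.5, 'email': 87.7}
```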

Challenge 3: The need to balance the strengths and weaknesses of various measurement methods

There has never been a one-size-fits-all solution capable of answering the questions businesses need to ask to understand their performance: how do we bridge the gap between short- and long-term optimization? How do we evaluate the impact of interdependencies between our online and offline media? How do we overcome under- or over-reporting? Instead, the norm is a fragmented ecosystem with varying data collection and limited data sharing between publishers - an ecosystem in which each business is unique in terms of its business model, the number of sales channels and the data available, and as a result presents unique measurement challenges. The challenge is how to deal with multiple disparate sources of measurement, when traditional methodologies individually solve only part of the puzzle, and how to triangulate data points into sensible results.

Recommendation 3: Unify marketing techniques

Unified marketing measurement (UMM) is the unification of approaches (MTA, MMM, experiments) that balances the strengths and weaknesses of each technique. It requires marketers to understand which ones to use, when to use them and how to use them. Ultimately, the cornerstone of UMM is combining these different solutions.

Experiments (randomized control trials or quasi-experiments) can help verify our models, as described above, and produce more accurate outcomes for specific campaigns and channels, but they are slower to implement, less precise than MTA, and come with the cost of holdout groups.

Marketing teams use marketing mix modelling (MMM) to determine overall channel and campaign impact, creating holistic reporting on different sales channels and their interdependencies. MMM is extremely flexible, but less accurate than experiments and less precise than MTA. That said, the pace of innovation in MMM is now so rapid, given privacy selection pressures, that modernized MMMs can complement or even replace MTA use cases - blurring the boundaries between the two methods.

And finally, MTA - a bit of a catch-all phrase taken to mean almost any reporting platform that features data points from multiple channels (e.g. Google Analytics) as well as modelling techniques (e.g. Shapley values). The biggest benefit of MTA is that it can offer fast, granular tactical optimisation on creatives and line items that are 'always-on'. However, it is not necessarily more flexible or accurate than the other techniques and can suffer from data gaps.
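To illustrate what balancing these strengths and weaknesses can mean in practice, here is one naive way to triangulate the three methods' estimates for a single channel: an inverse-variance weighted average, so that methods we are more certain about carry more weight. The numbers, variances and the weighting scheme itself are illustrative assumptions, not a description of how UMM must be implemented.

```python
def combine(estimates: dict[str, tuple[float, float]]) -> float:
    """Inverse-variance weighted blend of per-method estimates.

    `estimates` maps method name -> (roi_estimate, variance); methods with
    lower variance (higher certainty) get proportionally more weight.
    """
    weights = {m: 1.0 / var for m, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[m][0] for m, w in weights.items()) / total


# Invented ROI estimates for one channel: the experiment is unbiased but noisy,
# MTA is precise but potentially biased, MMM sits in between.
print(combine({
    "experiment": (2.1, 0.40),
    "mmm": (1.6, 0.25),
    "mta": (1.2, 0.10),
}))  # ~1.43
```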

Bringing together all of these techniques promises to break down the traditional marketing measurement silos where single techniques offer partial answers for their part of the organization. Together, they can foster holistic decision making without sacrificing accuracy and granularity.

Key takeaways

1) Build a solid data foundation and explore modelling techniques

The accuracy of marketing measurement depends on the quality of available data and how it's treated.

2) Assess the accuracy of reporting and allocate capital accordingly to maximise return

Accurate representation is a process, not a destination. Move towards a truer representation of performance by using experimentation as your ground truth.

3) Unify measurement techniques

Unifying measurement techniques can be an effective way to answer a broader range of business questions and balance out the weaknesses of any given technique.
