
Putting the science back into measurement

By Marcus Pousette, Country Manager, Southeast Asia, Greater China & Korea, PubMatic

Promoted article

March 23, 2021 | 5 min read


Mention an A/B test and many advertisers immediately think about creative assets. From display ads to emails, pitting one design or bit of content against another is a fast and effective way to learn and improve before or even in the middle of a campaign. The sheer simplicity of an A/B test makes it particularly enduring as a go-to option for many non-technical marketers and advertisers.


However, A/B testing is little used in other areas of our industry – particularly when it comes to programmatic ad technology. Too often, publisher technical and optimization teams hurry through testing, fail to use a control, or overreach with complex multivariate testing when an A/B test may be the most reliable and accurate approach. This is true when testing new demand partners, data partners and other vendors in a publisher’s ad stack.

Trying to assess the value of a new technology, feature or revenue stream should be, but rarely is, a scientific process. Bad testing is no better than a coin flip; the results will be meaningless. In programmatic advertising in particular, bad testing is rampant when assessing many important elements, including identity solutions, server-side versus client-side placement, which bidders to use, and auction timeouts. Proper A/B testing, with more data-informed decision making, can give publishers real guideposts for how to evolve their programmatic strategies, ultimately driving increased revenue potential.

The case for isolated, split A/B testing

There are a variety of common but incorrect ways that publishers test their ad technology. Some compare two ID solutions by running one in the morning and one in the afternoon to see which performs better, or add a new demand partner on the server side and simply check whether revenue goes up. In both scenarios, a host of issues – outside influences, different test environments and incomparable data – can distort the results.

It is better to structure fair A/B tests that equalize every variable and isolate each one individually than to waste time on haphazard tests that consume valuable development resources without providing real answers. Test one thing against a control with randomized split testing, get a clear answer and repeat.
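As a rough illustration of what a randomized split might look like in practice, the sketch below assigns each visitor to a control or test bucket deterministically from a stable identifier, so the only difference between the two groups is the single element under test. The identifier, the 50/50 split and the eCPM comparison are illustrative assumptions, not a prescribed implementation.

```typescript
// Minimal sketch of a randomized, isolated A/B split.
// Assumption: each visitor carries a stable identifier (e.g. a first-party
// cookie ID); hashing it gives a deterministic, roughly uniform assignment.

type Bucket = "control" | "test";

// Simple 32-bit FNV-1a string hash; any stable hash works here.
function hash32(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// 50/50 split: one half keeps the current setup (control),
// the other half sees the single change under test.
function assignBucket(visitorId: string): Bucket {
  return hash32(visitorId) % 2 === 0 ? "control" : "test";
}

// After the test, compare one metric per bucket, e.g. eCPM
// (revenue per thousand impressions) computed from logged data.
function ecpm(revenue: number, impressions: number): number {
  return impressions > 0 ? (revenue / impressions) * 1000 : 0;
}
```

Because the assignment is deterministic, the same visitor stays in the same bucket across page views, which keeps the two groups comparable for the duration of the test.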

The places where better testing is needed most

Last year, many publishers saw big traffic increases, shifts to mobile and video and lots of new revenue opportunities – all of which can be best monetized through smart testing. Here are the most important elements to prioritize when testing to optimize this year:

  • ID providers – adding an identity provider is only valuable if it drives yield, which depends on who is buying through it. The industry is still in the early stages of identity resolution, and we don’t yet know which ID solutions will ultimately be ‘must-haves’. Now is the time to experiment and build in flexibility so ID solutions can easily be added and compared against one another as different buyers make their own choices about which solutions to bet on.

  • Bidders – more isn’t more when it comes to monetization. Adding 25 bidders will slow down page performance and eat up whatever net new revenue they might bring. But testing bidders sequentially isn’t going to produce a fair test. It is better to isolate one against another and test in rotation.

  • Auction timeouts – to give consumers the best user experience, publishers should use a low auction timeout so everything loads fast, but bids may then be lost. There’s a bell curve of optimal load times, which may shift based on key elements such as type of content or day of the week. Some types of content can afford longer timeouts that pull in more bids, while on pages people don’t stay on for long, the same timeouts can be a recipe for revenue loss. Testing timeouts across these different elements is the best way to find the right answer (see the sketch after this list).

  • Server- or client-side – many publishers will implement 10 partners on the client side, but there can be diminishing returns after six to eight. Keeping the roster smaller not only avoids performance degradation, it also creates a sense of competition. By constantly comparing and testing demand partners against one another and moving lower performers off the page, publishers will not only see more efficient monetization, they’ll also gain bargaining power.
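
To make the timeout point concrete, here is one way a timeout split might look in a Prebid.js-style header bidding setup, reusing the bucketing idea from the earlier sketch. The 800ms and 1,500ms values, and the assumption that the wrapper exposes a configurable bidderTimeout, are illustrative; the right values and the exact configuration depend on the publisher’s own stack.

```typescript
// Illustrative sketch only: testing two auction timeouts against each other,
// assuming a Prebid.js-style wrapper where bidderTimeout is configurable.
// The timeout values below are placeholders, not recommendations.

declare const pbjs: { setConfig(cfg: { bidderTimeout: number }): void };

const TIMEOUTS_MS = { control: 800, test: 1500 } as const;

function applyTimeoutForBucket(bucket: "control" | "test"): void {
  // Each page view gets exactly one timeout, chosen by its bucket,
  // so the two groups differ only in this single variable.
  pbjs.setConfig({ bidderTimeout: TIMEOUTS_MS[bucket] });
}

// Log the bucket alongside revenue and latency metrics so the two timeouts
// can be compared on eCPM, bid rate and page load time, and the test can be
// repeated per content type or day of week, as suggested above.
```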

Programmatic is complex – there’s no way around it. Every tech stack is different, and every channel requires more technology and more processes. The only way to create a truly accurate measurement practice is to cut through the complexity with highly targeted, streamlined tests. Getting better at testing is a requirement for getting better at efficient monetization.


