
Test drive your tech: why a lack of A/B testing leaves money on the table for publishers


In media, the pressure to adopt new technology is ever-present, and it leaves many publishers, brands, and agencies alike pressed for time.

Digital technology platform PubMatic and online media brand 9GAG believe that publishers in particular are missing a trick by not testing technology, instead issuing an RFI or RFP and selecting partners based solely on the paper responses. To put this hypothesis to the test, 9GAG A/B tested several partners when deciding which header bidding technology to use.

Header bidding uses technology to give publishers more control over the revenue they can generate from the ads around their content by conducting unified auctions. The technology allows publishers to send ad requests to all demand sources simultaneously and pick the highest bidder as the winner, in real time. To facilitate this, publishers usually select a ‘wrapper’ solution. This piece of code is embedded into the header of their website and ‘wraps around’, or manages, all the various demand sources, making it easy for a publisher to add incremental demand sources and thereby increase yield.
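The core of that unified auction can be sketched in a few lines. This is an illustrative simplification, not any vendor's actual wrapper code; the demand-source names and bid values are hypothetical.

```python
# Minimal sketch of the unified-auction step a header-bidding wrapper
# performs: collect bids from all demand sources at once, pick the highest.
# All names and figures below are illustrative, not any vendor's API.

def run_unified_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return the winning demand source and its bid (highest CPM wins)."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# Hypothetical CPM bids returned simultaneously by each demand source:
bids = {"exchange_a": 1.20, "exchange_b": 1.45, "exchange_c": 0.95}
winner, price = run_unified_auction(bids)
print(winner, price)  # exchange_b 1.45
```

In practice a wrapper also handles timeouts, currency conversion, and passing the winning bid to the ad server, but the yield gain comes from exactly this step: every source competes in one auction rather than in a sequential waterfall.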

Testing as a culture

Header bidding has been a revolutionary step for publishers that have felt the pinch of falling ad prices and online ad dollars being diverted to behemoths like Google and Facebook. In recognition of this, 9GAG decided it was a technology worth taking time over in the selection process, though 9GAG’s global VP, head of programmatic, Vincent So, said A/B testing is baked into the 9GAG culture.

“A/B testing is baked into our DNA at 9GAG. We believe in making data-informed decisions based on empirical results. Our process is to research, hypothesize, test, and decide. RFPs are great for initial research, but not enough to make a truly informed decision. They don’t take into account all the variables and factors that impact real-life performance,” he explains.

“Our hypothesis assumed another wrapper solution would drive the best results. A/B testing showed us otherwise. Making a decision solely based on our RFP only would have left money on the table.”

Creating a desire to test

According to PubMatic's country manager for Southeast Asia, Greater China and Korea, Marcus Pousette, this approach isn’t common for publishers.

“RFPs and RFIs are very common, and there is a lot of effort and time invested in comparing solutions – but usually only based on the information provided by the competing vendors. Very few publishers go beyond that by conducting tests that deliver empirical data. To use a car analogy: in addition to making use of information available from brands, most people looking to buy a new car would probably, as part of their evaluation, take the different cars on test drives, helping them make better and more informed decisions,” he explains.

As to why this method isn’t common, Pousette says technical knowledge and pre-existing bias or expectations may drive out a publisher’s desire to test.

“As always, there could be many reasons, but I think two of the main ones would be technical bandwidth, or ‘know-how’ on how to actually set up and conduct a test that results in fair and comparable data. Another, equally important aspect, as Vincent mentioned, is that the upfront hypothesis or expectation that one vendor will deliver superior results is so strong that publishers might not believe a more detailed test is necessary,” he says.
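Setting up a test that yields fair, comparable data is simpler than it may sound: randomly split traffic between wrappers and compare a like-for-like metric such as revenue per thousand ad requests. The sketch below is a hedged illustration of that setup; the bucketing scheme, variant names, and revenue figures are hypothetical, not 9GAG's methodology.

```python
# Hedged sketch of a fair A/B split between two wrapper solutions:
# assign each page view at random, then compare revenue per thousand
# requests (RPM). All figures are hypothetical.
import random

def assign_variant(page_view_id: int) -> str:
    """Deterministically bucket a page view into wrapper A or B (50/50)."""
    return "wrapper_a" if random.Random(page_view_id).random() < 0.5 else "wrapper_b"

def rpm(revenue: float, requests: int) -> float:
    """Revenue per thousand ad requests — a like-for-like comparison metric."""
    return revenue / requests * 1000

# Hypothetical results after the test window (same request volume per arm):
print(round(rpm(840.0, 700_000), 2))  # wrapper A → 1.2
print(round(rpm(915.0, 700_000), 2))  # wrapper B → 1.31
```

Seeding the bucket on a stable page-view identifier keeps the assignment deterministic, and holding ad units, adapters, and markets identical across arms is what makes the resulting revenue numbers comparable.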

The results

In 9GAG’s case, the testing led them to select PubMatic’s OpenWrap as their wrapper solution of choice. While Pousette expected some of the results to swing in PubMatic’s favour, others still surprised him.

“I was surprised to see that not all Prebid-based wrappers, which were part of the A/B testing, managed to reach the same gross monetisation for 9GAG. Given all are supposedly built on Prebid, running the same adapters, for the same ad units, in the same markets – you would expect everyone to perform at a roughly equal level with regard to gross revenue. That there would be differences between all the wrappers in terms of net revenue is to be expected, mainly due to how different vendors price and charge for their services. However, I was surprised how big the difference could be between equally configured Prebid-based wrappers in this particular case,” he explains.
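The gross-versus-net distinction Pousette describes comes down to simple arithmetic: two wrappers producing identical gross revenue can net the publisher different amounts depending on each vendor's take rate. The fee rates and revenue figures below are hypothetical, purely to illustrate the point.

```python
# Illustrative arithmetic behind gross vs net revenue: identical gross
# monetisation nets differently under different vendor fees.
# Fee rates and figures are hypothetical, not 9GAG's data.

def net_revenue(gross: float, fee_rate: float) -> float:
    """Publisher's net after the vendor's take rate."""
    return gross * (1 - fee_rate)

gross = 10_000.0
print(round(net_revenue(gross, 0.10), 2))  # hypothetical vendor at 10% → 9000.0
print(round(net_revenue(gross, 0.15), 2))  # hypothetical vendor at 15% → 8500.0
```

What surprised Pousette, per the quote above, was that the gross figures themselves diverged between equally configured Prebid-based wrappers — a difference fees alone cannot explain.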

Don't leave revenue on the table

Ultimately, Pousette sees the biggest learning as the need to rigorously test the technology before making a final decision. “I would encourage all publishers who have adopted, or are in the process of adopting header-bidding, to test multiple solutions in live environments, including publishers who are building and managing their own proprietary Prebid-setups. As the A/B testing shows, there is simply too much potential revenue on the table not to test and compare with alternate solutions.”

Pousette also believes this test highlights the value that header bidding technology, and working with the right partners, brings to publishers.

“Working with 9GAG has reinforced our view that the most important thing a wrapper can do is help publishers achieve incremental revenue,” concludes Pousette.

Find out more by reading the full case study here.