“If you’re a marketer, you measure creativity,” writes columnist Helen Edwards in a recent piece. It’s not even a choice. Above any corporate investment in a marketing campaign looms the question of whether it will yield any return worth speaking of.
Marketers will need to produce some kind of assessment to convince fellow executives that the next campaign will indeed pay off for the company. That is the easy part. Far more difficult is deciding how to measure the creative work, given the flaws inherent in the common options.
Common ways of measuring
Edwards lays out three options, all of which are far from perfect: pre-testing, post-activity assessment, and a combination of both. One of the key problems with pre-testing is that it often assesses only an early approximation of what will eventually materialise, not the actual thing.
The post-activity option, on the other hand, only assesses what has already been delivered, meaning that the insights gained will not help to improve the campaign while it is running. A combination approach may yield more data and some interesting insights, but it still leaves marketers with a “before” or “after” measurement only.
There’s a better way
In the end, marketers seemingly have to rely on gut feeling to make the big decisions, not on numbers produced by imperfect measurement methods. As Edwards puts it: “You hope the research will decide, only to realise the biggest decisions will be all yours.” But is that true? Are marketers really left with what are essentially evolutions of approaches whose basic structure has been around for a hundred years?
I argue that there is a better way, one that utilises a crucial element marketers still routinely neglect: digital data. It also allows a campaign to be improved even during its commercially active phase. Imagine a technology that can track and objectively evaluate a campaign in real time. This is precisely what the Data Creativity Score does.
How it works
While we now offer the Data Creativity Score commercially as part of our portfolio, its roots are purely academic. In the early 2000s, a group of researchers at the Technical University Berlin – myself among them – hypothesised that creativity could be measured along the two most important dimensions of creativity in the marketing context: creative concept (originality, newness) and consumer activation. The advent of the “digital first” paradigm and big data finally made it possible to build a working measurement tool.
The Data Creativity Score tracks and benchmarks campaigns in real time, calculating scores for their creative concept and activation, which in turn make up the final “creativity score.” This is done by complex crawling and Natural Language Processing algorithms, but what the Data Creativity Score is looking at is essentially two things: how consumers perceive the campaign emotionally, and to what extent the campaign motivates them to look up more information about the product.
The actual data is derived both from future-oriented sources, such as the tonality and emotional intensity of comments (this is where Natural Language Processing comes into play), and from more mundane ones, such as search volume and social media mentions.
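To make the two-dimensional idea concrete, here is a minimal toy sketch of how a concept score and an activation score could be blended into one number. Every weight, scale, and function name below is an illustrative assumption for this article – the actual Data Creativity Score pipeline (crawling, NLP models, benchmarking) is considerably more involved:

```python
# Toy illustration only: the weights, caps, and formulas are assumptions,
# not the actual Data Creativity Score methodology.

def concept_score(comment_sentiments):
    """Proxy for 'creative concept': average emotional intensity of
    campaign comments, where each sentiment value lies in [-1.0, 1.0]."""
    if not comment_sentiments:
        return 0.0
    # Intensity matters more than polarity: a divisive ad can still be original.
    return sum(abs(s) for s in comment_sentiments) / len(comment_sentiments)

def activation_score(search_volume, mentions, baseline_volume, baseline_mentions):
    """Proxy for 'consumer activation': uplift in search volume and social
    mentions relative to a pre-campaign baseline, capped at a 2x uplift."""
    search_uplift = min(search_volume / max(baseline_volume, 1), 2.0) / 2.0
    mention_uplift = min(mentions / max(baseline_mentions, 1), 2.0) / 2.0
    return (search_uplift + mention_uplift) / 2

def creativity_score(comment_sentiments, search_volume, mentions,
                     baseline_volume, baseline_mentions, concept_weight=0.5):
    """Weighted blend of the two dimensions, scaled to 0-100."""
    concept = concept_score(comment_sentiments)
    activation = activation_score(search_volume, mentions,
                                  baseline_volume, baseline_mentions)
    return round(100 * (concept_weight * concept
                        + (1 - concept_weight) * activation), 1)
```

For example, a campaign whose comments score [0.8, -0.6, 0.9] in sentiment and whose search volume and mentions rise from baselines of 8,000 and 300 to 12,000 and 450 would land at a score of 75.8 under these toy weights.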
What marketers can do with it
Digital data has a real-time advantage, and that is also one of the big pluses of a data-driven measurement approach. It allows marketers to cut media budgets for campaigns that fail to activate (and to raise them for those that do), saving costs, and to adapt claims while the campaign is running. In contrast to post-activity measurement, measuring in real time can also account for changes to the product or service that occur mid-campaign – providing clear indications of how exactly the campaign impacts sales. Even more important are the learnings that the wealth of insights generated over a full campaign cycle yields for the next creative endeavour.
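The “cut or boost” logic can be sketched as a simple rule. The thresholds and multipliers here are hypothetical, chosen purely for illustration; a real media plan would calibrate them against historical campaign data:

```python
# Illustrative real-time budget rule; thresholds, multipliers, and the
# score scale are hypothetical, not part of any real product.

def recommend_budget_multiplier(activation, cut_below=0.3, boost_above=0.7):
    """Map a live activation score in [0.0, 1.0] to a daily media-budget
    multiplier: cut spend on campaigns that fail to activate, shift budget
    toward those that activate strongly, leave the rest unchanged."""
    if activation < cut_below:
        return 0.5   # weak activation: halve media spend
    if activation > boost_above:
        return 1.5   # strong activation: increase media spend
    return 1.0       # normal range: keep spend as planned
```

Run daily (or hourly), such a rule turns the real-time measurement into a concrete, auditable spending decision instead of a gut call.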
Now, am I saying that the Data Creativity Score is an infallible measurement instrument for creativity? Of course not. Critics will point out that it can only measure what happens online (though I do not consider that a weakness at all, as we already live in an online-first world). Others will question our definition of creativity or even the validity of our data sources.
I am certain, however, that it is less imperfect than the alternatives, for one reason in particular: it eliminates the element of gut feeling from the measurement stage, where it does not belong at all. Gut feeling belongs to the world of creativity, the world of “chaos and serendipity,” as Edwards puts it. In the world of metrics, however, it is better to let algorithms do the math.