A few weeks ago, researchers at Harvard University announced the results of an incredible project that enables computers to understand human thought, albeit at a very rudimentary level (the computer was able to understand a single word when the human thought of it). Minutes after the announcement, social channels were filled with dystopian visions of digital mind-control, condemning us to a protracted power battle between humans and machines and advocating resistance against our new digital overlords.
Of course, I don’t see the future playing out in anything like the scenarios we see in the movies, but I am continually bemused as to why we, as a society, so often frame this as a conflict.
Why is it always about humans vs machines when surely the whole point of what we have been doing for the last decade (although I would argue we’ve been doing it for much longer) is to capitalise on the potential of humans plus machines?
This conversation is becoming more and more relevant in our industry, with programmatic leading to advertising that is increasingly based on insights from data harnessed by the growing power of algorithms (in 2010 the programmatic industry was worth nothing; today the IAB and IDC forecast its value will hit £2.5bn by 2017).
This is a world where, when fed with enough data, the algorithms will know what content will go “viral”, when and where it should be placed, and for how long. It’s usually at this point that someone hits the big red button labelled “panic”, and we all start worrying about our jobs because, after all, computers can do all this stuff better than us, can’t they?
At a high level at least, we can help to ease some of the anxiety around our future employment prospects by taking a little time to understand the limitations of algorithms:
They can only make predictions (eg “this must be spam”, “this ad should be placed here”) based on experiences drawn from a huge trove of “training” data.
They can only learn from that data by processing it within a model that has been given to them – they can’t learn from data alone.
As the volume of data expands, machines learn from the results of previous predictions and fine-tune the model. This iterative self-improvement is one of the most powerful features of machine learning, but it means they can refine the model’s outputs; they can’t improve the model itself.
The machines draw conclusions and develop solutions based on probability; they are not human and, as such, have no emotions or biases to augment their perspective.
It’s a lot to take in, but thought of in this way, algorithms are only as good as the data provided, constrained by the rules that they’ve been programmed with, and unable to improve the model or connect multiple models together independently.
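For readers who want to see the distinction concretely, here is a minimal, hypothetical sketch (not from the article) of what “fine-tuning a model it has been given” looks like. The model’s form – a straight line, y = w·x + b – is fixed by the programmer; training only adjusts the parameters w and b from data. No amount of data will make this code discover a curve, because the machine cannot change the model itself.

```python
def train_linear(data, epochs=500, lr=0.01):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b      # prediction from the fixed model form
            err = pred - y
            w -= lr * err * x     # fine-tune the parameters...
            b -= lr * err         # ...but a line stays a line
    return w, b

# "Training data": points drawn from y = 2x + 1
data = [(x, 2 * x + 1) for x in range(5)]
w, b = train_linear(data)
print(w, b)  # w converges towards 2.0, b towards 1.0
```

The same limitation holds, in far more elaborate form, for real machine-learning systems: the training loop improves the parameters, while the choice of model stays a human decision.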
The situation should remind us of the old adage about “tools” and how they’re only as good as the people who use them.
So, before the digital industry rises up and forms its own Luddite rebellion (how ironic would that be?) there’s one important thing we should remember. By getting the machines to do more work, we have an opportunity to make better use of the extra capacity they provide, allowing us to focus our skills elsewhere.
We need to remember that computers, algorithms and the data that feeds them are here to help. The success of our industry’s future (not to mention the future of inspiring campaigns and engagement) will depend entirely on our ability to grasp the potential they offer us. As a result, our aspiration should be to do things differently, not the same things slightly better.
If we get this right, we humans won’t have to be in awe of the machines; instead, we will stand high and proud on the shoulders of these mechanical giants and accomplish truly amazing things.
The time for us to make this happen is now. The rise of the humans has already started – and the world will never be the same again.
Dave Coplin is chief envisioning officer at Microsoft.