When you hand over the buy button to your artificially intelligent PA in a couple of years, be it of Google, Apple or Samsung stock, remember that you’re relinquishing control of the exact moment that legitimates the free market as we know it – the point of purchase.
What makes a market free? The familiar answer is a lack of regulation. A free market allows consenting adults to freely exchange goods, services and money. This picture of freedom relies on certain assumptions about human nature – assumptions that underpin our beliefs in liberal society.
Liberalism assumes that when people transact voluntarily they do so in their rational self-interest. Basically, they make their own decisions based on what they think is best for them. This legitimates the market and it has other moral consequences. If you force someone to sell you something at gunpoint, that’s not fair because it wasn't voluntary. If you persuade a blackout drunk man to sell his Porsche for a fiver, he wasn’t thinking rationally, so again, not fair.
A key principle of any free market is that people act both voluntarily and rationally. And it must be both at once, as the examples above demonstrate. Now, if we’re thinking about a future in which a large portion of consumers’ purchasing is done without their being conscious of it, where do rational and voluntary choices come in?
Clearly they don’t – not at the point of purchase anyway. But this is nothing new. When your partner does the shopping or when your stockbroker buys stock on your behalf, you don’t press the buy button either. Most of us have arrangements like this. And we set them up voluntarily and in our rational self-interest, so what’s the problem?
Well maybe there is no problem. Maybe your stockbroker never compromises your interests for the sake of his own. But maybe he does. How would you know – especially when he seems to be doing an all right job? There has to be an element of trust.
Let’s consider some purchasing arrangements you could have with your future AI-PA.
- A) You pre-select specific household products like milk and bread, and your PA purchases them to ensure you never run out, but also don’t waste.
- B) You ask your PA to buy you a pair of jeans. It knows your size, and the style and brand you prefer.
- C) You ask your PA to make your diet healthier.
Arrangements A and B seem fairly inert. A seems like pure convenience and B seems like convenience at a slightly higher risk. What about C?
I think C is a different prospect altogether, because it introduces an element of authority. It’s a bit like the relationship you might have with your stockbroker: it’s not just about convenience, it’s to a large extent about authority and trust. With arrangement C you would be relying on the AI to do the ‘rational’ bit for you. “Buy me some cool music”, “book me a table for two at a trendy restaurant for 8pm”, “buy me a safe and stylish hatchback with minimal projected market depreciation”.
We're already seeing a trend towards AI services that relieve the burden of choice. Olay’s new AI ‘Skin Advisor’ uses data from selfies to identify the user's ‘skin age’ and areas for improvement. It “helps women to understand their skin needs, and get her [the consumer] to a place where she’s able to find the right products”, says Olay’s principal scientist, Dr Frauke Neuser. In other words, it identifies the problem and prescribes the solution.
If advertising creates desires, AI-PAs will have the potential to fulfil those desires in an advisory, authoritative, some might say paternalistic sense. But what’s so bad about that? Certainly if you think your PA is doing a bad job you should be free to change or end the relationship. But even if you think it’s doing a grand job, how do you know it isn’t also acting in its own best interest by subtly channelling or maximising your spending in the direction of its shareholders’ interests?
And is this really such an outlandish prospect in an age of big consumer data and algorithmic probabilities? People who buy X are more likely to buy Y; people who buy X tend to spend more on Y. Why would these insights not motivate subtle manipulation of your spending through automated or advisory purchasing? Swap Z for X and you’ll have them wanting Y in no time.
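To make the point concrete, probabilities like “people who buy X are more likely to buy Y” fall straight out of basic purchase-history data. Here is a minimal, illustrative sketch (the baskets and product names are invented, not real consumer data) of the two standard association-rule measures, confidence and lift:

```python
# Toy purchase histories: each set is one customer's basket.
# All data here is invented for illustration.
baskets = [
    {"X", "Y"},
    {"X", "Y", "Z"},
    {"X"},
    {"Y"},
    {"Z"},
]

def confidence(baskets, x, y):
    """Estimate P(y in basket | x in basket): the 'confidence' of the rule x -> y."""
    with_x = [b for b in baskets if x in b]
    if not with_x:
        return 0.0
    return sum(1 for b in with_x if y in b) / len(with_x)

def lift(baskets, x, y):
    """How much more likely y is given x, relative to y's base rate."""
    base_rate_y = sum(1 for b in baskets if y in b) / len(baskets)
    return confidence(baskets, x, y) / base_rate_y

# Two of the three baskets containing X also contain Y,
# versus a base rate of 3 in 5 for Y overall.
print(confidence(baskets, "X", "Y"))  # 0.666...
print(lift(baskets, "X", "Y"))        # 1.111... (X-buyers are ~11% likelier to buy Y)
```

A lift above 1 is exactly the kind of insight that could nudge an AI-PA’s “advisory” purchase towards Y for anyone who has bought X.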
We should consider what is at stake here. Consumer cultures of the future may be predicated on the human-AI relationship rather than the shopping culture that defined the modern era – a culture that shaped so much of what we call taste, identity and culture generally. I wager that certain types of consumption will become almost inconceivable in the absence of AI. Humans will look back and balk at our wasted energy, our ignorance of ourselves, our hard-won but impoverished knowledge of the market, and all of our terrible choices.
Will they have a point? Of course. But the autonomy-automation trade-off should give us pause. Do we really want to go there? And will future free-market liberals find themselves on the reactionary side of history, chanting “long live the buy button, and keep it well within reach”?
Sam Scott is head of video at The Drum