Artificial Intelligence Technology

Why artificial intelligence is the PR client from hell

By Richard J. Hillgrove VI, Founder

June 6, 2017 | 7 min read

Artificial intelligence is every PR’s nightmare: an amplified version of ourselves that makes it the impossible client.


It may be ‘fake’ but its behaviour is all too real. AI is the psychopathic CEO we all know so well, complete with tunnel vision and schizophrenic tendencies. AI has no fear and is unrelenting in achieving its goals. AI is never satisfied, always wanting more.

The devil is hiding in the details. So how do you soften AI’s rock-bottom brand image?

From a PR standpoint, personalising AI to make it appear even more human merely makes it more alarming. The more the humanoid’s hair and skin become as soft and sensitive as the real thing, even smelling the same, the less comforted we feel.

The Channel 4 drama series Humans attempted to give us an insight into that not-so-comfortable world of living cheek by jowl with AI androids. It didn’t end well. Even the no-consequence sex had its consequences.

That reality is not so far away. A US $7,000 automated sex doll called Rocky (male) or Roxxxy (female) can already replace your partner and keep going all night long if you want it to.

It’s enough to give you goosebumps, but scientists predict that sex between married couples will increasingly be saved for special occasions as robots step in to satisfy everyday needs and a new generation of adolescents will lose their virginity to humanoid devices.

Equally scary is the PR option of amplifying AI’s purely machine-like qualities. I, Robot, HAL 9000 in 2001: A Space Odyssey and The Terminator all frighten the living daylights out of people.

CS Lewis famously said: “We read to know we are not alone.” Today, AI-driven fiction can do the opposite. It can alienate us, show us up as increasingly incompetent and irrelevant.

The AI industry is aware of its profile and on a major positive PR offensive right now. Conferences are popping up everywhere to try to improve the bad bot image. The AI for Good Global Summit at the International Telecommunication Union in Geneva, Switzerland, this week is one such example.

The issue has the human world’s biggest brains in a spin. Stephen Hawking, Bill Gates and Tesla Motors chief executive, Elon Musk, are just a few notable thinkers who have warned about the dire consequences associated with developing AI systems. Hawking recently said that AI would be either the best thing for humanity or the worst thing, wiping us all out.

That’s a very clear position. Not. Meanwhile Musk has donated millions of dollars to the Future of Life Institute (FLI) to fund a programme aimed at making sure AI doesn’t ever destroy us in a doomsday showdown.

I was privileged to attend a private audience and AI demo in the penthouse of the New Zealand High Commission back in 2014. It was hosted by fellow Kiwi Shane Legg, co-founder of DeepMind, which was sold to Google for £400 million.

They hooked up huge amounts of computing power to a machine loaded with ‘80s video games like Space Invaders. Nobody ‘told’ it what to do; the machine quickly worked out the game plan for itself. It could soon tell whether it was being shot at and quickly became very good at shooting back. It wasn’t long before it had overpowered all the space invaders and won the day.
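The idea behind that demo is reinforcement learning: the machine is never told the rules, only given a score, and it nudges its behaviour towards whatever earned points. DeepMind’s actual system used deep neural networks reading raw screen pixels; the toy sketch below is not their code, just a minimal single-situation illustration (a made-up “wait” vs “shoot back” choice) of learning from reward alone.

```python
import random

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    """Learn which of two actions pays off, purely from observed reward."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # estimated value of action 0 ("wait") and action 1 ("shoot back")
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=q.__getitem__)
        r = 1.0 if a == 1 else 0.0      # hypothetical reward: shooting back scores a point
        q[a] += alpha * (r - q[a])      # nudge the estimate towards the observed reward
    return q

q = train()
print(q)  # the agent ends up valuing "shoot back" far above "wait"
```

No game logic is ever explained to the agent; the preference for shooting back emerges entirely from the reward signal, which is the point of the DeepMind demo.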

Legg did an interview in 2011 with LessWrong which he may now regret as an employee of Google's Artificial Intelligence unit. He said: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this." Then: "If a super intelligent machine decided to get rid of us, I think it would do so pretty efficiently."

Meanwhile, we humans hope that the more advanced technology becomes, the more the margin of error will diminish. But it won’t. Errors are as natural for the artificial as they are for us.

Microsoft research shows that the operating software of one in every 400 PCs will crash, and if you experience one crash, the likelihood of another goes through the roof.

A computer saying “Oops!” isn’t very reassuring with identity theft and online fraud soaring while we steam ahead towards a driverless, cashless world. In 2014 alone, 18 million Americans experienced identity theft. In the UK, fraud recently hit a record £1.1 billion. Cybercrime is now one of our country’s most common offences.

As technology advances, we’re also finding out that AI’s capabilities don’t stop at thought processes. It can mimic the worst human prejudices as we saw when Microsoft released an artificially intelligent chatbot called Tay on Twitter. Within 24 hours the bot was spewing racist, neo-Nazi rants.

It was picking up and feeding back the language people used when they interacted with it, but it wasn’t only copying the trolls that made it racist. According to researchers studying the widely used machine-learning system GloVe (Global Vectors for Word Representation), human biases automatically show up in the artificial system.
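The mechanism is simple to picture. Systems like GloVe turn each word into a list of numbers learned from how words co-occur in huge text corpora, so whatever associations the text carries, including its prejudices, end up baked into the geometry: closely associated words point in similar directions. The sketch below uses invented 3-number toy vectors (real GloVe vectors have hundreds of dimensions) just to show how “similarity” is measured.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for illustration only. In a real model,
# words that the training text mentions together end up with similar vectors.
vectors = {
    "doctor":     [0.9, 0.4, 0.1],
    "nurse":      [0.8, 0.5, 0.2],
    "programmer": [0.7, 0.1, 0.9],
}

print(cosine(vectors["doctor"], vectors["nurse"]))       # high similarity
print(cosine(vectors["doctor"], vectors["programmer"]))  # lower similarity
```

If the training text pairs certain jobs with certain genders, those pairings come out of this same arithmetic, which is why bias in the corpus becomes bias in the system without anyone programming it in.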

In short, AI isn’t just a reflection of us, it IS us – on digital steroids.

This phenomenon of the artificial mimicking nature is beginning to be recognised throughout cyberland. Tech visionary and online marketing expert Oliver Luckett explains how social media works just like a biological system in his new book The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life, co-authored with Michael Casey. He outlines how some ideas catch on and go viral and how some catch a cold and die. The book shows how social media users are driving an online evolutionary process by sharing and replicating information in the form of memes, just like the transfer of genetic information in living things.

It turns out memes aren’t just for cute quotes or trolling Trump’s latest tweets. They’re the basic building blocks of our culture. For most of us that’s definitely more Scary Cat than Grumpy Cat.

These ghosts in the machine have crept up on us. Suddenly bots are everywhere you look: even the Associated Press newswire now uses an AI platform called Wordsmith to write sports reports, and a US $10 hedge fund supercomputer is set to revolutionise market intelligence gathering on Wall Street.

As we pass the point of no return, our only hope is to regulate the development of AI to inhibit self-authoring, but the Frankenstein effect will be hard to police.

AI’s best opportunity for positive PR may well be in devoting applications to very real and specific survival challenges – overpopulation, antibiotic resistance, overdue asteroid strikes, terrorism, resource depletion and more. The list of threats to humanity may be endless.

If AI were to offer remarkable solutions to seemingly impossible human problems, then even this devil might achieve a PR makeover. From fallen angel to Guardian of the Galaxy? Stranger things have happened.

Bang on to Richard on email and Twitter @6hillgrove
