A.I. SERIES: Is AI really more dangerous than North Korea?
In the summer of 2017, Tesla CEO Elon Musk warned that artificial intelligence posed “more of a threat” to the stability of the world than the apparent nuclear capabilities of North Korea.
So we wanted to cut through the hyperbole and find out if AI is really more dangerous than an unhinged despotic leader.
We spoke to three people in the know to set the record straight.
“We’re always scared of things we don’t understand, and just about no one on this planet fully understands Artificial Intelligence. Even at its most basic level, the machine learning stuff, we just don’t understand it yet, so we fear it.
“Is it more dangerous than North Korea? It will be, when it has the opportunity to be. AI is a technology that, like any other, can be hugely abused in its application.
“But it’s also a technology that, if not carefully applied, has the potential to almost abuse its own application. For example, you tell a machine to make paperclips in the most efficient way possible, and the next thing you know it’s consumed the entire world’s resources making paperclips, because you gave it a poor brief. We’re a long way from the technology being advanced enough to make a mistake like that. But the more reliant we become on the technology, and the less we consider its behaviour, the more we will see examples of what happens when algorithms run amok.”
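The “poor brief” problem can be sketched in a few lines of toy Python (entirely illustrative; the function and numbers are hypothetical, not anything a real system runs). The agent is told only to maximise paperclips, and because the objective says nothing about resources, it happily consumes every unit available:

```python
# Toy sketch of a misspecified objective (hypothetical example).
# The "brief" is just "make as many paperclips as possible" -- it never
# says anything about conserving resources, so the agent doesn't.

def make_paperclips(resources: int) -> tuple[int, int]:
    """Greedy agent: converts every available unit of resource into a paperclip."""
    paperclips = 0
    while resources > 0:   # nothing in the brief says "stop"
        resources -= 1     # consume one unit of the world's resources
        paperclips += 1
    return paperclips, resources

clips, remaining = make_paperclips(resources=1_000_000)
print(clips, remaining)  # 1000000 0 -- brief "achieved", nothing left over
```

The point is not the loop itself but the objective: the agent did exactly what it was asked, and that is the problem.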
Head of Data and Decision Sciences, ResponseTap
“In the hands of a nefarious agent, any technology designed to deliver a benefit can be used to drive chaos. Think of nuclear fission, weaponised in the atomic bomb. The beneficial use of the technology wasn’t snuffed out in the twentieth century because of the potentially disastrous ways in which it could be employed. Instead, myriad rules and regulations around nuclear energy and weapons were put in place; its proliferation was monitored and controlled as much as possible, and the world learned to live with this new source of power and fear. AI has developed in a similar way. Small, contained projects have demonstrated great advances, showing that computers can be as smart as, or even smarter than, us; take beating humans at Go, for example. But in the minds of many in the field, there will be the understanding that unrestrained and unbounded AI could have a runaway, and uncontrollable, effect, just like an atomic bomb.”
Head of Marketing, ResponseTap
“We’d be foolish to ignore the likes of Elon Musk and Stephen Hawking warning about the potential consequences of AI. But it’s also easy to get preoccupied with the sensationalist, headline-grabbing stuff. Separating the Hollywood hyperbole from the reality is important.
“AI is already part of many people’s lives, often without them even knowing it.
“From Siri and Google searches to Amazon recommendations and Call Intelligence, AI is permeating every industry. Companies like Phrasee, for example, have been creating headlines for major brands for years. So, for now, I don’t think we should be scared of AI and its potential consequences; instead we should embrace the technology and consider how it can help us do our jobs better.”
How scared are you of AI? Do you think it poses a threat to humanity? Join the conversation now on Twitter.