Mankind must "get ahead of technological development" states PHD's Mark Holden following artificial intelligence warning

By Stephen Lepitak

July 29, 2015

A warning on the development of artificial intelligence (AI), issued through an open letter signed by Apple co-founder Steve Wozniak, Tesla chief Elon Musk, linguist Noam Chomsky and physicist Stephen Hawking, has been echoed by PHD's worldwide strategy and planning director.

A group of 1,000 robotics experts signed the open letter, which calls for a ban on killer robots developed with AI, such as drones with built-in weaponry.

"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” stated the letter.

"Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

Having recently published a book on the subject, Sentience: The Coming AI Revolution and the Implications for Marketing, PHD's Mark Holden told The Drum he believed it was important that mankind "get ahead of technological development" in order to appreciate the benefits such evolution could afford.

"This has never been more true for AI. But along with the benefits, comes the potential for significant threats," explained Holden. "Not just the obvious threats of AI-driven weaponry and the longer-term potential of AI rethinking the role of us. The more realistic, and more immediate, is the reorganisation of society and, specifically, the working world – the potential disappearance of whole industries being very much on the agenda. For any business, planning for the likely impact AI is going to have in your own back-yard is, arguably, more important than it was to plan for the role the internet was going to have on business 25 years ago”.

Chris Coulter, lawyer and partner at Cooley LLP, was of the opinion that such a disruptive innovation as AI was always likely to generate paranoia or "utopian zealotry."

He added: "AI as it exists today is simply a set of software tools that enable computers and other devices to perform a variety of useful tasks based on rapid processing, and enhanced memory, reasoning and learning. But much as nuclear science requires measured and transparent handling, we would be foolish to embrace all AI developments without consideration.

"For now, AI, like most technology, has the capability of being used for good means or bad, and in this respect AI is no different from any other technology. Just like other technology – from something as relatively simple as a car through to the Internet of Things – AI needs to be monitored, its development needs to be transparent and a degree of oversight needs to be in place. From an ethical, commercial and regulatory perspective AI digital development may have opportunities to learn from fields such as medicine and genetic modification. Using these approaches and providing clear examples of AI being used to make life better may help a sceptical public embrace the concept of AI and its many benign applications."

The full letter can be read here.
