
Time to debate the ethics of robots

By Mark Leiser, Research Fellow

October 14, 2013 | 5 min read

Do we need to worry about robots that can make legal and lethal choices on their own? I argue that not only is the answer yes, but that the issue of robotic ethics is pressing.

The question evokes a Steven Spielberg film or the Terminator films, but we are going to have to come up with some answers on robotic ethics sooner than we think. With drones hovering in the sky doing everything from making arrests to delivering tacos, robots are increasingly becoming self-aware and capable of self-assembly. Robots, broadly defined, are machines that perform tasks either on their own or under the guidance of a human master. Some 11.5 million were sold into professional service in 2011, and South Korea has gone as far as setting a target of one robot per home by 2015. The Jetsons have become real; we just didn't expect them to be Korean.

In the last week, the Massachusetts Institute of Technology (MIT) achieved the unthinkable by producing modular robots that can change their geometry according to the task at hand. The robots self-determine their configuration based on the job that needs to be done: a flywheel spins inside each module and, when it is suddenly braked, its momentum is transferred to the robot's body, allowing for various types of motion. Combine this technology with increasingly “smarter” robots that can learn from big data and, importantly, make decisions on the fly, and we need to start asking some questions about the legal framework that governs robotics.
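The momentum-transfer trick behind these modules can be sketched in a few lines. The sketch below is purely illustrative and assumes lossless transfer of angular momentum; the inertia values are invented for the example and are not MIT's specifications.

```python
# Rough sketch (not MIT's actual design): a module moves itself by
# spinning an internal flywheel and then braking it abruptly, so the
# flywheel's angular momentum is handed over to the cube's body.

FLYWHEEL_INERTIA = 5e-5   # kg*m^2, illustrative value
CUBE_INERTIA = 2e-3       # kg*m^2 about its pivoting edge, illustrative

def body_spin_after_brake(flywheel_rpm: float) -> float:
    """Angular velocity (rad/s) the cube body gains when the flywheel is
    braked, assuming all angular momentum is conserved and transferred."""
    flywheel_omega = flywheel_rpm * 2 * 3.14159 / 60.0
    angular_momentum = FLYWHEEL_INERTIA * flywheel_omega
    return angular_momentum / CUBE_INERTIA

if __name__ == "__main__":
    for rpm in (5_000, 10_000, 20_000):
        print(f"{rpm} rpm flywheel -> body spins at "
              f"{body_spin_after_brake(rpm):.1f} rad/s after braking")
```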

War Drones

Drones are already used extensively in law enforcement and military applications. The Bush and Obama administrations' policy of using drones over Afghanistan is extremely controversial in both Pakistan and America. Drones, controlled by human operators, are flown over areas to ‘seek and destroy’ pre-determined targets. New technology is being developed under the sinister-sounding acronym LAR (Lethal Autonomous Robot). LARs, currently under development, are said to outperform standard human operators and to respond faster than they can.

"If a drone's system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm," said Purdue University professor Samuel Liles. "A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run."

Now consider self-determining technology coupled with lethal automated technology and swarm technology. Put them all together and you get self-determining robots, equipped with the power and the tools to kill, that can travel and operate in swarms. If you disrupt the swarm, the robots can change and re-form into a different configuration suited to a different environment and different circumstances. Who controls, and who is accountable for, a robot gone awry?

Aggressive Quadrotors

The reality is that LARs were developed in large part because of computer hackers. Hackers can break into a drone's control module and take remote control of it, to the point that it is outwith the control of its operator. The military therefore saw a pressing need for a drone that could be removed from the network altogether: with nothing to hack into, there is nothing to control. Yet while this autonomy may prevent hacking, do we really want morality taken out of the equation? Asimov famously created three ethical laws of robotics:

1. a robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. a robot must obey any orders given to it by human beings, except where such orders would conflict with the first Law;
3. a robot must protect its own existence as long as such protection does not conflict with the first or second Law.

Let us play Devil's Advocate for a bit here. Consider a terrorist on a watch list. A drone is flying above a bazaar, using pre-installed face recognition software to scan all of the shoppers at the market below. Using big data technology it can scan every face and listen to the voices coming from the market. It is programmed to look for the bad guy, but it stumbles across a man whose voice pattern matches, with 92 per cent certainty, that of a top associate of the target. Does it take the shot, or drop a bomb on the market, simply because he is a person of interest? What if it is programmed to do so? What if the drone is self-learning? Is this a sufficient check on the technology in relation to the human rights of the people in the market?
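To make the worry concrete, here is a deliberately naive sketch of the kind of engagement rule such a drone might embody. Everything in it, from the function names to the hard-coded threshold, is hypothetical and invented for illustration; no real system or policy is represented.

```python
# Hypothetical sketch of an autonomous engagement rule. Names and
# thresholds are invented for illustration only.

from dataclasses import dataclass

ENGAGEMENT_THRESHOLD = 0.92  # the "92 per cent certainty" from the scenario


@dataclass
class Detection:
    name: str                # who the sensors think this person is
    match_confidence: float  # 0.0 - 1.0, from face/voice matching
    on_watch_list: bool


def should_engage(detection: Detection) -> bool:
    """Naive rule: engage anyone on the watch list whose biometric match
    clears a fixed confidence threshold. This is exactly the kind of
    check the article suggests is an insufficient safeguard."""
    return detection.on_watch_list and detection.match_confidence >= ENGAGEMENT_THRESHOLD


if __name__ == "__main__":
    bystander = Detection("top associate of target", 0.92, True)
    # A single number decides between surveillance and lethal force.
    print(should_engage(bystander))  # True -> the drone "takes the shot"
```

The sketch is not a proposal; it simply shows how little stands between a probability estimate and a lethal decision once the human operator is removed.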
