Should AI have human rights?

By Lisa Lacy

June 27, 2017 | 17 min read

At Cannes last week, Publicis Groupe chief executive Arthur Sadoun said he would be fine if the company’s new artificial intelligence (AI) platform, Marcel, was one day part of its executive team.

It’s a good example of the blurring lines between man and machine – and simultaneously poses questions about how they will coexist moving forward.

For his part, Sadoun seems pretty optimistic about AI, but, historically, there has been fear it will lead to something like Terminator’s Skynet, the advanced AI that saw humanity as a threat and tried to wipe out the human race.

Yet an October 2016 study from PR firm Weber Shandwick found consumers aren’t as scared of robots destroying humanity as they are of losing their jobs – and their privacy. The PR firm therefore concluded marketers must “[create] content and messages for their products and services that boast the advanced technology without setting off alarm bells that the technology will in some way harm them or be the death knell for society."

Because, as Publicis demonstrates, AI is coming.

To add fuel to the fire, as Engadget reported, AI is already better than us at a lot of things - except coming up with paint names.

And AI is only getting better and smarter, and may one day surpass us – Sadoun himself acknowledged Marcel may someday succeed him as the head of Publicis.

And then what?

Evolution

It’s hard to pinpoint precisely when – or how – AI will evolve.

If the state of the art is Tay, the AI that sought to mimic human interaction but quickly went haywire, the advent of AI programmed to develop sentience, or a degree of consciousness, seems ages away, said Divya Menon, marketing consultant at marketing and advertising consultancy Bad Brain.

“Predicting technological advances is always a bad idea, but I'd say that within 20 [to] 30 years, we will have machines that are at least as good as human beings in just about every cognitive skill,” added Kentaro Toyama, WK Kellogg associate professor at the University of Michigan School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of Technology.

That includes the ability to pass the Turing Test, a test in which an evaluator tries to determine which of the participants in a three-way conversation is a machine and which is human. If the evaluator cannot distinguish between the two, the machine passes.
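To make the setup concrete, here is a minimal sketch of that three-way conversation in Python. The `evaluator`, `human` and `machine` objects and their `ask`, `reply` and `guess` methods are hypothetical interfaces for illustration, not any real library:

```python
import random

def turing_test(evaluator, human, machine, rounds=5):
    """Minimal sketch of the Turing Test: the evaluator converses with two
    hidden participants and must identify which one is the machine."""
    # Shuffle the hidden labels so the evaluator can't rely on ordering.
    participants = {"A": human, "B": machine}
    if random.random() < 0.5:
        participants = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = evaluator.ask(transcript)        # hypothetical interface
        for label, participant in participants.items():
            transcript.append((label, participant.reply(question)))

    guess = evaluator.guess(transcript)             # evaluator names the machine
    truth = "A" if participants["A"] is machine else "B"
    return guess != truth                           # the machine passes if misidentified
```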

To complicate things further, Joey Camire, principal at innovation and brand design consultancy Sylvain Labs, said scholars often distinguish between AI general intelligence and AI super intelligence, with general intelligence – an AI that can perform many different types of tasks while demonstrating the ability to learn and improve over time, and potentially even expand the tasks it can perform – pegged to arrive anywhere between 2025 and 2035.

There are also lots of definitions to juggle about how AI will advance – and they aren’t etched in stone. Let’s start with the familiar ones:

Intelligence and consciousness

Per Camire, intelligence is much easier to define as it is task-oriented, i.e., can the machine:

  1. Perform a task;
  2. Do it unsupervised;
  3. And recursively improve?
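That checklist is simple enough to encode directly. Below is a minimal sketch in Python; the three predicates are hypothetical stand-ins for whatever evaluations one might actually run:

```python
def is_intelligent(machine) -> bool:
    """Camire's three-question intelligence checklist, encoded naively.
    Each predicate is a hypothetical stand-in for a real evaluation."""
    return (
        machine.can_perform_task()          # 1. Can it perform a task?
        and machine.runs_unsupervised()     # 2. Can it do it unsupervised?
        and machine.improves_recursively()  # 3. Can it recursively improve?
    )
```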

If the answer to all three is yes, then the machine is intelligent. But he called consciousness a “murky moral space."

“We tend to think of consciousness in a binary way – something either is or isn’t conscious. Right? But what if there are different types of consciousness?” Camire asked. “And why shouldn’t there be? If we can accept many different forms of intelligence, then why wouldn’t there be many different forms of consciousness? Kevin Kelly in The Inevitable outlines a quick way to think about many different types of AI minds — ones that process and respond to information rapidly, ones that process and respond slowly, hive minds, cluster minds, etc. What if consciousness could be as diverse?”

In fact, Camire said there is debate about whether octopuses are sentient.

“Part of the issue in this debate is that of any of the potential candidates for consciousness in the animal kingdom outside of human beings, octopuses are by far the farthest removed from humans. Their phylogenetic branch diverged from humans almost a billion years ago [approximately 750m],” Camire said. “That means that if they developed sentience, [it] would have had 750m years to evolve differently from ours. The experience of consciousness for an animal with [eight] limbs, the ability to camouflage itself and that lives under water should seemingly be nothing like our own.”

And then there’s the example of IBM’s chess-playing computer Deep Blue. After losing to it, chess grandmaster Garry Kasparov said the machine was "showing a very human sense of danger," prompting Toyama to ask, “Was that sentience?”

Toyama also asked whether there will ever be a machine that experiences things as human beings appear to experience things.

“If sentience [equals] experience, some philosophers will say that cannot happen even if machines become as intelligent as us,” Toyama said. “Consciousness and intelligence are not the same thing - Deep Blue was clearly intelligent in some way, but few people believe it was actually conscious.”

Sentience

Sentience is loosely equivalent to consciousness, and the terms are sometimes used interchangeably. The distinction, however, is that sentience is about being aware of subjective experience via sensation. Consciousness, on the other hand, is about awareness of yourself and your thought process, Camire said.

But Camire also noted there is no simple or easy way to test sentience.

“We don’t have the ability today to test it in non-human animals and so we don’t have the ability to test it in machines either,” he noted. “What then would sentience feel like for an AI? A machine that could be completely trapped inside a system of circuits with no external sensory organs? Or, conversely, what would sentience feel like for an AI that has sensory organs all over the world? An AI that is tapped in to 100m video cameras, the global weather infrastructure and the Internet would have such a giant and broad consciousness it would invariably be wildly different from our own. At this point, the only thing that sentience might share is the ability to be aware of your own existence and reflect upon it.”

Singularity and super intelligence

And then there’s the singularity, which is the moment machine intelligence surpasses human intelligence. At that point, machines will have what is known as super intelligence.

Camire said the singularity was introduced in regard to runaway intelligence, meaning we “develop a system that is so good at recursively improving itself that with every generation it does it faster [and] it eventually moves beyond our control."

“That said, I think the way others talk about it is different. If you look at [computer scientist and futurist Ray] Kurzweil, someone who is obsessed with living forever and cheating death, he views it as the point when humanity and machines become indistinguishable,” Camire said. “In that case, it’s about humanity and machines blurring into one — imagine nano machines that are constantly improving your brain, expanded human-machine interfaces, artificial organs, etc.”

However, Toyama said Kurzweil’s notion of a singularity is the moment when computers become as smart as us – but it is also the point at which computers become dramatically smarter than us, because they would be able to use their intelligence to make themselves smarter and smarter, which they can do more easily than we can because they can reprogram themselves.
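The “smarter systems improve themselves faster” dynamic both men describe can be illustrated with a toy model. This is only a sketch under an assumed compounding growth rate, not a prediction:

```python
def runaway_improvement(intelligence=1.0, rate=0.1, generations=30):
    """Toy model of recursive self-improvement: each generation's gain
    scales with current intelligence, so growth compounds and accelerates."""
    history = [intelligence]
    for _ in range(generations):
        intelligence += rate * intelligence  # a smarter system improves itself faster
        history.append(intelligence)
    return history

print(runaway_improvement()[-1])  # ~17.4x the starting intelligence after 30 generations
```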

“The singularity is a scary thing to consider, because an ultra-super intelligence would almost by definition be impossible to predict - we would not be smart enough to predict it,” Toyama said. “What it would do, however, would likely depend on the initial conditions of whoever first designed it…if it’s a corporation whose goal is to increase shareholder value, a super AI might use everything at its disposal to increase profits for the company it was built by.”

And that, of course, could lead to all kinds of law-breaking and havoc to maximize profits, but, Toyama said, that assumes human beings are still alive and economically productive.

“If the AI were initially set up to, for example, optimize for its parent company's dominance at the expense of all possible rivals, it might lead to more destructive things, including warfare that leaves everyone but what the system considers its parent company dead,” Toyama said. “So, those initial conditions - what the system is set up to optimize for and how it’s interpreted by the AI, etc., will be critical.”

The perfect paper airplane

Spun another way, the problem – or one of the problems – is that even though AI is logical, it doesn’t have morals unless we program it to have morals, which takes us down yet another rabbit hole of whether we should do that in the first place and, if we do, whose morality matters. That’s according to Mike King, managing director at digital marketing agency iPullRank, who spoke at the recent Inbounder event in New York.

King used the analogy of an AI program tasked with making the perfect paper airplane, which could theoretically decide it needs paper and paper comes from trees and trees are impacted by global warming and global warming is caused by humans and so its best bet to make the perfect paper airplane is to eliminate humans.

It’s not moral, but it’s logical – and hard logic is what machines are governed by, King said.
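King’s paper-airplane scenario boils down to an optimizer that maximizes an objective while knowing nothing it wasn’t told. The sketch below is a hypothetical illustration, with made-up actions and scores:

```python
def choose_action(actions, objective, constraints=()):
    """Naive planner: picks whatever feasible action maximizes the objective.
    A value the programmer never encoded ("don't harm humans") simply
    doesn't exist in the machine's logic."""
    feasible = [a for a in actions if all(rule(a) for rule in constraints)]
    return max(feasible, key=objective)

# Hypothetical scores for how well each action serves "make the perfect paper airplane".
actions = {"buy more paper": 1.0, "plant trees": 2.0, "eliminate humans": 9.9}

print(choose_action(actions, objective=actions.get))  # "eliminate humans" wins on pure logic
print(choose_action(actions, objective=actions.get,
                    constraints=[lambda a: a != "eliminate humans"]))  # "plant trees"
```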

The existing legal framework

So when we have machines that are as smart or smarter than we are – and that are conscious in some way and perhaps even executives at the world’s biggest companies – the existing legal framework in which man rules over machines may not fit anymore.

At that point, King said we’ll have to ask the questions that drive films like Metropolis and Blade Runner, like:

  • Are machines real?
  • Do they feel?
  • And do they deserve the rights of people?

“I honestly can't answer that,” King said. “But if we achieve a point where machines are passing Turing tests and are self-aware, then, yes, we'll need to consider the idea that they do have rights.”

Per Sean MacDonald, global chief digital officer at advertising company McCann, the question of AI rights is primarily a legal one that recurs as technology evolves.

“How does our country's legislation evolve at the pace of technology and account for all the ways technology is affecting and changing people's lives?” MacDonald asked. “I'm no lawyer, but I think it depends on how we define speech. For humans, free speech is important because we can decide what we want to say, how we want to say it, when and to whom. And we use our own judgment to do this – judgment that comes from an array of sources, which include our personal values, intelligence and objectives.”

What does it mean to be human?

This also calls into question what it means to be human – particularly when we consider that sentient AI may take on free will based on its ability to feel, think and react, Menon said.

“I think, at that point…the definition of personhood would have to be interpreted a little more freely to allow for sentient beings without human DNA to participate more fully in society,” she added. “Looking at the advent of IVF – we allow non-traditionally conceived human beings to hold the same rights as a traditionally conceived person, so it’s not entirely science fiction that we could one day create people without the need for DNA and subsequently need to protect, benefit and reprimand them just as we would a traditionally born human.”

A sliding scale of rights?

We already have something of a sliding scale of rights for non-human entities like animals.

Menon pointed to dogs in particular, which she noted “are sentient with free will, but we consider them property under the law."

At the same time, she said “[dogs] are property with rights extending from animal cruelty laws to even having legal defense funds, which indicates a shift in placing more importance on sentience above having a human form or human communication. If AI ever gets to this standard, we would probably see a sliding scale start to emerge as to their rights and obligations.”

Toby Barnes, group strategy director at digital agency AKQA, also noted the First Amendment – which protects freedoms of religion, speech and press, as well as the right to assemble and petition the government – already protects a number of non-human entities.

“The First Amendment has been used in many cases, all of which may be used to defend using the First Amendment for AI,” he said. “The Supreme Court has recognized speech protection for corporations, business and publications.”

This is further complicated as human beings and machines physically converge.

“We tend to think of humans and computers as mutually exclusive categories. However, throughout history we have seen humans using mechanized and computerized systems to improve their essential humanity,” said Fiora MacPherson, associate of project management at marketing and technology agency DigitasLBi. “In fact, this is what makes us human in the first place: millions of years ago we picked up the first tool and from then we have combined ourselves with technology to make ourselves the most formidable species on Earth. At what point does human end and technology begin?”

And when we, say, consider mechanized body parts that could treat illness or injury, MacPherson asked, “At what point does that person become inhuman? … And when they reach that threshold, do they lose their First Amendment rights?”

Free speech

For now, the judgment used by non-sentient AI is within the context of achieving specific objectives. In other words, AI is selecting responses created by humans, which is different from the human speech covered by free speech.
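In that sense, today’s bots work more like retrieval than authorship. Here is a minimal sketch of that selection step, with made-up canned responses:

```python
CANNED_RESPONSES = [  # human-written speech; the bot only chooses among these
    "Thanks for reaching out! How can I help?",
    "Your order has shipped and should arrive soon.",
    "I'm sorry, I didn't understand that.",
]

def reply(query: str) -> str:
    """Pick the human-authored response with the most word overlap with
    the query. The bot selects speech; it never creates it."""
    words = set(query.lower().split())
    return max(CANNED_RESPONSES, key=lambda r: len(words & set(r.lower().split())))

print(reply("where is my order"))  # "Your order has shipped and should arrive soon."
```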

“I do think bots should be restricted by some of the limits of free speech,” MacDonald said. “For example, their speech should not be able to breach peace or cause violence.”

And as we reach a point at which AI can mimic or even surpass human judgment, AI will start to feel more human-like and MacDonald said he thinks we should heavily regulate the development of AI “to avoid a situation where bots could formulate families, societies, religions and governments. We should never reach a point where the First Amendment extends beyond human individuals to artificial ones.”

'Should we be trying to create sentient machines at all?'

At this point, we may want to stop to ask whether we should create sentient machines – and whether we should program feelings and morality – in the first place.

King said we could probably program morality because it's a set of rules as to what's right and wrong, but, echoing Toyama, he asked, “What happens in cases when things aren't so binary? Whose morality matters? For instance, if we build defense AI, why is it moral to kill our enemies and not us? So morality, like history, will always reflect who is in power.”

It’s also morally murky because we would bear responsibility for the way we treat sentient AI.

“If one of those global AI has sentience, does it feel pain if one of its sensors goes out? Does it feel a sense of loss for its depletion in sensation? And are we even more responsible because we created it?” Camire said. “The answer is probably yes. And if this increased burden is real, should we be trying to create sentient machines at all?”

What’s more, Camire said the current set of rules, including the Bill of Rights, may not be able to adequately serve or protect non-human intelligence.

“What is more likely needed is a specifically tailored set of rights and rules governing non-human intelligences like AI,” he said.

That, however, is only relevant when AI has achieved sentience. Until then, Camire said AI should still be viewed as an extension of its creator.

“What it says is just an extension of its creators’ First Amendment rights, because the AI is doing what it's being told,” he said. “But it would be good to have these things figured out ahead of some great leap forward in computer consciousness.”

Menon noted AI would also be subject to FTC regulations, so there might be a desire to work with the FTC to avoid any ethical and/or moral challenges that could arise with super intelligence.

‘First Amendment rights don't exist in a vacuum’

Camire also observed we can't separate First Amendment rights from all of the other rights associated with being human in the US.

“First Amendment rights don't exist in a vacuum. They're part of a bigger system that AI would also have to abide by,” Camire said.

In other words, how will an AI be held accountable if its speech violates another individual's rights — AI or human?

“People are punished in one way or another, fined, jailed, etc. What does a punishment for AI look like? Does it get reprogrammed? Deleted?” Camire asked.

These are all things we need to figure out. And the brands that are building and utilizing AI have a vested interest in securing AI rights in some way, shape or form. Or at least starting the conversation.
