Ethics, arrogance and optimism in tech: The Drum and Karmarama panel in Austin

By Doug Zanger, Americas Editor

March 13, 2018 | 9 min read

This year, the bigger rooms at SXSW elicited strong opinions. In one corner, Tesla founder Elon Musk, who made a surprise appearance at the festival, took a somewhat sinister view of technology, comparing the threat of artificial intelligence (AI) to nuclear annihilation. At the other end of the spectrum, Tim O’Reilly, founder of O’Reilly Media, broadly feels that the adoption of technology shouldn’t be feared and can be a vehicle for good. At The Drum SpeakEasy panel on reconnecting technology to humanity, presented by Karmarama, four industry experts took the latter view.

Panelists discuss ethics, arrogance and optimism in tech at The Drum SpeakEasy at SXSW

The panel’s tone was one of optimism, that technology can be an essential enabler of good and progress, though the ethical questions are still present.

“Elon Musk is deliberately provocative around the role that AI can play in society because it’s an interesting soundbite and it gives him a counterpoint to the other people in the tech community who have identified themselves as AI purists,” said Lawrence Webber, Karmarama managing partner and leader of the agency’s Creative Futures practice.

“The general consensus from all the talks I have seen in the last few days is that we have a five to ten-year window to program some ethics into artificial intelligence before it potentially starts to shape our ethics for us. The problem is, how do you get consensus across the globe about ethics?”

Ethical issues are discussed as they relate to the development of new technology, yet they are often positioned as a future concern rather than a present one. In today’s climate, ethical problems are rampant, but they are not necessarily discussed in the context of existing technology and platforms, especially social media.

To Kieran Hannon, global chief marketing officer of Belkin, technology’s cautionary tale may be writing new chapters daily.

“It’s become very abusive,” he said. “It’s really frightening. I think that we have to get ahead of it, and we have to put the checks and balances in there so that it is a safe environment for people. The First Amendment is very important, but it doesn’t give me the right to abuse people. If I came in here [Prohibition Creamery, an Austin bar] at night and started cursing at the bartender, what would happen? I’d get thrown out, right? So why am I not thrown off all these other platforms for the same behavior?”

Looking at Asia, Barbara Guerpillon, head of Unilever Foundry SEAA and Level3, said that for her the question isn’t necessarily one of ethics, especially as it relates to AI, but of the problems that need solving in the first place.

“I’m thinking, how can I make sure that I have people in India who have access to clean water and soap to fight dysentery,” she noted. “My role is to bring people back to the ground and help them to [determine] the project statement. What are we solving here? What are we trying to do? Are we trying to reach our consumers in a rural area? Do we need AI to do that?”

The arrogance of technology

Another issue that is seemingly not touched upon as often is the arrogance and hubris of technology — which ties back to the ethics conversation. Technology can be exploited for personal or professional gain but, according to Dr. Kate Stone, director of Novalia, the human aspect is sometimes lost along the way.

"The difference between humans and other animals is we have a higher evolved sense of ego,” she said. “Our greater risk to the world is our ego rather than AI. My behavior is more defined; we are the AI. I think that AI is going to be smarter than everyone is planning and I think it’s going to protect the world from us.”

Webber, who attended a panel on AI featuring a professor of the practice, mentioned that AI might “end up being like the mirror in Snow White that shows us how dull humanity can be, and this is an interesting debate about how AI will affect humanity. I think if you’re an optimist, it will start to help us behave better, rather than the myth of terminators blowing through windows. I think there are some interesting benefits we might get from it if we make the right decisions.”

In the brand and product world, the race to make new things continues at a fever pitch. According to IHS, there will be almost 75.5 billion connected devices by 2025, up from around 20 billion today. Hannon, who has a front-row seat to a high number of consumer products, shared a story about a product manager who introduced a new Bluetooth speaker to the market. To the manager’s chagrin, the product sat squarely in the middle of the market, failed to make a dent and lasted less than a year.

“You have to come back and be focused on what the consumer experience is. [And what is] being addressed or being made better,” he said.

The sustainability of how a technology works within a brand is a key consideration as well. Guerpillon sees her own company as one that understands that consumer need shouldn’t be perishable.

“Our CEO, Paul Polman, is empowering everyone in the organization around that mission and that purpose,” she said. “Each brand — we have 400 brands — has its own purpose, but the overarching one is really about sustainability and making everyone understand that is the goal. In everything we do, we have this in the back of our minds.”

The role of government or education?

A topic that has emerged of late around technology is the role of government. In the US, momentum is building around regulation, especially in light of the 2016 presidential election. Additionally, a caste system appears to be emerging in which startups may not necessarily have the same access or opportunity to thrive as others.

Tech City in the UK, for example, has garnered substantial attention but, according to Webber, it feels as though it’s an uneven playing field. Hannon added that a more collaborative approach appears to be happening in the EU, especially around data.

In Singapore, though, the government appears to be more considered in its approach. Instead of replicating Silicon Valley or other tech hubs, Guerpillon said the country is looking to build its own, unique models based on current strengths, in biotechnology for example, that can spread to the rest of Asia — and Level3 was created as an ecosystem to help startups grow successfully.

In more mature markets like the US, though, entrepreneurship and private industry are the driving forces with venture funds and brand-led incubators and accelerators leading the activity as opposed to the government. That said, lobbying efforts in Washington DC, according to Hannon, are critical and a large part of educating the lawmakers who may have to start making decisions on technology and especially the role of AI.

Webber sees governments as important vehicles of knowledge for their constituents as technology continues its upward arc.

“What governments really need to do is prepare their constituents or the people in their democracies to be better able to flourish and be creative in the world of AI,” he said. “What we need to do is to go back to more principles of creativity and problem solving and education. What tends to happen, what certainly seems to be happening in the UK and what's happening in the US, is going back to quite outmoded ways of educating the populace. Which is not going to help you survive.”

But there is optimism

In the end, though, despite the headlines and doomsday scenarios, the panel expressed a great deal of optimism — with a few notable caveats.

“I can only be optimistic about technology plus I also meet so many startups, so many entrepreneurs who want to change the world and want to make an impact,” said Guerpillon. “Our role is to educate the younger generation about what technology can do — about AI, about ethics. As long as it's done in that way with the right purpose, technology would be positive.”

“I’d say an optimist, but there’s always a ‘but,’” said Stone. “I think we’re like adolescents, or we’re just growing into our teenage years with technology and what we can do with it. We have no idea how to use it, how to behave, how to think. And now that we have it, we will start learning, and we will start to grow up a little bit.”

Noting that he is a short-term pessimist due to the concentration of power in technology and AI, Webber still believes that there are many good days on the horizon.

“Short-term pessimistic, because I think we are going to have to live through a bit more of this cycle of questioning the kind of technology that's controlling us. And then longer term, I think the current generation that is growing up will potentially be less obsessed with technology than we are.”

During the session prior to the panel, Dr. Kate Stone talked more about the need for friction in people’s life experiences.
