
Why wasn’t AI safety and governance discussed more often at Adobe Summit?


By Webb Wright, NY Reporter

March 29, 2024 | 9 min read

The tech giant has positioned itself as a brand-safe alternative to other companies whose AI models have been trained on copyrighted data. But there was still a conspicuous dearth of conversation at its conference in Las Vegas this year about the risks posed by the technology, according to some attendees.


The advent of generative AI was framed as a major technological paradigm shift at the 2024 Adobe Summit. / Webb Wright

The 2024 Adobe Summit in Las Vegas, which attracted some 10,000 attendees, came to a close yesterday. The primary focus of this year’s conference was Adobe’s investments in AI, particularly around Firefly, its family of custom generative AI models.

The company unveiled a raft of new AI-powered features during the Summit’s opening keynote on Tuesday, including Firefly Services, which enables marketing teams to automate certain operations through the use of APIs. The feature was introduced with zealous exuberance: “In my mind, this is going to be the biggest change to content creation in decades, and it’s going to transform how enterprises work,” David Wadhwani, chief business officer at Adobe’s digital media division, said onstage.

But amid all of the grandiose talk about AI at the event this week, there didn’t seem to be much discussion about how the technology might be misused. That was surprising, given that AI safety has been the subject of widespread media coverage over the past year and has only become more urgent as much of the world heads into another election cycle.

Of course, Adobe has a kind of ethical and legal armor, forged by the fact that Firefly has been trained exclusively on public-domain materials and the company’s own licensed content. “It’s not sueable,” as Adobe digital media enterprise vice president Ken Reisman succinctly put it in an interview with The Drum.

The company has also sought to make it easier for users to identify content created by Firefly through the introduction of ‘Content Credentials,’ a small label attached to the upper right corner of an asset. “We expect the new icon will be so widely adopted that it is universally expected, and one day soon become as ubiquitous and recognizable as the copyright symbol,” the brand wrote in a blog post in October.

And so perhaps the lack of a distinct, overarching focus on AI safety and governance at this year’s Adobe Summit can be attributed to the fact that the brand has, in an important sense, positioned itself as a more legally airtight and brand-safe alternative to companies like OpenAI, which train their models on huge swathes of copyrighted content scraped from the internet.

Nonetheless, attendees took note of the relative lack of conversation around the safe and responsible use of AI.

“I’ve already been to so many conferences this year – like CES and Davos – [and] at those summits ... there’s been a lot more talk about the dangers of AI,” says Dan Gardner, CEO of digital agency Code and Theory. “If I compare [Adobe Summit 2024] to every other discussion I’ve had [about AI] through the year, those have a bit more about: What does this mean for the world?” (Gardner adds, however, that one of the strong suits of Adobe Summit is that it’s exclusively about enabling creativity through technology, whereas some other conferences tend to be more diffuse in their areas of focus.)

Ali Alkhafaji, global CEO at consulting firm Credera, says he was personally concerned that the conference was not shining enough of a spotlight on AI governance and data privacy. “We tend [in the marketing industry] to look at that last,” he says, “but the problem is that when you go ahead full force with AI without having those frameworks in place as a practitioner, you end up being in a position where you have all these debts that you have to make up for at the end.”

Better, in his view, to focus first on safety and then move forward with experimentation and deployment. Unfortunately, he says, that approach hasn’t been endorsed broadly at most conferences he’s attended.


Neither did there appear to be much discussion at this year’s Adobe Summit around job security – another surprise, given widespread tech layoffs over the last year and growing concerns about AI automating tasks that are still largely performed by humans.

As I sat among the crowd of marketers during keynote sessions this week, I couldn’t help wondering: How many of these people – many of them C-suiters who might have to sign off on future layoffs – were thinking less about the new technologies being unveiled and more about how they (and AI more broadly) might increasingly marginalize the role of human beings in the workplace?

The party line from Adobe seems to be: Our AI-powered tools will support human creativity, not replace it.

But it can’t be denied that the technology, disruptive in the extreme, will overtake some operations and perhaps entire roles that are currently being managed by humans.

“[When we talk] about automating workflows, it’s not our vision to take humans out of the loop,” says Reisman. “That said, jobs are going to change, we know that. You're going to see less jobs, for example, to just do rote creation of content variants.” Reisman views this as a positive development that will free up time for marketers to attend to more demanding and fulfilling work.

This view has, of course, become something of a cliché since generative AI hit mainstream consciousness in late 2022 with the release of OpenAI’s ChatGPT. This powerful and cost-effective technology won’t replace your jobs, executives have said time and again – it will just make you better at doing your jobs.

Still, it’s natural for people to feel uneasy amid such a sudden and monumental groundswell of technological change.

“Anxiety isn’t just at [Adobe Summit], it’s everywhere,” says Credera’s Alkhafaji. But he feels that such fears about AI are misguided, comparing them to those that were voiced decades ago by teachers who worried that the proliferation of calculators in schools would inhibit students’ ability to learn math.

It’s true that Adobe Summit is focused on equipping the company’s customers with cutting-edge technologies. But as leading tech companies like Adobe continue to deploy AI on a mass scale, the boundary between those customers and society at large will continue to narrow.

Perhaps next year, when AI is likely to be much more powerful and ubiquitous than it is today, the spotlight at Adobe Summit will shift from large-scale deployment toward responsible, sustainable use.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.

