Creative industries must take the lead on AI self-governance
Rachel Aldighieri, managing director of UK data and marketing trade body the DMA, argues that the creative industries must act now to develop an AI ethical framework to supplement any future government regulation.
It is no secret that AI has existed for decades, supercharging societal development through greater technological efficiencies. Yet the news agenda in 2023 is increasingly focusing on the moral implications surrounding AI’s development, questioning whether it is being developed too quickly without frameworks in place to ensure it is truly used as a force for good.
Under the right circumstances and developed with people’s needs placed front and center, it can be our man-made best friend. Yet in the wrong hands, AI can create societal bias, generate misleading information on a mass scale and severely infringe on our human rights.
Regulation is on many people’s minds. But the Data & Marketing Association (DMA UK), the industry body for data-driven marketing, believes that, as with data privacy governance, regulation is only part of the solution – ethics and self-governance have a huge role to play.
Regulation has a key role to play
The UK government’s white paper, A Pro-Innovation Approach to AI Regulation, outlines its approach and future policy towards AI regulation, calling for “responsible use” while avoiding the implementation of what it calls innovation-stifling “heavy-handed legislation”.
As the government correctly points out, for the UK to become an AI superpower, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments.
However, creating the right environment means striking the right balance between innovation and protection. We cannot ignore our moral compass just to expedite AI’s development for financial gain or cost-cutting measures – doing so would always cause more harm than good.
AI regulation is essential for deterring rogue actors whose intentions are deliberately unjust, so we must continue with its development. It can also offer useful guidance and clarity where there is uncertainty for those intending to develop and use AI ethically.
But regulation takes time and money, and with something as complex as AI where there are many different stakeholders with varying interests, it could encounter many hurdles during its development, which could hinder its implementation. With the rapid rate at which AI is progressing, we need to establish ethical safeguards without delay.
For genuine actors in the AI ecosystem, this is where principles-based frameworks can also help – and they can be implemented at a faster rate if our industry’s key stakeholders work in concert.
Frameworks with people at their heart
We must create ethical frameworks for AI’s development that place self-governance, transparency and accountability at the forefront – essentially embedding into the design phase considerations about its potential impact on people, whether that be the end user, the people it indirectly affects, the staff maintaining and overseeing it, or even those it could eventually replace in a professional capacity.
Similar frameworks already exist. The Scottish government’s AI strategy places people at its heart and creates ethical and inclusive principles to build trust in AI. It aims to strengthen the AI ecosystem over the next five years through these methods.
The DMA has its own principles-based framework for our members to help marketers ethically navigate through the constantly evolving world of marketing and technology. In 2014, we created the DMA Code to make our industry more accountable, trustworthy and inclusive. It is designed to transcend all iterations of technological development and focus on our behaviors and values as individuals and organizations.
I cannot speak for other industries, but I do know that the serious players in the advertising and marketing industries have always tried to do what’s best for our customers and staff – after all, a customer-centered approach is critical for sustainable growth.
An AI ethical framework can help us to collaborate around publicly supporting a people-first approach to build consumer trust versus relying on regulation to fill the trust gap.
AI can be a creativity catalyst
The DMA’s Creative Committee recently met to discuss AI’s role in our industry and how we embrace it ethically – particularly generative AI, like ChatGPT and Copy AI. We discussed themes that we believe could have the potential to transcend all AI ethics.
For generative AI to be considered ethical, it needs to be clear when, where and how AI has been used – with a human proofing content before it is made public. We must remain accountable for any content we produce, just like when a human has created it. Transparency is essential.
Copyright infringements are another significant challenge, as generative AI takes existing content from the internet to create copy. We must ensure that it can acknowledge the source if unique content has been used to support our own content generation.
We must increase awareness of unconscious bias in data and generative AI to limit its proliferation. If humans identify where bias is prevalent and always consider the people affected by AI’s decision-making, we can move a step closer toward optimizing its ethical use.
For these reasons, the human-AI team is our best future as AI operates more effectively as a tool that humans use to assist and enhance our own abilities. Mike Bugembe of Decidable Global accurately describes it as a scenario where we can use the best of human intuition, strategy and experience in conjunction with AI’s remarkable machine calculation and memory.
AI can be a unique instrument to supercharge creativity and streamline content generation, with its ‘private researcher-like capabilities’ helping users find a vast variety of solutions at the click of a button, as well as offering us a digital companion to bounce ideas off.
This will save us time and resources on mundane tasks, freeing up our best minds for more complex or creative tasks that AI cannot or should not handle.
Its success will always depend on who is using the technology and how they are trained to get the best out of it.
This will create a range of new job opportunities across a variety of industries, altering the job market as opposed to diminishing human job prospects.
As part of an ethical approach to AI, it will always require some human intervention to promote our values as moral beings – we must never forget the people it is intended to serve.
AI is a force for good
As business leaders, we must not lose our human connection with customers and people-centric values. By creating an industry-wide ethical framework to supplement pending regulation, we can set our own high standards in terms of how we develop and use AI to engage with our customers and the wider world.
To help our industry on this path, the DMA recently established a multi-discipline task force among our 18 committees and councils to develop AI guidance for marketers and to support the government’s approach, building on the important pillars of enhanced innovation and public trust laid out in the pending DPDI legislation.
The DMA believes that self-governance and regulation should co-exist to enhance and supplement one another.
AI can be developed and used as a force for good, but we must now take a proactive approach to ensure any regulation created supplements an industry infrastructure designed to put people first.
Rachel Aldighieri is the managing director of the DMA.