The EU’s ambitious AI law aims to curb tech misuse – but can it?
With legislation passed by the European Parliament, the legislative battle lines around AI are starting to set. Here, Brew Digital’s Gareth Lewellyn contextualizes the law and asks: can it succeed?
A close look at the EU's first attempts to regulate artificial intelligence / Tingey Injury Law Firm via Unsplash
It’s commonly accepted that technology always outpaces legislation. There are a couple of reasons for this. One is that legislators – often by design, but sometimes through dysfunction – are slow, deliberative bodies that have to consult with stakeholders, and think of the long-term ramifications of their actions. This means that decisions take a long time to come to fruition, while the inexorable march of progress sees technology rapidly advance.
The second reason is the idolatry of innovators; a mindset among legislators that technology moguls shouldn’t be hindered. This has played out to devastating effect. Just look at the WeWork IPO implosion, where founder Adam Neumann convinced early investors that what was objectively a property rental business was somehow a tech empire, and received a valuation of $47bn. Upon release of the company’s S-1 filing in preparation for the IPO, a crescendo of criticism formed around its valuation, business structure and Neumann’s leadership, resulting in an abandoned IPO and Neumann stepping down.
There was a similar story with Elizabeth Holmes and Theranos, the company that claimed to have developed technology that could complete blood tests quickly and accurately with a very small sample of blood. Holmes took millions from early investors before an investigation exposed the fraudulent product – which allegedly misdiagnosed patients who had had miscarriages and others living with HIV. Holmes was this year sentenced to over 11 years in prison for fraud.
These are just two of the many examples we’ve seen in the last 10 years.
EU & AI: TL;DR
Presumably learning lessons from the scandals that have torn at the very fabric of democracy, the European Union is taking a more proactive approach to regulating the latest technology breakthrough: artificial intelligence (AI).
In July, the European Parliament became the first legislative body in the world to pass legislation that would regulate the creation and use of artificial intelligence. Included in the proposed law is a requirement for AI creators to prevent illicit content from being created, and a “post-market monitoring system” for software that “learns” after release, to ensure issues don’t develop.
The law further stipulates that generative AI should list the sources of any copyrighted material that it used for its output. It will also limit the “use and the processing of biometric data involved in an exhaustive manner”, with a particular focus on facial-recognition technology already used for law enforcement and security.
First proposed in 2021, the framework puts different obligations on providers of AI and their users, depending on the level of risk they pose, with more risk resulting in more regulation, or even outright banning. The three levels of risk are Limited Risk, High Risk, and Unacceptable Risk, with each level attracting more onerous penalties.
The law had to be expanded to accommodate generative AI, following the explosion in popularity of apps like ChatGPT and Midjourney. The changes require that any convincing deepfake – whether image, video or audio – be disclosed as artificially created. Presumably this is to anticipate the wave of misinformation that could come in the wake of these services proliferating. It also demonstrates how quickly technology advances, or how slowly legislation moves – depending on your perspective.
Push and pull
This law has not yet been ratified and could still be subject to changes when it reaches the European Council later this year. Already there has been pushback from business leaders who say that the proposed regulation could “jeopardize Europe’s competitiveness and technological sovereignty.” In a letter signed by over 150 executives, businesses like Siemens, Airbus, Renault, and TomTom have said that:
“Companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks. Such regulation could lead to highly innovative companies moving their activities abroad, investors withdrawing their capital from the development of European Foundation Models and European AI in general. The result would be a critical productivity gap between the two sides of the Atlantic.”
Expanding on the risk of diverging standards and decreased competition, the letter calls for trans-Atlantic cooperation to ensure parity.
“It is a prerequisite to ensuring the credibility of the safeguards we put in place. Given that many major players in the US ecosystem have also raised similar proposals, it is up to the representatives of the European Union to take this opportunity to create a legally binding level playing field.”
Although there’s certainly a risk of overregulation, it’s hard not to see the bluster as opportunistic. Over the years, we’ve often seen big tech organizations declare a law unworkable before walking the claim back. Facebook has repeatedly tried it, for example when it threatened to pull news out of Australia after the enactment of the News Media Bargaining Code. It eventually reached an agreement – but is currently trying the same move in Canada over that country’s own media bill. Sam Altman, chief executive of OpenAI, made a similar threat to pull the company out of Europe, before backtracking and saying there are no plans to leave.
The law will now be negotiated with the European Council, with a projected timeline of formal ratification before the end of the year.