The EU has approved the Artificial Intelligence Act, the world’s first major regulation aimed at creating safeguards for the rapidly advancing technology. But what does it mean for UK technology businesses?
On Wednesday, European lawmakers voted overwhelmingly to pass the EU AI Act, bringing the bloc one step closer to regulating high-impact, general-purpose AI models and high-risk AI systems.
EU nations are expected to formally approve the deal in May this year. Implementation will then be staggered from May 2025 onwards.
The provisional agreement comes after three years of negotiations. During that time, the widespread adoption of generative AI tools like OpenAI’s ChatGPT or Google’s Gemini has increased pressure on governments to establish new rules.
Non-compliance with the EU AI Act can lead to penalties ranging from €7.5m or 1.5% of global turnover to €35m or 7% of global turnover, depending on the type of violation.
The Act takes a risk-based approach, meaning the greater the risks posed by an AI system, the more stringent the compliance requirements.
While the laws are made in Brussels, they are expected to have a global impact on businesses – including in the UK.
‘Brussels effect’
“Approving the AI Act will have a huge effect for industries across the globe, as the ‘Brussels effect’ kickstarts legislative changes across international borders,” said Agur Jõgi, CTO of US software company Pipedrive.
Now that the UK is no longer part of the EU, the AI Act will not directly apply to British businesses. But it will be relevant for any UK business with activities in the EU, such as customers or operations there.
“It will impact any UK tech businesses that are looking to scale their product globally or which operate a global customer base,” said Jason Raeburn, head of the London intellectual property and technology practice at Paul Hastings, a global law firm.
“Most UK tech will either already have a global customer base, or will have ambitions of having global users or customers, so the requirements will be relevant to the vast majority of UK tech businesses.”
Raeburn added that the “greatest friction” will be for UK firms operating in areas prohibited by the EU AI Act, such as certain types of biometric technologies or indiscriminate scraping of facial images from the internet.
The UK is yet to introduce its own AI regulations but last month published an AI white paper. Ministers have indicated the UK will not introduce new AI laws in a hurry. But when it does, it could find itself using the EU’s AI Act as a blueprint.
British firms have been in a similar position before with Europe’s General Data Protection Regulation (GDPR), which came into force in 2018 while the UK was still a member of the EU. The government brought GDPR into UK law post-Brexit.
“Unlike GDPR, which significantly impacted all companies with customers in Europe, the EU AI Act only impacts companies doing specifically risky activities with AI,” said Alastair Paterson, CEO and co-founder of AI security startup Harmonic.
“For those companies, the impact is significant. However, for most UK tech startups, even those building with AI, the EU AI Act is not very impactful and doesn’t preclude innovation.”
Race to comply with EU AI Act
For businesses using AI, particularly those deemed higher risk, attention will now turn to compliance.
“The exact time organisations will have to get into compliance will vary between six and 36 months, depending on the type of AI system that they develop or deploy,” said David Dumont, head of Hunton Andrews Kurth’s data privacy office.
There are concerns that the EU AI Act will create additional red tape that stifles innovation. As with any regulation, the cost of compliance will fall disproportionately on startups.
“Small companies and startups will experience issues more strongly; the regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses,” said Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group.
“However, these sandboxes are to be set up on the national level by individual member states and so UK businesses may not have access.”
Roi Amir, CEO of British startup Sprout.ai, said while the EU’s “risk-based approach to AI regulation sets a good framework”, it’s important that it “doesn’t stifle innovation by dividing risks into the wrong category”.
Tom Whittaker, senior associate at law firm Burges Salmon, said that the “transition period is relatively short and the regulatory compliance obligations may be a significant challenge for some”.
A survey conducted by global compliance e-learning provider VinciWorks found that 35% of UK compliance professionals were most concerned about not being prepared for the new rules, while just over a quarter said they were worried about misunderstanding the regulations.
“The EU AI Act is the first significant attempt to regulate AI in the world – it remains to be seen whether the cost of compliance stifles innovation or whether the AI governance model that it establishes is a flagship export for the EU,” said Emma Wright, partner and head of technology, data and digital at Harbottle & Lewis.
Opportunity for the UK?
For some, the EU AI Act could provide an opportunity for the UK to forge its own path post-Brexit.
Brian Mullins, CEO of Oxford-based AI firm Mind Foundry, told UKTN that the EU AI Act has an “overly broad and ambiguous scope” and warned that companies may move their operations to “regions with a more favourable regulatory environment”.
He added: “The UK has got its approach right by enabling industry bodies – like the CMA – to regulate industries’ use of AI, rather than regulating the technology itself, as is the case now in the EU. It is this approach that is going to help us in the UK develop pioneering AI products and stay at the forefront of global innovation. Regulate the use, not the technology.”
Josh Little, IP lawyer and partner at law firm Marriott Harrison, said: “I see the AI Act more as an opportunity for the UK, than a source of friction. Going forward, the UK may be viewed as a more attractive place to base an AI business than countries on the continent, where they know they are likely to face more strict regulation.”
However, should the UK opt for lighter-touch regulations in a bid to attract more high-growth businesses, it runs the risk of forcing businesses to navigate multiple regulatory regimes.
“UK-based tech businesses that have plans to expand internationally may eventually find themselves having to adhere to multiple regulations in order to stay consistent and streamline their operations,” said Michael Baron, commercial director of BWS, a bid writing company.
Saqib Bhatti, the UK’s tech minister, told UKTN last November that he was “confident” that Britain would find a way to ensure rules made in Westminster are interoperable with those from Brussels.
Russ Shaw CBE, founder of Tech London Advocates & Global Tech Advocates, said: “Globally renowned for their fair governance and not bowing to the will of big tech, UK regulatory bodies will need to identify the areas from today’s legislation which ought to be replicated here to ensure the necessary cross-border safeguards on AI are in place.”