
How the US, UK, and EU approach the creation of legal frameworks on artificial intelligence (AI) 

With many countries discussing or already moving forward with legislation to govern the use of artificial intelligence, we have pulled together the most important information you need to know on how key economies approach the regulation of AI.

Aleksander Tsuiman
Head of Product Legal & Privacy
September 18, 2024
On this page
1. United States
2. United Kingdom
3. Europe

Global perspective on the legal frameworks on artificial intelligence (AI)

AI ethics concerns have taken center stage as AI systems become more integrated into our daily lives.

According to Stanford’s 2023 “The AI Index Report,” the legislative records of 127 countries show that the number of bills containing the term “artificial intelligence” passed into law grew from just 1 in 2016 to 37 in 2022. An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016.

Why is that?

Ethical questions around AI systems are multiplying as increased availability drives wider deployment of, and research into, these systems. High-risk issues around AI that negatively affect safety or fundamental rights have become more apparent to the general public. Those issues also intersect deeply with existing areas of regulation, for example personal data protection and privacy, copyright, and protection against discrimination. Meanwhile, startups and large companies are racing to deploy and release ever newer and more powerful AI models, for example in generative AI.

Policy-level effect

The growing popularity of AI has led intergovernmental, national, and regional organizations to take steps towards AI governance at both a strategic and an operational level. Governments are increasingly motivated to address rising societal and ethical concerns in order to create trust and maximize the technology's benefits. The governance of AI technologies has therefore become essential for governments across the world.

Potential risks of regulation

The surge in AI regulation also comes with risks. One is the challenge of balancing trust and safety against overregulation that stifles innovation and stalls the economy. This is why some governments, e.g. the UK and US, have been rather conservative about bold, aggressive regulatory moves. Another potential pitfall, for companies, is a regulatory patchwork that makes market entry difficult, leading companies to shy away from expansion. This is where intergovernmental and international alignment is needed: since the driver is often economic synergy, bodies like the US-EU Trade and Technology Council can lower hurdles for businesses seeking to operate across borders by creating policy-level alignment.

1. United States

The US does not currently have a uniform, federal “AI law” comparable to what was adopted in the EU. Although momentum for AI regulation in the US is accelerating, few initiatives have succeeded in the legislature.

There is also increased activity at the state level, with different states intending to regulate different aspects of AI. For example:

  • The states of Colorado and Utah are the first to have successfully passed comprehensive, cross-sectoral AI governance laws that directly affect how the private sector uses AI systems. It is worth noting that while Utah's law focuses more on regulating the use of generative AI models, Colorado's Consumer Protections for Artificial Intelligence Act bears a high level of similarity to the EU's AI Act. This indicates that a certain “Brussels effect” is already taking place due to EU lawmaking designed to regulate AI systems and AI models, including the designation of certain AI systems as posing acceptable or unacceptable risk.
  • California is also pushing out multiple laws regulating artificial intelligence, though it focuses more on larger models, synthetic/generated content, and related transparency.
  • Multiple other states are processing laws that apply to specific use cases of certain AI models and AI systems, for example use cases related to employment.

The landscape of state AI laws in the US is changing fast, so it's worth following a legislation tracker from an authoritative source (here is one example) to stay up to date.

Although the US lacks a comprehensive AI regulatory framework, there is a strong sentiment among US authorities that AI-related issues, e.g. bias and potential discrimination, can and should be addressed under the current legislative framework. On April 25, 2023, the Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice's Civil Rights Division, and the Equal Employment Opportunity Commission released a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.” In it, the authorities promise to vigorously enforce their collective authorities and to monitor the development and use of automated systems.

A couple of initiatives also give a strong indication of the intended policy direction and create certain regulatory obligations. First, the White House released the “Blueprint for an AI Bill of Rights” in October 2022. Although currently a non-binding roadmap, it provides valuable guidance for entities on what to expect from future laws affecting AI usage in the US. While it does not have the force of law, it outlines key principles and recommendations that may shape legislative and regulatory frameworks. Understanding these guidelines can help organizations prepare for potential legal obligations and align their AI practices with anticipated developments.

Second, and more importantly, on October 30, 2023, President Biden released his long-awaited Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With the Executive Order, the President directs actions to protect Americans from the potential risks of AI systems and to strengthen the nation's AI capabilities, promoting scientific discovery, economic competitiveness, and national security.

"The surge in AI regulation does also come with risks. One is the challenge of finding a balance between empowering trust and ensuring safety versus overregulation turning into stifled innovation and economic standstill."

Aleksander Tsuiman, Head of Product Legal and Privacy, Veriff

2. United Kingdom

Although the UK's AI sector is booming (according to the UK's AI sector study), the country is relatively light on regulation.

The Department for Science, Innovation and Technology published a white paper on 29 March 2023 titled "AI Regulation: A Pro-Innovation Approach", which sets out the UK Government’s proposals to regulate artificial intelligence (AI) in a pro-innovation manner. The paper acknowledges the potential benefits of AI, such as improving healthcare, enhancing transport systems, and boosting economic productivity, while also recognizing the potential risks and challenges associated with this emerging technology.

Currently, the UK does not have comprehensive AI regulation in place, and at first glance there does not appear to be a serious proposal on the horizon. However, the UK government intends to give more powers to the existing sector-specific regulators to address AI risks around the following five principles: (i) safety, security, and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

The previous UK government confirmed that it would avoid “heavy-handed legislation” so as not to hinder the ability to respond to technological advances, and would instead “take an adaptable approach to regulating AI”, allowing regulators to use their expertise to tailor the implementation of the principles to the specific context of AI in their respective sectors. Regulators will likely be given a duty to apply the principles if they do not implement and enforce them voluntarily. Key regulators are encouraged to issue further guidance and resources on how to implement the five principles and how the principles will apply within their specific sectors.

For example, in April 2024, the UK's Financial Conduct Authority (FCA), the authority supervising financial services and fintech companies in the UK, published its AI update. Looking both backwards and forwards, the FCA stressed that it is collaborating with firms to understand how AI is used in the sector, while also clearly focusing on the increased risks around operational resilience, outsourcing, and critical third parties. Companies need to take this into account when asked about their AI usage.

However, gaps are likely to emerge between the various regulators' approaches, so legislation may be required to ensure the principles are considered consistently. The UK Government is also expected to re-evaluate its approach if the relatively decentralized model increases the “regulatory patchwork”.

The first signs of such re-evaluation are already here, as talk of regulating AI models and model providers at some level has intensified with the new Government taking office. As the UK seeks to strengthen its relationship with the EU, a potential “UK AI Bill” may be a step towards such alignment, especially considering that the UK, US, and EU have all signed the first international treaty addressing the risks of artificial intelligence.

3. Europe

In April 2021, the European Commission proposed the first comprehensive framework to regulate the use of AI. The European Union’s priority is to ensure that AI systems used and developed in the European Union are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

Like the EU's General Data Protection Regulation (GDPR), which regulates the processing of personal data, the EU AI Act could become a global standard, determining the extent to which AI has a positive rather than negative effect. The EU's AI regulation is already making waves internationally, and you can read more about it in our previous blog.

Want to learn more?

Talk to one of Veriff's compliance experts to see how IDV can help your business.
