The use of artificial intelligence (AI) in the European Union will be regulated by the EU AI Act, the world’s first comprehensive AI law. Because the Act defines AI broadly, many businesses will be affected and should start preparing for compliance.
The world’s first regulation on AI
AI technologies are becoming increasingly powerful and ubiquitous. Used properly, they offer great value to individuals, society, and industry. However, like any advanced technology, they can also be used irresponsibly, unethically, or with malicious intent. The EU has therefore made it part of its digital strategy to enact the world’s first regulation on AI – the EU AI Act. By removing regulatory uncertainty, the EU wants to promote safe AI innovation and deployment while preventing use that violates EU values or exposes people to unacceptable risk.
Extensive scope and risk-based requirements
Given the broad and constantly changing range of AI solutions and applications, the EU AI Act defines AI equally broadly: essentially any data-driven system that is deployed in the EU, irrespective of where it is developed or sources its data, will fall under its purview. With such a sweeping definition, the EU takes a risk-based approach to regulating AI, imposing different requirements for different risk levels.
Unacceptable risk
AI systems that are deemed to pose a threat to people fall into the category of unacceptable risk. Examples include:
- Social scoring systems
- Systems that aim to manipulate children or other vulnerable groups
- Real-time remote biometric identification systems
Such systems are banned.
High risk
High-risk AI systems are systems that negatively affect safety or fundamental rights. These are divided into two subcategories:
1. AI systems used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts.
2. AI systems in specific areas that will have to be registered in an EU database:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Assistance in legal interpretation and application of the law
High-risk systems must be assessed before market introduction and throughout their lifecycle.
Low risk
Low-risk AI systems include chatbots and image-, audio-, and video-generating AI. Such systems must comply with transparency requirements: users must be informed that they are interacting with an AI system so that they can decide whether they wish to continue using it. Generative AI models, such as ChatGPT, must also be designed and trained to prevent the generation of illegal content, and their makers must publish summaries of the copyrighted data used for training.
Timeline
In April 2021, the European Commission proposed the first regulatory framework for AI. In June 2023, the members of the European Parliament adopted the Parliament’s negotiating position on the Act. The next step is for the EU member states in the Council to agree on the final form of the law. The ambition is to enact the law by the end of 2023 and make it fully applicable after a two-year grace period.
Your company
DNV encourages all businesses that will or may be covered by the law to start preparing. If your company uses or plans to deploy AI systems in the EU, now is the time to take the first steps towards compliance:
- Familiarize yourself with the Act and assess whether your company will be affected.
- Determine the risk categories of your AI systems.
- Ensure and document compliance by building system-specific conformity cases.
- Seek advice to avoid surprises.
The EU AI Act is intentionally sweeping and generic. To bridge the gap between the law and your specific conformity case, DNV has released a Recommended Practice (RP) on the Assurance of AI-enabled systems. This RP provides your company with a practical interpretation of the EU AI Act that allows you to identify the applicable requirements and collect evidence substantiating your conformity claims.