Inside the EU AI Act: Key Insights for AI Stakeholders
AI Act requirements
As you may have read in our previous blog post, the EU AI Act, which entered into force in August 2024, regulates artificial intelligence in the European Union. But what types of AI systems does the AI Act cover, and what are its requirements for providers and deployers?
Recap of the different roles introduced by the AI Act
First, let us go through the different roles that an organization may have under the AI Act:
A provider is an organization or person that either (a) develops an AI system (or a general-purpose AI model1) or (b) has one developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Whereas providers will face the highest number of requirements, deployers will be the largest group of organizations affected by the AI Act. A deployer is someone who uses an AI system under its authority. As mentioned in the previous blog post, these are companies using AI tools, such as Microsoft Copilot or ChatGPT, to support their work.
In addition to the above roles, the AI Act acknowledges different actors throughout the supply chain, namely manufacturers, authorized representatives, importers, and distributors of AI systems.2
Defining AI systems and their scope of application
The AI Act defines AI systems as follows:
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
In short, for the AI Act to apply, the system needs to operate with at least some degree of autonomy, that is, at least partly without direct human intervention.
The AI Act regulates AI systems and general-purpose AI models that are placed on the EU market or AI systems put into service in the EU. Certain intended use cases, such as those of the defence industry and scientific research, are exempted from the scope.
It is worth noting that whether the AI Act applies depends on the product, not on the organization. There is, for example, no organizational size threshold for determining applicability (unlike certain other pieces of EU legislation, such as the NIS2 Directive). To a certain extent, it also does not matter where the organization is established: a provider of high-risk AI systems will have to abide by the AI Act’s requirements even if the company is registered in a third country.
What is the risk-based approach in the EU AI Act?
The AI Act divides AI systems into different categories based on their level of risk. While most requirements are tied to high-risk AI systems, even a provider of a non-high-risk AI system will have to register itself and the system before market entry in certain cases, namely when an AI system that is presumed high-risk has been assessed as non-high-risk. Providers of non-high-risk AI systems are also required to fulfil certain transparency requirements whenever their AI systems are intended to interact directly with natural persons. In addition, general-purpose AI models are subject to specific requirements. For more information on the other risk categories, see DNV’s whitepaper on new AI regulation.
High-risk AI systems include, for example, systems designed to function as safety components in the management and operation of certain critical infrastructure, or systems that perform profiling of individuals, such as analyzing and filtering job applications. On the other hand, some AI systems can also be labelled as non-high-risk if they do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.
AI Act Article 6 details the classification rules for high-risk and non-high-risk AI systems, and Annex III provides more concrete examples.
Providers will face obligations both before and after market entry
Following the AI Act’s risk-based approach, providers of high-risk3 AI systems will face the most requirements. For example, providers must establish, implement, document, and maintain a quality management system and a risk management system, apply cybersecurity measures for their high-risk AI system, and ensure up-to-date technical documentation.
Before being placed on the market, certain high-risk AI systems will have to undergo conformity assessment procedures and be issued a CE marking to indicate their conformity with the AI Act. Post-market obligations of providers include, for example, reporting serious incidents or malfunctions.
The AI Act introduces a wide set of obligations for AI system providers, and the examples mentioned above present a sample of them.
Deployers’ obligations will largely depend on the types of AI systems they use
The AI Act introduces operational obligations for deployers, such as implementing measures to ensure that high-risk AI systems are used in accordance with their instructions for use. In addition, the AI Act introduces risk-management obligations for deployers of high-risk AI systems. For example, certain deployers will be required to perform fundamental rights impact assessments prior to using high-risk AI systems. This assessment can also be integrated into existing data privacy assessment processes.
The AI Act also includes transparency obligations. For example, both deployers and providers of AI systems that generate certain audio, image, video, or text content (such as deepfakes) must disclose that the content has been artificially generated or manipulated.
Deployers that are public authorities will also have specific registration obligations when using certain high-risk AI systems.
The requirement of AI literacy affects both providers and deployers of AI systems
The AI Act requires a sufficient level of AI literacy in both provider and deployer organizations. AI literacy means giving employees the skills, knowledge, and understanding necessary to make informed decisions when deploying AI, as well as awareness of its potential opportunities and risks.

Here’s our Awareness colleague Hanna Raitanen’s approach to raising an organization’s AI understanding: The measures an organization takes to ensure AI awareness should consider employees’ technical knowledge, experience, education, and training, as well as the context in which the AI systems are used. The right measures for building AI awareness will depend on the organization and the specific AI systems it provides and/or deploys. Tailored communication and training are essential for increasing awareness, and they can be tracked to verify compliance. By focusing on clear, relevant training and communication, organizations can ensure that their employees are able to work responsibly with AI.
More to come on AI Act topics
In our next blog, we will investigate how the AI Act relates to existing and upcoming management system standards.
Need expert advice on the AI Act or other cybersecurity regulations? We’re here to help. Our product security experts assist organizations in the secure and lawful development of AI systems and in achieving compliance. Whether your organization is developing or acquiring AI systems, we help ensure they meet the expected standards. Our experts also provide technical security testing and assessments for all organizations utilizing AI solutions. Reach out to us, and let’s agree on the next steps.