In July 2024, the Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union. Regulation (EU) 2024/1689 is the first comprehensive legal framework on AI worldwide. The AI Act addresses the risks of artificial intelligence by regulating the development, placing on the market, putting into service, and use of AI systems in the European Union.
The Regulation defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
To whom does the AI Act apply?
The AI Act applies to both public and private actors, located in the EU or not, as long as the AI system is used or affects users in the Union. Specifically, the Regulation applies to:
- Providers of AI systems or general-purpose AI models, whether based in the EU or not
- Deployers of AI systems in the EU
- Providers and deployers of AI systems, whether based in the EU or not, where the output produced by the AI system is used in the Union
- Importers and distributors of AI systems
- Manufacturers placing on the market or putting into service an AI system together with their product and under their name or trademark
- Authorised representatives of providers that are not established in the Union
- Affected persons who are located in the Union
According to the definitions provided in Article 3, a provider is “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”. A deployer, in turn, is “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”. In practice, a provider is, for instance, the developer of a system, while a deployer is the entity (e.g., a company) buying or using that system for its activities.
Classification of AI systems: a risk-based approach
The Regulation classifies AI systems according to their risk level:
- Unacceptable risk covers all AI systems that pose a clear threat to the safety, livelihoods, and rights of people. This includes, for instance, toys using voice assistance that could encourage dangerous behaviours (cognitive behavioural manipulation), systems that classify people based on behaviour, socio-economic status, or personal characteristics (social scoring), and biometric categorisation systems that infer sensitive characteristics. All systems falling within this risk category are banned.
- High-risk includes, among others, AI technologies used in law enforcement, critical infrastructure, education or vocational training (e.g., exam scoring), safety components of products (or AI products) falling under the Union legislation listed in Annex I, employment (e.g., CV sorting), essential private and public services, the administration of justice and democratic processes, and migration, asylum, and border control management. Such technologies are subject to strict requirements. AI systems that fulfil the conditions of Article 6(1) are classified as high-risk, and Annex III lists additional categories of AI systems that are also considered high-risk.
- Limited risk covers systems whose potential risks stem from a lack of transparency in their usage. For instance, this group includes certain AI systems interacting with humans, such as chatbots, where it must be made clear to users that they are interacting with a machine and that the content is AI-generated.
- Minimal or no-risk AI technologies can be used freely; the Regulation encourages their providers to adhere to voluntary codes of conduct.
Conformity assessments of high-risk AI
Stricter requirements apply to providers of high-risk AI systems. According to Chapter III, Section 2 of the Regulation, providers of high-risk AI systems must:
- Establish a risk management system, planned and run throughout the entire lifecycle of a high-risk AI system, with regular reviews and updates.
- Apply data governance and management practices when the system uses techniques involving the training of AI models with data.
- Compile technical documentation to demonstrate compliance. The technical documentation must include, at least, the elements set out in Annex IV.
- Implement a system for record-keeping of events (logs) over the lifetime of the system.
- Design and develop the system in such a way that it can be effectively overseen by humans and its operation is sufficiently transparent to allow deployers to interpret the system’s output and use it appropriately.
- Design the system to achieve appropriate levels of accuracy, robustness, and cybersecurity throughout its lifecycle.
- Have a Quality Management System (QMS) in place.
- Provide instructions for use (IFUs).
Deployers of such systems have certain obligations too, such as verifying that the provider has applied the correct conformity assessment procedure.
What is a general-purpose AI (GPAI) model?
GPAI models are AI technologies capable of performing a wide range of tasks and of emulating human intelligence across different activities (e.g., large generative AI models). GPAI models serve different purposes and can either be used directly or integrated into other AI systems. Such models – when not posing systemic risks – are subject mainly to transparency requirements.
Overall, all providers of GPAI models must at least compile technical documentation in accordance with Annex XI, provide information and documentation to downstream providers that want to integrate the GPAI model into their own AI systems, comply with copyright law, and make available a summary of the content used for training the GPAI model. Additional requirements apply if GPAI models are used or integrated in high-risk technologies.
Timelines and implementation
The AI Act applies from 2 August 2026, 24 months after its entry into force. However, there are some exceptions. Chapters I and II, on “general provisions” and “prohibited practices”, apply from 2 February 2025, while the following chapters and articles apply from 2 August 2025:
- Chapter III, Section 4, on notifying authorities and notified bodies
- Chapter V on general-purpose AI models (GPAI models)
- Chapter VII on governance
- Chapter XII on penalties – with the exception of Article 101 on fines for providers of general-purpose AI models
- Article 78 on confidentiality
AI systems that are components of the large-scale IT systems listed in Annex X, and that have been placed on the market or put into service before 2 August 2027, must be brought into compliance with the AI Act by 31 December 2030.
Obligations for high-risk systems under Article 6(1) apply from 2 August 2027. Nonetheless, high-risk AI systems that are not components of the Annex X large-scale IT systems, and that have been placed on the market or put into service before 2 August 2026, must comply with the AI Regulation only once they undergo significant changes in their design. For high-risk AI systems intended to be used by public authorities, providers and deployers must comply by 2 August 2030.
Providers of general-purpose AI models placed on the market before 2 August 2025 must take the necessary steps to comply with the Regulation by 2 August 2027.
Do you have questions on AI in Europe? Subscribe for free and contact us here (prodlaw@obelis.net)!
References:
European Union (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. Retrieved on 25.07.2024.
European Commission (2024). Shaping Europe’s digital future – AI Act. Retrieved on 25.07.2024.