The EU Artificial Intelligence Act – what does it change and how to prepare for it?

14 March 2024

On 13 March 2024, the European Parliament adopted the EU Artificial Intelligence Act (the “AI Act”). This is a good time for a brief summary of where we are and what lies ahead in this field.

We invite you to read the following overview of the key issues raised by the AI Act. There will certainly be time for in-depth analyses later. However, we are convinced that it is important to start thinking now about the right approach to managing AI in an organisation. On the one hand, we must not forget that the use of AI solutions also directly engages other regulations that are already fully in force, for example in the areas of privacy protection, intellectual property and cybersecurity. On the other hand, proper implementation of AI governance principles (drawing on existing norms and standards) is the first step towards ensuring compliance with the provisions of the new EU regulation.

What is the AI Act and what does it regulate?

The AI Act is a wide-ranging EU regulation on artificial intelligence and, as a directly applicable act of law, does not require transposition into national law. On this basis, AI systems will, in principle, be regulated uniformly across the EU, ensuring a consistent approach and a single standard of protection in all member states.

The AI Act establishes rules on transparency, the use of AI, and human oversight of AI. It is therefore as much about requirements related to placing an AI system or model on the market as it is about the further use of AI. In this sense, it can be said that the AI Act addresses the entire AI life cycle and imposes certain obligations on each participant in this cycle (providers, importers, distributors and deployers). The requirements and rules will vary depending on the role played in relation to an AI solution, but above all on the category of the solution and the risks assigned to that category.

The AI Act also sets out a list of prohibited practices in the use of AI. These are particularly risky uses of AI relating to the placing on the market, putting into service and use of a given system. For example, the AI Act prohibits AI systems that: (i) deploy subliminal techniques or deliberate manipulative techniques with the purpose or effect of distorting a person’s behaviour; (ii) are used to assess or predict the likelihood of an individual committing a criminal offence solely based on profiling; (iii) compile facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage; or (iv) are used to infer emotions in workplaces and educational institutions.

An AI system and a GPAI system – a few definitions

The AI Act defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers how to generate outputs from the input it receives. This definition was amended many times, which is probably a good indication of the complexity of the legislative work. In the end, we have a very broad definition covering a wide range of AI-based systems and tools.

One of the subcategories of AI systems regulated by the AI Act is ‘general-purpose artificial intelligence’ (GPAI). The AI Act includes a definition of ‘GPAI model’. This is an AI model that displays a significant level of generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. The AI Act provides additional requirements for GPAI models, in particular with regard to evaluation, estimation of systemic risk, and notification obligations. Providers of such AI models will be required, among other things, to draw up and regularly update their technical documentation. The documentation will need to describe the process of training and testing the model. The AI Act also stipulates that the documentation of GPAI models should be made available to other providers who wish to use them as part of their AI system. Importantly, providers of such models will be required to have a policy in place to comply with the EU Copyright Directive.

High-risk systems – key requirements

Some AI systems can be categorised as ‘high-risk systems’. It is these that are most affected by requirements and obligations. High-risk AI systems include biometric identification systems and systems used as part of critical infrastructure or for recruitment purposes, including job application filtering and candidate evaluation (systems listed in Annex III to the AI Act).

The consequence of classifying a system as high-risk will be the need to comply with a number of additional requirements and obligations. In particular, this includes the obligation to have a risk management system in place for such AI systems. In addition, the data used to train high-risk AI systems will have to meet certain quality criteria and be managed in accordance with the criteria set out in the AI Act. The regulation also sets out specific requirements for event recording, transparency of high-risk AI systems, and human oversight of their operation.

It is also worth noting that prior to the deployment of a high-risk AI system, the deployer will have to assess the impact that the use of such a system may have on fundamental rights (a fundamental rights impact assessment – FRIA). Such an assessment will need to include: (i) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; (ii) a description of the period during which each high-risk AI system is intended to be used and the frequency of such use; and (iii) the categories of natural persons and groups likely to be affected by its use in a given context. This is similar to the Data Protection Impact Assessment (DPIA) already known from Article 35 of the GDPR. Moreover, the AI Act explicitly refers to the DPIA, allowing the FRIA to be treated as complementary to it in certain cases.

Administrative fines

There are further similarities with the GDPR. They relate, among other things, to the structure and amount of administrative fines imposed by the competent supervisory authorities. In this respect, the AI Act provides, for example, for fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices, and of up to EUR 7.5 million or 1% of total worldwide annual turnover for supplying incorrect information to the competent authorities.
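The tiered cap described above is simple arithmetic: the fixed amount or a percentage of total worldwide annual turnover, whichever is higher. A minimal sketch (the function name and turnover figures are illustrative, not taken from the AI Act):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an AI Act fine: the fixed cap or a percentage of
    total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Prohibited AI practices: EUR 35 million or 7% of turnover.
# For a company with EUR 2 billion turnover, 7% (EUR 140 million) applies.
print(max_fine(2_000_000_000, 35_000_000, 0.07))

# Supplying incorrect information: EUR 7.5 million or 1% of turnover.
print(max_fine(2_000_000_000, 7_500_000, 0.01))
```

For smaller companies the fixed cap dominates: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million ceiling would apply instead.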

Important dates – when will the AI Act come into force?

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU, but most of its provisions will not apply until 24 months after entry into force. In practice, this means a two-year period to achieve compliance with the regulation. However, some provisions will take effect earlier, i.e. 6 or 12 months after entry into force. This applies, in particular, to prohibited uses of AI and selected administrative fines.
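The staggered deadlines all count forward from the publication date, which is not yet known at the time of writing. A hedged sketch of the calculation, using a purely hypothetical publication date as a placeholder:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day for short months."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

publication = date(2024, 6, 1)  # hypothetical placeholder date
entry_into_force = publication + timedelta(days=20)
prohibitions_apply = add_months(entry_into_force, 6)    # banned AI practices
gpai_rules_apply = add_months(entry_into_force, 12)     # selected earlier provisions
general_application = add_months(entry_into_force, 24)  # most provisions
```

With the placeholder date above, full application would fall two years and 20 days after publication; substituting the real Official Journal date gives the actual deadlines.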

We would like to invite you to the free webinar “The AI Act and AI governance: how to prepare for the new AI regulations”, which will take place on 27 March 2024. To register, please fill out the form available >>here<<.
