
Artificial Intelligence Act: What can we learn from it?

Artificial Intelligence can bring a wide array of economic and societal benefits but also generate new risks for individuals. The Artificial Intelligence Act presents a risk-based regulatory approach to AI across the EU without unduly constraining or hindering technological development.


What is the Artificial Intelligence Act?

Main objectives

  • Ensure that AI systems available on the European market are safe and respect the fundamental rights of citizens and the values of the EU.
  • Ensure legal certainty to facilitate investment and innovation in AI.

  • Improve governance and effective enforcement of existing legislation on fundamental rights and safety requirements for AI systems.

  • Facilitate the development of a single market for safe, legal, and trustworthy AI applications, and prevent market fragmentation.

Which Companies Are Affected?

  • Providers who distribute or offer AI systems in the European Union, whether such providers are established in the EU or outside of it.
  • Users of AI systems located within the EU.
  • Providers and users of AI systems located in a country outside the EU, where the results produced by the system are used in the EU.

Fines: 2, 4 or 6%

Depending on the violation, fines of 2%, 4% or 6% of total worldwide annual turnover must be paid in case of non-compliance.

Member States will be responsible for designing the sanctions regime.
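As a rough illustration of the tiered fine structure, the percentages above can be applied to a company's worldwide annual turnover. Note that the mapping of violation categories to bands below, and the turnover figure, are assumptions for the example only, not taken from the Act's text:

```python
# Illustrative sketch of the AI Act's tiered fine ceilings: 2%, 4% or 6% of
# total worldwide annual turnover, depending on the violation. The category
# labels below are simplified placeholders, not the legal definitions.

FINE_RATES = {
    "incorrect_information": 0.02,  # 2% band (assumed mapping)
    "other_non_compliance": 0.04,   # 4% band (assumed mapping)
    "prohibited_practice": 0.06,    # 6% band (assumed mapping)
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the turnover-based fine ceiling for a given violation type."""
    return FINE_RATES[violation] * annual_turnover_eur

# Example: a company with EUR 500M worldwide turnover
print(max_fine("prohibited_practice", 500_000_000))  # 30000000.0
```

In practice the Act also sets fixed maximum amounts in euros, and the applicable ceiling is the higher of the two; the sketch covers only the percentage-based ceiling described above.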

Supervising Authorities

  • Europe: European Artificial Intelligence Board
  • France: TBD


Associated Regulations

The AI Act is part of the EU's wider digital regulation package and is therefore linked to the DSA, DGA, and DMA, as well as to the GDPR.

Artificial Intelligence Act timeline

Focus & Impacts

Key Focus

Risk-based Category of Obligations

The obligations in the text depend on the risk level of the AI system used (unacceptable, high, or non-high) and the actor involved (supplier, distributor, user, or other third parties). There are also specific obligations for importers of high-risk AI systems into the EU.

Regulatory Sandboxes

National authorities may establish regulatory sandboxes that offer a controlled environment for testing innovative technologies for a limited time. These sandboxes are based on a test plan agreed with the relevant authorities to ensure the compliance of the innovative AI system and to accelerate market access. SMEs and start-ups can have priority access to them.

High-risk AI Systems List

The list of high-risk systems is defined and updated by the European Commission to reflect the rapid evolution of technologies.

CE Marker & Registration

The AI Act creates a CE mark for high-risk AI systems. This marking is mandatory and is affixed following a conformity assessment, which may involve notified bodies. There is also an obligation to register stand-alone high-risk AI systems in a European database.


Prohibited Artificial Intelligence Practices

AI systems that contravene the values of the European Union by violating fundamental rights are prohibited, such as:

  • Subliminal manipulation of behavior;
  • Exploiting the vulnerabilities of certain groups to distort their behavior;
  • AI-based social scoring for general purposes by public authorities;
  • The use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement (with exceptions).

High-risk Systems (defined and listed by the EU Commission)

Companies are subject to several obligations related to documentation, risk management systems, governance, transparency, and safety, depending on their status (supplier, user, distributor, or other third party). These systems must also be registered in the EU database and bear a CE mark.

Specific Risk Systems

These are systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). For these systems, there is an obligation to disclose whether the content is generated through automated means or not.

Non-High-Risk Systems

Providers of non-high-risk systems are encouraged to voluntarily create and apply codes of conduct, which may include commitments to environmental sustainability, accessibility for people with disabilities, stakeholder participation in AI system design and development, and development team diversity.
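The four risk tiers above can be sketched as a simple classification structure. This is a minimal illustration only; the tier names and obligation summaries are paraphrased from this article, not from the legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified AI Act risk tiers (illustrative paraphrase, not legal text)."""
    PROHIBITED = "unacceptable risk: the practice is banned"
    HIGH = "high risk: documentation, risk management, CE marking, registration"
    SPECIFIC = "specific risk: transparency and disclosure obligations"
    NON_HIGH = "non-high risk: voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """Return the obligation summary attached to a risk tier."""
    return tier.value

print(obligations(RiskTier.HIGH))
```

In a real compliance tool, each tier would of course map to a structured checklist of obligations per actor rather than a one-line summary.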


How are High-Risk Systems Impacted?

Types of High-risk Systems

This includes AI systems that are the safety component of a product, or a product itself, requiring a third-party conformity assessment under existing regulations (Dir 2009/48/EC on the safety of toys, Reg (EU) 2016/424 on cableways, etc.).

It also includes stand-alone systems used in the areas listed in Annex III:

  • Biometric identification and categorization of humans
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management, and access to self-employment
  • Access to, and enjoyment of, essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

System Requirements

  • Risk Management System

Continuous iterative process run throughout the entire lifecycle of a high-risk AI system (identification, evaluation of risks, and adoption and testing of risk management measures)

  • Accuracy, Robustness, and Cybersecurity

Implementation of measures and information in the instructions

  • Human Oversight

Ensure effective human oversight during the period in which the AI system is in use

  • Transparency and Provision of Information to Users

Transparent design & instructions for users

  • Record-keeping

Design and development with capabilities enabling events to be recorded automatically

  • Technical Documentation

Demonstration of high-risk AI system compliance with requirements

  • Data and Data Governance

Ensure that training, validation, and testing data sets meet quality criteria
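Among the requirements above, record-keeping is the most directly code-facing. As a toy sketch only (not the Act's logging specification), automatically recording events during operation might look like:

```python
import logging

# Minimal sketch: automatically record each prediction event with a timestamp,
# so the system's operation can be traced after the fact. The model here is a
# placeholder average, purely for illustration.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("ai_system")

def predict(features):
    """Score an input and automatically log the event."""
    score = sum(features) / len(features)  # placeholder model
    logger.info("prediction input=%s score=%.3f", features, score)
    return score

predict([0.2, 0.8, 0.5])
```

A production system would additionally need tamper-evident storage and retention policies for these logs; the point here is only that event recording is built into the call path rather than bolted on.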

Focus on High-Risk Systems

Depending on their role (supplier, importer, distributor, or user), actors must meet the following obligations.

General Requirements

  • Ensure that the system is compliant
  • Do not distribute a non-compliant high-risk system; if the system is already on the market, take the necessary corrective actions
  • Ensure the relevance of the data entered
  • Stop using the system if it is considered to present risks to health, safety, or the protection of fundamental rights, or in the event of a serious incident or malfunction
  • Ensure that storage or transportation conditions do not compromise the system's compliance with requirements
  • Verify that the high-risk AI system bears the required CE mark of conformity

Processes

  • Have a quality management system (strategy, procedures, resources, etc.)
  • Carry out third-party monitoring: verify that the supplier and importer of the system have complied with the obligations set out in the regulation and that corrective action has been or is being taken
  • Keep and maintain logs automatically generated by the system when they are under their control
  • Write the technical documentation
  • Carry out the conformity assessment, draw up the EU declaration of conformity, and affix the CE marking
  • Design and develop systems with automatic event logging capabilities
  • Establish and document a post-market surveillance system

Transparency & Instructions

  • Design transparent systems
  • Ensure that the AI system is accompanied by instructions for use and the required documentation
  • Use and monitor systems in accordance with the instructions for use accompanying them
  • Draft the instructions for use

Information & Registration

  • Inform the competent national authorities in case of risks to health, safety, or the protection of fundamental rights, or in case of serious incidents and malfunctions
  • Inform the supplier or importer of a non-compliant high-risk system, as well as the competent national authorities
  • Inform the supplier or distributor, or the market surveillance authority if the supplier cannot be reached, when the system presents risks to the health, safety, or fundamental rights of the persons concerned
  • Register the system in the EU database

Contact us to learn more
