
Artificial Intelligence can bring a wide array of economic and societal benefits but also generate new risks for individuals. The Artificial Intelligence Act presents a risk-based regulatory approach to AI across the EU without unduly constraining or hindering technological development.
The Act pursues three objectives:
- Ensure legal certainty to facilitate investment and innovation in AI.
- Improve governance and effective enforcement of existing legislation on fundamental rights and safety requirements applicable to AI systems.
- Facilitate the development of a single market for safe, legal, and trustworthy AI applications, and prevent market fragmentation.
Fines for non-conformity amount to 2%, 4%, or 6% of total worldwide annual turnover, depending on the violation. Member States will be responsible for designing the sanctions regime.
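Under the 2021 Commission proposal, each percentage tier is paired with a fixed ceiling (EUR 30m/6% for prohibited practices or data-governance breaches, EUR 20m/4% for other obligations, EUR 10m/2% for supplying incorrect information), and the applicable cap is whichever is higher. A minimal sketch of that rule, using tier names chosen here for illustration:

```python
# Illustrative sketch of the fine caps in the 2021 AI Act proposal (Art. 71).
# The maximum fine is a fixed amount or a share of total worldwide annual
# turnover, whichever is HIGHER. Tier names are hypothetical labels; the
# figures come from the Commission proposal and may change in the final text.

FINE_TIERS = {
    "prohibited_practice_or_data_governance": (30_000_000, 0.06),
    "other_obligations": (20_000_000, 0.04),
    "incorrect_information": (10_000_000, 0.02),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a company with EUR 1bn turnover, 6% (EUR 60m) exceeds the EUR 30m floor:
print(max_fine("prohibited_practice_or_data_governance", 1_000_000_000))
```

For smaller companies the fixed amount dominates: 2% of EUR 100m is only EUR 2m, so the EUR 10m floor applies.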
The AI Act is part of the European digital strategy and is therefore linked to the DSA, DGA, DMA, etc., as well as to the GDPR.
The obligations in the text depend on the risk level of the AI system used (unacceptable, high, or non-high) and the actor involved (provider, distributor, user, or other third parties). There are also specific obligations for importers of high-risk AI systems into the EU.
National authorities may establish regulatory sandboxes that offer a controlled environment for testing innovative technologies for a limited time. These sandboxes are based on a test plan agreed with the relevant authorities to ensure the compliance of the innovative AI system and to accelerate market access. SMEs and start-ups can have priority access to them.
The list of high-risk systems is defined and updated by the European Commission to reflect the rapid evolution of technologies.
The AI Act introduces a CE marking for high-risk AI systems. This marking is mandatory and follows a conformity assessment, carried out by notified bodies where required. There is also an obligation to register stand-alone high-risk AI systems in a European database.
AI systems that contravene the values of the European Union by violating fundamental rights are prohibited, such as systems using subliminal techniques to materially distort a person's behaviour in a harmful way, systems exploiting the vulnerabilities of specific groups, social scoring by public authorities, and (subject to narrow exceptions) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
Companies are subject to several obligations related to documentation, risk management, governance, transparency, and safety, depending on their status (provider, user, distributor, or other third party). These systems must also be registered in the EU database and bear the CE marking.
These are systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). For these systems, there is an obligation to disclose that the person is interacting with an AI system or that the content has been artificially generated or manipulated.
Voluntary adoption and application of codes of conduct that may include commitments to environmental sustainability, accessibility for people with disabilities, stakeholder participation in AI system design and development, and development team diversity.
This includes AI systems that are safety components of a product, or products requiring a third-party conformity assessment under existing sectoral legislation (Dir 2009/48/EC on the safety of toys, Reg (EU) 2016/424 on cableway installations, etc.).
It also includes stand-alone AI systems listed in Annex III, covering areas such as biometric identification and categorisation, management of critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice.
Requirements for high-risk AI systems include:
- A continuous, iterative risk management process run throughout the entire lifecycle of the system (identification and evaluation of risks, adoption and testing of risk management measures)
- Implementation of risk-mitigation measures and the corresponding information in the instructions for use
- Human oversight during the period in which the AI system is in use
- Transparent design and instructions for users
- Design and development with capabilities enabling events to be recorded automatically (record-keeping)
- Technical documentation demonstrating the high-risk AI system's compliance with the requirements
- Training, validation, and testing data sets that meet quality criteria
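The record-keeping requirement above (automatic event logging) could be sketched, purely as an illustration and not as a compliance recipe, as a thin wrapper that records every use of an AI system as a structured, timestamped log event. All names here (`LoggedModel`, the model identifier, the stand-in scoring function) are hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: each prediction is recorded as a structured JSON event,
# in the spirit of the automatic record-keeping requirement. Real compliance
# needs much more (retention periods, log integrity, traceability of inputs).

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

class LoggedModel:
    def __init__(self, model_id: str, predict_fn):
        self.model_id = model_id      # hypothetical system identifier
        self.predict_fn = predict_fn  # the underlying model callable

    def predict(self, features: dict):
        result = self.predict_fn(features)
        # Emit a timestamped audit event for this use of the system.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": self.model_id,
            "input": features,
            "output": result,
        }))
        return result

# Usage with a trivial stand-in model:
model = LoggedModel("credit-scorer-v1",
                    lambda f: "approve" if f["score"] > 600 else "review")
print(model.predict({"score": 700}))  # prints "approve" after logging the event
```

A wrapper like this keeps logging orthogonal to the model itself, which also matches the user-side obligation below to keep logs generated by the system when they are under the user's control.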
| | OBLIGATIONS FOR PROVIDERS | OBLIGATIONS FOR DISTRIBUTORS | OBLIGATIONS FOR USERS |
|---|---|---|---|
| GENERAL REQUIREMENTS | Ensure that the system is compliant. Take the necessary corrective actions if the high-risk AI system is not compliant. | Do not distribute a non-compliant high-risk system; if the system is already on the market, take the necessary corrective actions, withdraw it, or recall it. Ensure that storage or transport conditions do not compromise the system's compliance with the requirements. Verify that the high-risk AI system bears the required CE marking of conformity. | Ensure the relevance of the data entered. Stop using the system if it is considered to present risks to health, safety, or the protection of fundamental rights, or in the event of a serious incident or malfunction. |
| PROCESSES | Have a quality management system (strategy, procedures, resources, etc.). Write technical documentation. Carry out the conformity assessment, draw up the EU declaration of conformity, and affix the CE marking. Design and develop systems with automatic event-logging capabilities. Maintain logs generated automatically by the system. Establish and document a post-market surveillance system. | Verify that the provider and the importer of the system have complied with the obligations set out in the Regulation and that corrective action has been or is being taken. | Keep logs automatically generated by the system if they are under their control. |
| TRANSPARENCY & INSTRUCTIONS | Design transparent systems. Draft instructions for use. | Ensure that the AI system is accompanied by its instructions for use and the required documentation. | Use and monitor the systems in accordance with the instructions for use accompanying them. |
| INFORMATION & REGISTRATION | Inform the competent national authorities of risks to health, safety, or the protection of fundamental rights, and of serious incidents or malfunctions. Register the system in the EU database. | Inform the provider/importer of a non-compliant high-risk system, as well as the competent national authorities. | Inform the provider or distributor, or the market surveillance authority if the provider cannot be reached, when the system presents risks to the health, safety, or fundamental rights of the persons concerned. |