The EU regulation on artificial intelligence entered its final phase when the EU Parliament adopted its position on the AI Act on 14 June 2023.
The risk-based approach to be applied to AI systems has been confirmed and clarified. The EU Parliament has also decided to apply specific rules tailored to the characteristics of the various AI systems.
The AI Act will establish obligations for providers of AI systems depending on the level of risk such systems could generate for users. Four risk categories have currently been defined:
- Unacceptable risk (such as predictive policing systems)
- High risk (such as data safety)
- Limited risk
- Minimal or no risk
AI systems presenting an unacceptable level of risk, such as large-scale biometric systems using sensitive data, will be banned.
The definition of high-risk AI systems has been extended to give broader protection to data subjects and users.
AI systems posing a limited risk to data subjects and users will be subject to transparency requirements, whereas AI systems with minimal risk will not be subject to any specific requirement under the AI Act.
To increase transparency towards the public, the members of the EU Parliament wish to require providers of high-risk AI systems to register in a dedicated EU database. This database would be maintained in compliance with the EU GDPR.
DL Corporate & Regulatory regularly assists tech companies with the roll-out of innovative and AI-based projects. Vincent de Bonnafos, one of our lawyers, is a member of DL4T, a research project at Nice University that is a leader in the field of regulations applicable to AI and deep technologies.