The Regulation on Artificial Intelligence (RIA or AI Act) was published in the Official Journal of the European Union on July 12, 2024. First proposed in April 2021 by the European Commission, the regulation aims to strengthen the safety and fundamental rights of users while fostering innovation and trust in AI.
Artificial Intelligence (AI) is rapidly transforming many sectors of our society, from healthcare and education to finance and transportation (not forgetting customer service). However, this powerful technology also poses challenges in terms of safety, ethics and human rights. To address these concerns, the European Union has adopted the AI Act, a regulatory framework designed to ensure the safe and ethical development and use of AI.
The RIA, also known as the AI Act, is the world's first legal framework specifically dedicated to regulating artificial intelligence. Its main aim is to promote trustworthy AI: ensuring that AI systems respect fundamental rights, safety and ethical principles, while addressing the risks posed by powerful and influential AI models.
The AI Act aims to provide AI developers and users with clear requirements and precise obligations regarding the use of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs). The law is part of a broader set of policy measures, including the AI Innovation Package and the Coordinated AI Plan. Together, these initiatives aim to ensure the safety and fundamental rights of citizens and businesses while boosting adoption, investment and innovation in AI across the EU.
The AI Act ensures that Europeans can have confidence in AI and its benefits. While the majority of AI systems present little or no risk and can help solve many societal challenges, some systems pose significant risks that need to be managed to avoid undesirable consequences.
The RIA takes a risk-based approach, classifying AI systems into four levels: unacceptable risk, high risk, limited risk and minimal risk.
AI systems that pose a clear threat to people's safety, livelihoods and rights are banned outright. Examples include AI systems that manipulate human behavior to cause harm, or that exploit the vulnerabilities of specific groups, such as children.
High-risk AI systems are those that can affect people's safety or fundamental rights, justifying strict requirements for their development and deployment. Areas concerned include critical infrastructure, education, employment, essential public services, law enforcement and migration management. Specific examples:
- CV-sorting software used in recruitment;
- AI-based credit scoring that can deny citizens a loan;
- automated scoring of exams that determines access to education;
- safety components of critical infrastructure, such as transport networks.
Requirements for these systems include:
- adequate risk assessment and mitigation systems;
- high-quality datasets to minimize the risk of discriminatory outcomes;
- logging of activity to ensure the traceability of results;
- detailed technical documentation;
- clear information for deployers and appropriate human oversight;
- a high level of robustness, cybersecurity and accuracy.
In principle, real-time remote biometric identification in publicly accessible spaces is prohibited, with narrow exceptions (e.g. the search for a missing child or the prevention of an imminent terrorist threat), subject to judicial authorization and appropriate limits.
Limited-risk AI systems are subject to transparency obligations to ensure user trust and understanding. For example, AI systems that interact with humans, such as chatbots, must inform users that they are communicating with a machine, enabling users to make informed decisions.
Specific obligations include:
- informing users that they are interacting with an AI system;
- labelling AI-generated or AI-manipulated content (such as deepfakes) as artificial;
- disclosing when emotion recognition or biometric categorization systems are in use.
These measures are designed to increase transparency and foster public confidence in the use of AI.
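To make this concrete, here is a minimal sketch in Python of how a customer-service chatbot could satisfy the disclosure obligation. The names (`Conversation`, `AI_DISCLOSURE`) are our own illustrations, not terms from the regulation: the first reply of every conversation announces that an AI is answering.

```python
from dataclasses import dataclass, field

# Illustrative disclosure text; actual wording is up to the provider.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Type 'human' at any time to reach a human agent."
)

@dataclass
class Conversation:
    messages: list[str] = field(default_factory=list)

    def reply(self, user_message: str, generated_answer: str) -> str:
        """Return the bot's next message, prefixing the AI disclosure
        on first contact (the limited-risk transparency obligation)."""
        parts = []
        if not self.messages:  # first exchange: disclose the AI nature up front
            parts.append(AI_DISCLOSURE)
        self.messages.append(user_message)
        parts.append(generated_answer)
        return "\n".join(parts)

conv = Conversation()
print(conv.reply("Where is my parcel?", "Your parcel ships tomorrow."))
```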
For minimal-risk AI systems, the AI Act imposes no specific restrictions. These applications, such as AI-powered video games or spam filters, are widely used without strict oversight. However, providers are encouraged to adopt voluntary codes of conduct to promote responsible practices.
The majority of AI systems currently in use in the EU fall into this category, allowing free use and promoting innovation without red tape.
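As a schematic summary of the four tiers described above, the sketch below maps each risk level to its headline obligation. It is an illustration of the risk-based logic only (our own paraphrase, not legal advice):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific restrictions

# Simplified mapping of each tier to its headline obligation.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
    RiskLevel.HIGH: "conformity assessment, documentation, human oversight",
    RiskLevel.LIMITED: "inform users that they are interacting with an AI",
    RiskLevel.MINIMAL: "voluntary codes of conduct encouraged",
}

def headline_obligation(level: RiskLevel) -> str:
    return OBLIGATIONS[level]

print(headline_obligation(RiskLevel.LIMITED))
```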
The RIA also provides a framework for general-purpose AI models, such as the large language models (LLMs) offered by companies like Mistral AI or OpenAI. Because these models can be put to a wide variety of tasks, they do not fit neatly into the risk categories above.
For these models, the RIA imposes several levels of obligations, ranging from minimal transparency to in-depth assessment and mitigation of systemic risks, such as major accidents, cyber-attacks, and discriminatory bias.
The AI Act establishes different levels of requirements in terms of transparency, security and oversight, mainly based on the classification of AI systems according to their level of risk. Here is an overview of the main requirements in these areas:
Transparency requirements vary according to the level of risk:
- High-risk systems: detailed technical documentation, clear instructions for deployers and registration in an EU database.
- Limited-risk systems: users must be told that they are interacting with an AI, and AI-generated content must be labelled as such.
- Minimal-risk systems: no specific transparency obligations.
Safety requirements are particularly stringent for high-risk systems:
- a risk management system maintained throughout the system's life cycle;
- data governance measures to ensure the quality of training data;
- a high level of accuracy, robustness and cybersecurity;
- a conformity assessment before the system is placed on the market.
Human oversight is a key element of the AI Act, especially for high-risk systems:
- systems must be designed so that natural persons can effectively oversee them;
- operators must be able to intervene in, or interrupt, the system's operation;
- those overseeing a system must understand its capacities and limitations, to guard against automation bias.
It is important to note that these requirements are more stringent for high-risk systems, while minimal-risk systems are largely exempt from specific regulation. The overall aim is to ensure that AI is used safely, transparently and ethically, while fostering innovation in the field.
To understand the impact of this law on customer service, consider chatbots and virtual assistants, which are generally classified as minimal or limited risk. Companies will have to:
- clearly inform customers that they are interacting with an AI rather than a human;
- ensure that AI-generated responses are identifiable as such;
- offer an easy way to escalate the conversation to a human agent.
If these systems can make significant decisions, such as granting refunds, they could be reclassified as high risk. Companies will then have to:
- implement a risk management system and appropriate human oversight;
- log decisions to ensure traceability and allow audits (see the sketch below);
- maintain detailed technical documentation and undergo conformity assessment.
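Here is a sketch of what the logging and oversight points could look like in practice. The function name, threshold and log format are hypothetical choices for illustration, not a compliance recipe: every automated refund decision is written to an audit trail, and low-confidence cases are routed to a human agent.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("refund_decisions")
logging.basicConfig(level=logging.INFO)

CONFIDENCE_THRESHOLD = 0.85  # illustrative value, set by each company's risk policy

def decide_refund(order_id: str, ai_score: float) -> str:
    """Sketch of a high-risk decision path: every decision is logged
    for traceability, and low-confidence cases go to a human."""
    record = {
        "order_id": order_id,
        "ai_score": ai_score,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if ai_score < CONFIDENCE_THRESHOLD:
        record["outcome"] = "escalated_to_human"  # human oversight requirement
    else:
        record["outcome"] = "auto_approved"
    logger.info(json.dumps(record))  # audit trail for traceability
    return record["outcome"]

print(decide_refund("ORD-1042", ai_score=0.62))  # -> escalated_to_human
```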
For Klark, the AI Act is both a challenge and an opportunity. We already integrate GDPR best practices into our products and will adopt the AI Act as a mandatory framework for our future features. Our chatbot meets the human oversight requirements, offering escalation to a human agent when needed, and we state explicitly that it is "powered by Klark". By anticipating these constraints, we reinforce user confidence and avoid costly compliance retrofits.
The AI Act represents a major step forward in the regulation of artificial intelligence in Europe, establishing a clear framework to guarantee the safety, ethics and transparency of AI systems. By classifying AI systems according to their level of risk, this legislation offers greater protection for users while stimulating innovation and confidence in AI technologies.
For Klark, the AI Act is an opportunity to demonstrate our commitment to responsible and transparent practices. By integrating these regulatory requirements into our products from the outset, we are building customer confidence and ensuring the compliance of our AI solutions.
Ultimately, the AI Act aims to create an environment where AI can thrive safely, benefiting society as a whole. By adopting these new rules, companies like Klark can not only comply with the legislation, but also lead the industry towards a more ethical and trustworthy future.