The EU Artificial Intelligence Act (AI Act) and its impact on customer service

The Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union on July 12, 2024. First proposed by the European Commission in April 2021, the regulation aims to strengthen the safety and fundamental rights of users while fostering innovation and trust in AI.

Artificial Intelligence (AI) is rapidly transforming many sectors of our society, from healthcare and education to finance and transportation (not forgetting customer service). However, this powerful technology also poses challenges in terms of safety, ethics and human rights. To address these concerns, the European Union has adopted the AI Act, a regulatory framework designed to ensure the safe and ethical development and use of AI.

AI Act

What is it? 

The AI Act is the world's first comprehensive legal framework specifically dedicated to the regulation of artificial intelligence. Its main aim is to promote trustworthy AI by ensuring that AI systems respect fundamental rights, safety and ethical principles, while addressing the risks posed by powerful and influential AI models.

The AI Act aims to provide AI developers and users with clear requirements and precise obligations regarding the use of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs). The law is part of a broader set of policy measures, including the AI Innovation Package and the Coordinated AI Plan. Together, these initiatives aim to ensure the safety and fundamental rights of citizens and businesses while boosting adoption, investment and innovation in AI across the EU.

The AI Act ensures that Europeans can have confidence in AI and its benefits. While the majority of AI systems present little or no risk and can help solve many societal challenges, some systems pose significant risks that need to be managed to avoid undesirable consequences.

What does the AI Act provide? 

The AI Act takes a risk-based approach, classifying AI systems into four levels:

  • Unacceptable;
  • High;
  • Limited;
  • Minimal or none.

Unacceptable risk: 

Some AI systems are banned outright because they pose clear threats to people's safety, livelihoods and rights: for example, systems that manipulate human behavior to cause harm, or that exploit the vulnerabilities of specific groups, such as children.

High risk: 

High-risk AI systems are those that can affect people's safety or fundamental rights, justifying strict requirements for their development and deployment. Areas concerned include critical infrastructure, education, employment, essential public services, law enforcement and migration management. Specific examples:

  • Critical infrastructure: AI in transport that could put citizens' lives at risk.
  • Education and training: exam scoring that can determine access to education.
  • Product safety: AI in robot-assisted surgery.
  • Employment: CV sorting software.
  • Essential services: credit rating affecting access to loans.
  • Law enforcement: evaluation of evidence, which may affect fundamental rights.
  • Migration management: automated examination of visa applications.
  • Justice and democracy: AI systems used to search court rulings.

Requirements for these systems include:

  • Risk assessment and mitigation.
  • Data quality to minimize bias.
  • Activity traceability.
  • Detailed documentation for authorities.
  • Clear information for users.
  • Appropriate human supervision.
  • High robustness, safety and accuracy.

In principle, real-time remote biometric identification in publicly accessible spaces is prohibited, with narrow exceptions (e.g. searching for a missing child or preventing a terrorist threat), subject to judicial authorization and appropriate limits.

[Figure: the AI Act's risk pyramid]

Limited risk

Limited-risk AI systems are subject to transparency obligations to ensure user trust and understanding. For example, AI systems that interact with humans, such as chatbots, must inform users that they are communicating with a machine, enabling users to make informed decisions.

Specific obligations include:

  • Clear identification: users need to know that they are interacting with an AI.
  • AI-generated content: AI-generated text, audio and video, especially content informing the public on matters of general interest, must be labeled as artificially generated.
  • Fight against deepfakes: audio and video content manipulated by AI must be clearly identified to prevent confusion and abuse.

These measures are designed to increase transparency and foster public confidence in the use of AI.
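
To make the "clear identification" obligation concrete, here is a minimal sketch of how a chatbot backend might disclose its artificial nature and label generated content. The disclosure wording and the names (`AI_DISCLOSURE`, `BotReply`, `first_reply`) are our own illustrative choices, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass

# Illustrative wording only: the AI Act requires disclosure but does not
# prescribe an exact sentence or metadata schema.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

@dataclass
class BotReply:
    text: str
    ai_generated: bool  # machine-readable label for downstream display or audits

def first_reply(generated_text: str) -> BotReply:
    """Prepend the disclosure to the opening message of a conversation."""
    return BotReply(text=f"{AI_DISCLOSURE}\n\n{generated_text}", ai_generated=True)
```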

Minimal risk 

For minimal-risk AI systems, the AI Act imposes no specific restrictions. These applications, such as AI-powered video games or spam filters, are widely used without strict oversight. However, providers are encouraged to adopt voluntary codes of conduct to promote responsible practices.

The majority of AI systems currently in use in the EU fall into this category, allowing free use and promoting innovation without red tape.

General-purpose AI models

The AI Act also provides a framework for general-purpose AI models, such as the large language models (LLMs) offered by companies like Mistral AI or OpenAI. Because these models can be used for a wide variety of tasks, they do not fit neatly into the risk categories above.

For these models, the AI Act imposes tiered obligations, ranging from minimal transparency duties to in-depth assessment and mitigation of systemic risks such as major accidents, cyber-attacks and discriminatory bias.

Different requirements

The AI Act establishes different levels of requirements in terms of transparency, security and oversight, mainly based on the classification of AI systems according to their level of risk. Here is an overview of the main requirements in these areas:

Transparency

Transparency requirements vary according to the level of risk:

  • High-risk systems: These must be designed and developed so that their operation is sufficiently transparent for users to interpret and use their output correctly. They must be accompanied by detailed operating instructions describing their characteristics, capabilities and performance limits.
  • Limited-risk systems: These are subject to lighter transparency obligations. Developers and deployers must ensure that end users are aware that they are interacting with AI, as in the case of chatbots and deepfakes.
  • Minimal-risk systems: These are generally not subject to specific transparency requirements.

Security and cybersecurity

Security requirements are particularly stringent for high-risk systems:

  • Resilience: high-risk systems must be resistant to manipulation and cyber-attacks.
  • Robustness: they must achieve an appropriate level of accuracy, robustness and cybersecurity.
  • Cybersecurity protocols: providers of high-risk systems must implement formal cybersecurity protocols and measures.

Monitoring

Human oversight is a key element of the AI Act, especially for high-risk systems:

  • Human oversight: high-risk systems must be designed so that people can effectively oversee them while they are in use.
  • Risk mitigation measures: risk mitigation measures must be put in place for high-risk systems.
  • Ongoing monitoring: providers of high-risk systems must set up a post-market monitoring system to collect, document and analyze relevant data on the performance of these systems throughout their life cycle (a minimal sketch of such a record follows this list).
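
Purely as an illustration of what a post-market monitoring record might look like, here is a minimal Python sketch. The AI Act requires providers of high-risk systems to collect and analyze performance data, but it does not mandate any particular schema; the field names, the JSONL file format and the `log_decision` helper are assumptions.

```python
import hashlib
import json
import time

def log_decision(model_version: str, user_input: str, output: str,
                 confidence: float) -> dict:
    """Append one decision record to an append-only JSONL monitoring log."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the raw input so the log supports later audits without
        # retaining the customer's personal data verbatim.
        "input_hash": hashlib.sha256(user_input.encode()).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    with open("monitoring_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```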

It is important to note that these requirements are more stringent for high-risk systems, while minimal-risk systems are largely exempt from specific regulation. The overall aim is to ensure that AI is used safely, transparently and ethically, while fostering innovation in the field.

Impact on Customer Service

To understand the impact of this law on customer service, consider chatbots and virtual assistants, which are generally classified as minimal or limited risk. Companies will have to address:

  • Transparency: clearly inform users that they are interacting with an AI.
  • Data accuracy and verification: set up verification and training processes for the AI.
  • Algorithmic bias: use diversified data sets and perform regular audits.
  • Data protection: ensure compliance with the GDPR (one possible redaction step is sketched after this list).
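
As one small illustration of the data-protection point, a team might redact obvious personal identifiers from a customer message before it is sent to a model or written to logs. This is a simplified sketch, not a complete GDPR compliance measure; the regex patterns and the `redact_pii` name are our own assumptions.

```python
import re

# Simplified patterns: real-world PII detection needs far more coverage
# (names, addresses, order numbers, ...). These are illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(message: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    message = EMAIL.sub("[EMAIL]", message)
    message = PHONE.sub("[PHONE]", message)
    return message

print(redact_pii("Write to jane@example.com or call +33 6 12 34 56 78."))
# -> "Write to [EMAIL] or call [PHONE]."
```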

If these systems can make significant decisions, such as granting refunds, they could be reclassified as high risk. Companies will then have to provide:

  • Human supervision: integrate meaningful human oversight into decision-making (see the routing sketch below).
  • Transparency and explainability: inform users about the use of AI and explain the basis for decisions.
  • Recourse and non-discrimination: enable customers to challenge decisions and ensure fairness without discrimination.
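
A minimal sketch of what that human oversight could look like in a refund flow is shown below. The threshold values and the `route_refund` name are illustrative assumptions on our part, not figures taken from the AI Act.

```python
# Hypothetical routing rule: automated refund decisions above a certain amount,
# or made with low model confidence, are escalated to a human agent who can
# review and override the AI. Thresholds are illustrative, not regulatory.
CONFIDENCE_FLOOR = 0.85
MAX_AUTO_REFUND_EUR = 50.0

def route_refund(amount_eur: float, model_confidence: float) -> str:
    if amount_eur > MAX_AUTO_REFUND_EUR or model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"  # a person makes the final call
    return "auto_approve"           # low-stakes case handled automatically
```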


Impact on Klark

For Klark, the AI Act is both a challenge and an opportunity. We already build GDPR best practices into our products, and we will adopt the AI Act as a mandatory framework for our future features. Our chatbot meets human-oversight requirements by offering escalation to a human agent when needed, and we make clear that it is "powered by Klark". By anticipating these constraints, we reinforce user confidence and avoid costly compliance retrofits.

Towards an ethical and secure future for AI

The AI Act represents a major step forward in the regulation of artificial intelligence in Europe, establishing a clear framework to guarantee the safety, ethics and transparency of AI systems. By classifying AI systems according to their level of risk, this legislation offers greater protection for users while stimulating innovation and confidence in AI technologies.

For Klark, the AI Act is an opportunity to demonstrate our commitment to responsible and transparent practices. By integrating these regulatory requirements into our products from the outset, we are building customer confidence and ensuring the compliance of our AI solutions.

Ultimately, the AI Act aims to create an environment where AI can thrive safely, benefiting society as a whole. By adopting these new rules, companies like Klark can not only comply with the legislation, but also lead the industry towards a more ethical and trustworthy future.
