Agentic RAG: what concrete impact for Customer Service?

Nicolas Pellissier
Published on
8/2/2025

Over the next two days, AI Summit Paris will shine the spotlight on a French AI scene that's more dynamic than ever.
That's something to be proud of! But above all, we need to keep rolling up our sleeves. 💪

On the program: real-life use cases, today's and tomorrow's technologies... and Agentic RAG will come up often in discussions.

This article is here to help you brush up just before the event, so you'll be in the know tomorrow. 😉

Enjoy your reading!

🔄 Back to Basics: What is RAG again?

Retrieval-Augmented Generation (RAG) is a technique that combines the retrieval of relevant information with text generation. 

The idea is (very) simple: rather than asking a model to answer a question based solely on its general knowledge, it is provided with specific, relevant documents to enrich its answer.

📌 For example, here's a classic RAG process:

  1. A user asks a question.
  2. AI searches for relevant information in a database.
  3. This information is injected into the request.
  4. The LLM generates a response enriched with this data.
RAG operating diagram
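The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: retrieval is a naive word-overlap ranking over an in-memory list, and `fake_llm` is a placeholder standing in for any real model call.

```python
def retrieve(question, documents, top_k=2):
    """Step 2: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, context_docs):
    """Step 3: inject the retrieved documents into the request."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def fake_llm(prompt):
    """Step 4: placeholder for a real LLM call."""
    return f"(answer grounded in {prompt.count('- ')} documents)"

docs = [
    "Orders ship within 48 hours of payment confirmation.",
    "Returns are accepted within 30 days of delivery.",
    "The warranty covers manufacturing defects for 2 years.",
]
question = "How long is the warranty on my product?"
answer = fake_llm(build_prompt(question, retrieve(question, docs)))
print(answer)  # → "(answer grounded in 2 documents)"
```

A real system would swap the overlap ranking for vector similarity search and `fake_llm` for an actual model, but the single retrieve-then-generate pass is the same.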


From Klark's earliest days, we've used this technology to draft email replies for customer service agents. We've put it at the heart of our R&D to ensure that every customer question is matched with the best possible knowledge.

🏗️ Agentic RAG: Agents take to the stage!

Agentic RAG takes this logic a step further.
Rather than following a rigid process, it acts like a chef 🧑‍🍳 who doesn't blindly follow a recipe (or a piece of knowledge): it tastes along the way, adjusts the ingredients and adapts the recipe to the context.

RAG as a master chef

Unlike a conventional RAG, which retrieves data and generates an answer in a single pass, an Agentic RAG is capable of running several improvement cycles, testing its own answers and refining them before delivering them.

More concretely, an Agentic RAG is based on an LLM interacting continuously with several specialized modules: it can reformulate the initial request, enrich the context by integrating other sources of knowledge and, finally, evaluate the quality of its own answers before displaying them. 

This architecture enables dynamic optimization of the generation process, and greater consistency in the answers produced.
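The improvement cycle described here can be sketched as a simple loop. All the function names below (`rag_answer`, `judge_quality`, `reformulate`) are illustrative stand-ins for the specialized modules, not a real API; the quality score is a toy heuristic.

```python
def rag_answer(query):
    """Stand-in for a single classic RAG pass."""
    return f"answer to: {query}"

def judge_quality(answer):
    """Stand-in for self-evaluation; here, longer answers score higher."""
    return min(len(answer) / 50, 1.0)

def reformulate(query, attempt):
    """Stand-in for query rewriting that enriches the context."""
    return f"{query} (refined, attempt {attempt}, with order history)"

def agentic_rag(query, threshold=0.8, max_cycles=3):
    """Run retrieve-generate-evaluate cycles until the answer is good enough."""
    answer = rag_answer(query)
    for attempt in range(1, max_cycles + 1):
        if judge_quality(answer) >= threshold:
            break  # the agent is satisfied: stop iterating
        query = reformulate(query, attempt)
        answer = rag_answer(query)
    return answer

print(agentic_rag("Where is my parcel?"))
```

The key difference from a classic RAG is the loop plus the self-evaluation step: the agent decides for itself when an answer is ready to be delivered.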

Some would say we're just dressing up a simple while loop or if condition with a trendy concept. In reality, a properly implemented Agentic RAG introduces more flexibility and adaptability, enabling dynamic reasoning rather than a fixed sequence of steps.

So we move from a deterministic model to a more creative approach where the LLM agent has more freedom to refine its responses. But beware: less determinism means potentially more risk and less control.

👉 One caveat: even an Agentic RAG can't work miracles if the input data is bad.
Shit in, shit out remains the golden rule.


📞 Case studies in Customer Service

Theory is important, but nothing beats practice to really understand 😅.
Here, then, are three Customer Service use cases that show how Agentic RAG can be put to practical use:

1️⃣ Contextual data management 

🔹 Objective: Determine whether a customer response requires specific contextual information (e.g. delivery status, warranty end date). 

🔹 Implementation logic:

  • The agent receives the question and retrieves the answer from the RAG.
  • It analyzes whether additional context is required.
  • It then retrieves the missing data, adjusts the query and resubmits it to the RAG.
  • The RAG returns a response that takes into account the customer context. 
  • The agent can iterate until it is satisfied with the final answer.
Diagram of an Agentic RAG
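The context-enrichment steps above can be sketched as follows. The trigger keywords and the customer record are illustrative: in practice the "needs more context?" decision would itself be an LLM call, and the lookups would hit real back-office systems.

```python
# Hypothetical context sources, keyed by the topic that triggers them.
CONTEXT_TRIGGERS = {
    "delivery": lambda customer: f"delivery status: {customer['delivery']}",
    "warranty": lambda customer: f"warranty ends: {customer['warranty_end']}",
}

def needs_context(question):
    """Step 2: detect which contextual data the question requires."""
    return [key for key in CONTEXT_TRIGGERS if key in question.lower()]

def answer_with_context(question, customer):
    """Steps 3-5: fetch missing data, enrich the query, resubmit to the RAG."""
    query = question
    for key in needs_context(question):
        query += f"\n[context] {CONTEXT_TRIGGERS[key](customer)}"
    return f"RAG answer for: {query}"  # stand-in for the actual RAG call

customer = {"delivery": "in transit", "warranty_end": "2026-03-01"}
print(answer_with_context("When will my delivery arrive?", customer))
```

The point is the round trip: the agent notices that the generic knowledge base can't answer alone, pulls the customer-specific data, and only then queries the RAG.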

Thanks to this AI agent, we can now handle questions beyond Level 1, adapting to each customer's situation.

2️⃣ Automatic QA: an "LLM as a Judge" to validate response quality

🔹 Objective: Ensure that a generated response is relevant before it is sent to a customer.

🔹 Implementation logic:

  • A "judge" LLM evaluates the generated response.
  • If the response is judged poor, the judge suggests ways to improve it.
  • Another LLM reformulates the initial query, incorporating these suggestions.
  • The reformulated query is sent back to the RAG for a better response.
  • The judge then evaluates this new response.

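A minimal sketch of that judge loop, assuming three stubbed model calls (`rag`, `judge`, `improve`), with a deterministic toy criterion in place of a real quality rubric:

```python
def rag(query):
    """Stand-in for the RAG generation call."""
    return f"draft answer [{query}]"

def judge(answer):
    """Stand-in judge: return (verdict, suggestion).
    Toy rule: reject any answer that lacks the word 'polite'."""
    if "polite" in answer:
        return True, None
    return False, "add a polite greeting"

def improve(query, suggestion):
    """Stand-in for the reformulating LLM."""
    return f"{query} + {suggestion} -> polite"

def validated_answer(query, max_rounds=3):
    """Generate, judge, and reformulate until the judge approves."""
    answer = rag(query)
    for _ in range(max_rounds):
        ok, suggestion = judge(answer)
        if ok:
            return answer
        query = improve(query, suggestion)
        answer = rag(query)
    return answer

print(validated_answer("refund request"))
```

A bounded `max_rounds` matters in practice: without it, a judge that never approves would loop forever (and burn tokens).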
This allows the AI to take a step back and reflect on its final answer, limiting inconsistencies.

3️⃣ Automatic FAQ updates

🔹 Objective: Maintain an up-to-date knowledge base without disproportionate effort.

🔹 Implementation logic:

  • An agent analyzes all existing FAQs and detects inconsistencies.
  • It lists the articles in "error" and the passages concerned, with an explanation.
  • A second agent reformulates the articles concerned.
  • To check that the new articles are consistent with the entire FAQ, the first agent runs again.
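The detect-rewrite-recheck cycle can be sketched with two toy agents. The reference fact, the FAQ entries and the naive substring check are all illustrative; a real detector would be an LLM comparing articles against each other.

```python
# A single reference fact the FAQ must agree with (illustrative).
REFERENCE = {"return window": "30 days"}

def detect_inconsistencies(faq):
    """Agent 1: list (article_id, explanation) for articles in error."""
    errors = []
    for article_id, text in faq.items():
        for fact, value in REFERENCE.items():
            if fact in text and value not in text:
                errors.append((article_id, f"'{fact}' should mention {value}"))
    return errors

def rewrite(faq, errors):
    """Agent 2: reformulate the flagged articles (toy rewrite)."""
    fixed = dict(faq)
    for article_id, _ in errors:
        fixed[article_id] = f"The return window is {REFERENCE['return window']}."
    return fixed

faq = {
    "returns": "Our return window is 14 days.",
    "shipping": "Orders ship within 48 hours.",
}
errors = detect_inconsistencies(faq)   # flags the outdated "returns" article
faq = rewrite(faq, errors)
assert detect_inconsistencies(faq) == []  # the re-check pass finds no errors
```

The final assertion is the "first agent starts again" step: the loop only terminates once the detector is satisfied with the whole FAQ.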

Such an agent saves support teams an enormous amount of time by drastically simplifying knowledge maintenance, while building confidence in the knowledge base.

🌳 And tomorrow? Towards an even "deeper" Agentic RAG?

The future of Agentic RAG may lie in better structuring of knowledge via Knowledge Graphs (stored in graph databases such as Amazon Neptune). These graphs represent and organize information in an interconnected way, facilitating a more global analysis.
Rather than comparing documents individually, we could reason at a more global level, establishing connections between several sources of information.

Currently, Agentic RAG is content to compare isolated elements, as if examining the leaves of a tree one by one. By integrating a Knowledge Graph, it could analyze interconnected sets of information - from FAQs to customer tickets to transactional databases - like a data forest.
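To make the "forest" image concrete, here is a toy knowledge graph linking a ticket, an order and an FAQ article, with a breadth-first walk that collects everything connected to a starting node. The node names and structure are illustrative, not a real Neptune schema.

```python
from collections import deque

# Toy graph: each node lists the nodes it is connected to.
GRAPH = {
    "faq:returns": ["policy:return-window"],
    "ticket:4812": ["order:A-77", "faq:returns"],
    "order:A-77": ["policy:return-window"],
    "policy:return-window": [],
}

def related_nodes(start, graph):
    """Breadth-first walk: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(related_nodes("ticket:4812", GRAPH))
```

Starting from a single customer ticket, the walk surfaces the related order, the relevant FAQ article and the underlying policy in one pass: exactly the cross-source reasoning that comparing documents one by one cannot do.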

This multi-dimensional approach paves the way for global reasoning, further enhancing the relevance of answers.

Knowledge Graph and Tree

🎯 In conclusion

Agentic RAG brings a significant improvement in how AI interacts with knowledge. For customer service, such AI agents are a powerful lever for automating, enriching and validating responses more intelligently.

However, as always, AI is only a tool, and it's its concrete implementation that will make the difference. So, before implementing such a solution, it's crucial to ask: What are the right use cases where an AI agent could bring real added value?
