Over the next two days, AI Summit Paris will shine the spotlight on a French AI scene that's more dynamic than ever.
That's something to be proud of! But above all, we need to keep rolling up our sleeves. 💪
On the program: real-life use cases, today's and tomorrow's technologies... and Agentic RAG will come up often in discussions.
This article is here to help you brush up just before the event, so you'll be in the know tomorrow. 😉
Enjoy your reading!
Retrieval-Augmented Generation (RAG) is a technique that combines the retrieval of relevant information with text generation.
The idea is (very) simple: rather than asking a model to answer a question based solely on its general knowledge, it is provided with specific, relevant documents to enrich its answer.
From Klark's earliest days, we've used this technology to draft emails to customer service agents. We have put this technology at the heart of our R&D to ensure that every customer question is matched with the best possible knowledge.
Agentic RAG takes this logic a step further.
Rather than following a rigid process, it acts like a chef 🧑‍🍳 who doesn't blindly follow a recipe (or a piece of knowledge): it tastes along the way, adjusts the ingredients and adapts the recipe to the context.
Unlike a conventional RAG, which retrieves data and generates an answer in a single pass, an Agentic RAG is capable of running several improvement cycles, testing its own answers and refining them before delivering them.
More concretely, an Agentic RAG is based on an LLM interacting continuously with several specialized modules: it can reformulate the initial request, enrich the context by integrating other sources of knowledge and, finally, evaluate the quality of its own answers before displaying them.
This architecture enables dynamic optimization of the generation process, and greater consistency in the answers produced.
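To make this loop concrete, here is a minimal sketch in Python. Every helper (`reformulate`, `retrieve`, `generate`, `evaluate`, `refine`) is a hypothetical stub standing in for a real LLM or vector-store call; only the control flow illustrates the idea:

```python
MAX_CYCLES = 3

def reformulate(query: str) -> str:
    """Stub: an LLM would rephrase the request for better retrieval."""
    return query.strip().lower()

def retrieve(query: str, knowledge: dict) -> list:
    """Stub: a vector store would return the most relevant documents."""
    return [doc for key, doc in knowledge.items() if key in query]

def generate(query: str, context: list) -> str:
    """Stub: an LLM would draft an answer from the retrieved context."""
    return " ".join(context) if context else "I don't know."

def evaluate(answer: str) -> bool:
    """Stub: an LLM-as-judge would score the draft answer."""
    return answer != "I don't know."

def refine(query: str) -> str:
    """Stub: an LLM would broaden the failed query (hardcoded here)."""
    return query + " refund policy"

def agentic_rag(query: str, knowledge: dict) -> str:
    answer = "I don't know."
    for _ in range(MAX_CYCLES):
        query = reformulate(query)            # 1. rework the request
        context = retrieve(query, knowledge)  # 2. enrich the context
        answer = generate(query, context)     # 3. draft an answer
        if evaluate(answer):                  # 4. self-check before replying
            break
        query = refine(query)                 # otherwise, run another cycle
    return answer

kb = {"refund": "Refunds are processed within 14 days."}
print(agentic_rag("How do I get my money back?", kb))
# Refunds are processed within 14 days.
```

A conventional RAG would stop after one pass and answer "I don't know."; the agentic loop reformulates and retries until its self-check passes or the cycle budget runs out.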
Some would say that we're just "marketing" a simple while loop or if condition with a trendy label. In reality, Agentic RAG, when properly implemented, introduces real flexibility and adaptability, enabling dynamic reasoning rather than a fixed sequence of steps.
So we move from a deterministic model to a more creative approach where the LLM agent has more freedom to refine its responses. But beware: less determinism means potentially more risk and less control.
👉 Finally, even Agentic RAG can't work miracles if the input data is bad.
Shit in, shit out remains the golden rule.
Theory is important, but nothing beats practice to really understand 😅.
Here, then, are three Customer Service use cases that show how Agentic RAG can be put to practical use:
🔹 Objective: Determine whether a customer response requires specific contextual information (e.g. delivery status, warranty end date).
🔹 Implementation logic:
Thanks to this AI agent, we can now handle questions beyond Level 1, adapting to each customer's situation.
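One way to sketch this routing step: a classifier decides which external data sources, if any, must be queried before answering. The trigger keywords and source names below are invented for the example; a production system would use an LLM call instead of keyword matching:

```python
# Hypothetical mapping from topics to the data source needed to answer them.
CONTEXT_TRIGGERS = {
    "delivery": "order_tracking_api",
    "warranty": "crm_warranty_field",
    "refund": "payment_system",
}

def needs_context(question: str) -> list:
    """Return the external data sources required to answer, if any."""
    q = question.lower()
    return [src for keyword, src in CONTEXT_TRIGGERS.items() if keyword in q]

def route(question: str) -> str:
    """Decide whether to fetch customer-specific context first."""
    sources = needs_context(question)
    if sources:
        return "fetch from " + ", ".join(sources) + " before answering"
    return "answer directly from the knowledge base"

print(route("Where is my delivery?"))
# fetch from order_tracking_api before answering
print(route("What are your opening hours?"))
# answer directly from the knowledge base
```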
🔹 Objective: How to ensure that a generated response is relevant before sending it to a customer?
🔹 Implementation logic:
This allows the AI to take a step back and reflect on its final answer, limiting inconsistencies.
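The self-check step can be sketched as a gate between drafting and sending. The rules below are hypothetical stand-ins; a real system would use an LLM-as-judge to score grounding and tone:

```python
def check_answer(answer: str, key_facts: list) -> list:
    """Return a list of problems found in the draft answer (empty = OK)."""
    issues = []
    if not answer.strip():
        issues.append("empty answer")
    missing = [fact for fact in key_facts if fact not in answer]
    if missing:
        issues.append(f"missing facts from sources: {missing}")
    if "maybe" in answer.lower():
        issues.append("hedged wording unsuitable for customer replies")
    return issues

def send_or_escalate(answer: str, key_facts: list) -> str:
    """Send only if the draft passes every check; otherwise escalate."""
    issues = check_answer(answer, key_facts)
    return "send" if not issues else "escalate: " + "; ".join(issues)

print(send_or_escalate("Refunds are processed within 14 days.", ["14 days"]))
# send
print(send_or_escalate("Maybe next week?", ["14 days"]))
```

The point is the extra pass itself: the draft never reaches the customer without being re-examined against the retrieved sources.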
🔹 Objective: Maintain an up-to-date knowledge base without disproportionate effort.
🔹 Implementation logic:
Such an agent saves support teams an enormous amount of time by drastically simplifying knowledge maintenance, while building confidence in the base itself.
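A maintenance agent of this kind might periodically audit each article, flagging entries that are stale or that seem contradicted by recent tickets. The `contradicts` heuristic below is a crude placeholder; a real agent would ask an LLM to compare the two texts:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=180)  # hypothetical review window

def contradicts(article_text: str, ticket: str) -> bool:
    """Placeholder: a real agent would compare both texts with an LLM."""
    shared = set(article_text.lower().split()) & set(ticket.lower().split())
    return "no longer" in ticket.lower() and len(shared) >= 2

def audit(article: dict, today: date, tickets: list) -> list:
    """Return maintenance flags for one knowledge-base article."""
    flags = []
    if today - article["last_reviewed"] > STALE_AFTER:
        flags.append("stale: needs review")
    for ticket in tickets:
        if contradicts(article["text"], ticket):
            flags.append("possible contradiction: " + ticket)
    return flags

article = {"text": "Shipping to Belgium is free.",
           "last_reviewed": date(2024, 1, 10)}
tickets = ["An agent said shipping to Belgium is no longer free."]
print(audit(article, date(2025, 2, 10), tickets))
```

Running the audit on a schedule turns knowledge maintenance from a manual chore into a review queue of flagged articles.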
The future of Agentic RAG may lie in better structuring of knowledge via Knowledge Graphs (stored, for example, in graph databases such as Amazon Neptune). These graphs represent and organize information in an interconnected way, enabling a more global analysis.
Rather than comparing documents individually, we could reason at a more global level, establishing connections between several sources of information.
Currently, Agentic RAG merely compares isolated elements, as if examining the leaves of a tree one by one. By integrating a Knowledge Graph, it could analyze interconnected sets of information, from FAQs to customer tickets to transactional databases, like a data forest.
This multi-dimensional approach paves the way for global reasoning, further enhancing the relevance of answers.
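As a toy illustration of the difference, graph-based retrieval can follow links between knowledge items instead of scoring documents one by one. The node names and edges below are invented for the example:

```python
from collections import deque

# Hypothetical links between an FAQ entry, a ticket, a policy and an order.
GRAPH = {
    "faq:refunds": ["ticket:1042", "policy:returns"],
    "ticket:1042": ["order:A17"],
    "policy:returns": [],
    "order:A17": [],
}

def related(start: str, max_hops: int = 2) -> set:
    """Breadth-first walk collecting every node within max_hops links."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop budget
        for neighbor in GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return seen

print(sorted(related("faq:refunds")))
# ['faq:refunds', 'order:A17', 'policy:returns', 'ticket:1042']
```

Starting from a single FAQ entry, the walk surfaces the related ticket, policy and order in one traversal, which is exactly the cross-source connection that isolated document comparison misses.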
Agentic RAG brings a significant improvement in the way AI interacts with knowledge. For customer service, such AI agents are a powerful lever for automating, enriching and validating responses more intelligently.
However, as always, AI is only a tool, and it's its concrete implementation that will make the difference. So, before implementing such a solution, it's crucial to ask: What are the right use cases where an AI agent could bring real added value?