Knowledge Graphs improve GenAI

Joakim Nilsson
5 min read · Jun 20, 2024


— validating results builds trust for organizations

Generative AI can make recommendations that will transform decision-making for organizations — but how can people trust the answers GenAI provides? Knowledge graphs can play a vital role in ensuring the accuracy of GenAI’s output, bolstering its reliability and effectiveness.

In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, a supercomputer called Deep Thought is asked for the answer to “Life, the universe, and everything.” After 7.5 million years, Deep Thought responds “42.” Representatives from the civilization that built Deep Thought immediately ask how it arrived at the answer, but the computer cannot tell them. When Adams wrote this scene in the 1970s, he was (arguably) making a joke — but today, many people find themselves in this situation when interacting with generative AI (GenAI).

GenAI works by drawing upon millions of pieces of data — a volume that’s impossible for humans to effectively analyze. Businesses are excited by its potential to deliver valuable insights and make well-informed predictions — but if different GenAI tools are asked the same question and give different answers, how could an organization decide which result is more correct? How would a person fact-check the responses?

Addressing the shortcomings of unstructured, implicit data

The challenge relates to the large language models GenAI relies upon. An LLM can contain massive amounts of data, but it’s commonly stored in an unstructured, implicit manner. This makes it difficult to investigate how a GenAI tool arrived at its answer.

Since the release of ChatGPT in late 2022, Neo4j and Capgemini have been working, both independently and in collaboration, to overcome this challenge by using knowledge graphs, which store complex, structured data and the relationships between data points. Instead of relying solely on LLMs to directly generate database queries, our solution incorporates a high-level interface that allows the LLM to interact seamlessly with a knowledge graph via database query templates. These templates serve as structured frameworks, guiding the LLM to fill in specific parameters based on the user’s request.

This simplifies the task for the LLM by abstracting away complex logic. (See Figure 1).

Figure 1

This separation of concerns ensures the LLM focuses on natural language understanding and generation, while the query templates handle the technical aspects of database interaction – improving the overall accuracy and efficiency of retrieval.
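The template idea above can be sketched in a few lines of Python: a fixed Cypher query with `$`-parameters, where the LLM only supplies parameter values and never generates query logic. The template, schema labels, and function names here are illustrative assumptions, not the article's actual implementation.

```python
import re

# A fixed Cypher query with $-parameters. The LLM fills in the values;
# the query logic itself never changes. (Schema is illustrative.)
MOVIE_INFO_TEMPLATE = """
MATCH (m:Movie {title: $title})<-[:ACTED_IN]-(a:Person)
RETURN m.title AS title, m.released AS released,
       collect(a.name) AS actors
"""

def extract_placeholders(template: str) -> set:
    """Find the $-prefixed parameters a template expects."""
    return set(re.findall(r"\$(\w+)", template))

def fill_template(template: str, params: dict) -> dict:
    """Pair the fixed query with LLM-extracted parameter values."""
    missing = extract_placeholders(template) - set(params)
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return {"query": template, "params": params}
```

Because the Cypher stays fixed, the LLM's job shrinks to entity extraction ("which movie is the user asking about?"), which is exactly the natural-language task it is good at.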

In this example, the query template uses a vector search to locate nodes in the knowledge graph that correspond to the entities in the user’s question. The matched nodes are then used to retrieve their neighborhoods, or the shortest paths between them, within the graph. This contextualizes the retrieved information and supports a more comprehensive answer to the user’s query. More information about this specific query template is available in this blog post: https://blog.langchain.dev/enhancing-rag-based-applications-accuracy-by-constructing-and-leveraging-knowledge-graphs/
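The retrieval step can be illustrated with a toy in-memory version: embed the question, rank graph nodes by cosine similarity, then expand the top matches to their immediate neighborhoods. The toy graph and 2-d embeddings are stand-ins; a real system would use Neo4j's vector index and a proper embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# node -> embedding (toy 2-d vectors standing in for a vector index)
embeddings = {
    "Matrix": [1.0, 0.1],
    "Inception": [0.9, 0.3],
    "Titanic": [0.1, 1.0],
}

# adjacency list standing in for the knowledge graph
edges = {
    "Matrix": ["Keanu Reeves", "Sci-Fi"],
    "Inception": ["Leonardo DiCaprio", "Sci-Fi"],
    "Titanic": ["Leonardo DiCaprio", "Romance"],
}

def retrieve(question_vec, k=1):
    """Vector search for the top-k nodes, then return their neighborhoods."""
    ranked = sorted(embeddings,
                    key=lambda n: cosine(question_vec, embeddings[n]),
                    reverse=True)
    return {node: edges[node] for node in ranked[:k]}
```

The neighborhood expansion is what distinguishes this from plain vector search: instead of returning isolated matches, the answer carries the surrounding graph context.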

Tailored templates

Query templates can be tailored to discrete domains such as finding dependencies within supply chains or executing aggregation operations for business intelligence purposes, enabling organizations to address specific challenges. This more targeted approach best leverages the LLM’s capabilities to generate insights by ensuring they are not only relevant but deeply informed by the underlying data structures, helping enterprises to efficiently transform their raw data into actionable intelligence.
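As a hypothetical sketch of such domain-tailored templates, the registry below pairs one template for tracing supply-chain dependencies with one aggregation template for business intelligence. Template names and schema labels are assumptions for illustration, not from the article.

```python
# Registry of domain-specific query templates (illustrative schema).
TEMPLATES = {
    # Trace up to 5 levels of component dependencies for a product.
    "supply_chain_dependencies": """
        MATCH path = (p:Product {sku: $sku})-[:DEPENDS_ON*1..5]->(c:Component)
        RETURN [n IN nodes(path) | n.name] AS dependency_chain
    """,
    # Aggregate order totals per region since a given date.
    "revenue_by_region": """
        MATCH (o:Order)-[:SHIPPED_TO]->(r:Region)
        WHERE o.date >= $since
        RETURN r.name AS region, sum(o.total) AS revenue
        ORDER BY revenue DESC
    """,
}

def select_template(task: str) -> str:
    """Look up the fixed query for a given task name."""
    if task not in TEMPLATES:
        raise KeyError(f"no template for task: {task}")
    return TEMPLATES[task]
```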

That said, the complexity of business requirements often exceeds what a single query template can accommodate when an LLM interfaces with a Knowledge Graph. Therefore, it’s essential to embrace an adaptive approach, providing a rich assortment of query templates that can be selectively deployed to match specific business scenarios. Leveraging the LLM’s capability to invoke functions, GenAI can dynamically select and employ multiple query templates based on the context of the user’s request or the specific task at hand. This results in a more nuanced and flexible interaction with the database, and significantly amplifies the LLM’s ability to solve intricate business intelligence and analytics problems. (See Figure 2).

Figure 2

This LLM-powered movie agent uses several tools, orchestrated through carefully designed query templates, to interact with the Knowledge Graph.

  • The information tool retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
  • The recommendation tool provides movie recommendations based on user preferences and input.
  • The memory tool stores information about user preferences in the Knowledge Graph, allowing for a personalized experience over multiple interactions.
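The tool orchestration above can be sketched as function-calling dispatch: the LLM emits a tool name plus arguments, and the agent routes the call to the matching query template. The three tool names follow the movie-agent example; the schemas and routing logic are illustrative assumptions.

```python
def information_tool(title: str):
    """Look up a movie by title."""
    return "MATCH (m:Movie {title: $title}) RETURN m", {"title": title}

def recommendation_tool(genre: str):
    """Recommend movies in a given genre."""
    return ("MATCH (m:Movie)-[:IN_GENRE]->(:Genre {name: $genre}) "
            "RETURN m.title LIMIT 5", {"genre": genre})

def memory_tool(user: str, liked: str):
    """Store a user preference in the graph for later personalization."""
    return ("MERGE (u:User {id: $user}) MERGE (m:Movie {title: $liked}) "
            "MERGE (u)-[:LIKES]->(m)", {"user": user, "liked": liked})

TOOLS = {
    "information": information_tool,
    "recommendation": recommendation_tool,
    "memory": memory_tool,
}

def dispatch(call: dict):
    """Route an LLM function call {'name': ..., 'args': {...}} to a tool."""
    tool = TOOLS.get(call["name"])
    if tool is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return tool(**call["args"])
```

Each tool returns a parameterized query rather than interpolating user input into the Cypher string, so the database driver handles escaping.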

Democratizing data and empowering business users

The Knowledge Graph acts as a bridge, translating user intent into specific, actionable queries the LLM can execute with increased accuracy and reliability. By allowing any user — regardless of technical knowledge — to inspect how the LLM arrived at its answers, people can validate the information sources themselves. Benefits include:

  • Results that are explainable, repeatable, and transparent. This can enhance trust in GenAI in everything from research and discovery in life sciences to digital twins in sectors such as manufacturing, aerospace, and telecommunications.
  • Better-informed and better-trusted business decisions.
  • Freed up time for experts such as prompt engineers to concentrate on tasks that require their specialized skills.

As we look ahead, we expect knowledge graphs to help large language models embrace iterative processes to improve their output. Our enthusiasm is shared by other experts in the field, including Andrew Ng at DeepLearning.AI, underscoring the widespread recognition of their transformative capabilities. As we help create the future, it’s clear the journey with these intelligent systems is only just beginning — and is moving much faster than Deep Thought ever did — so it’s critical that people are given the means to fact-check generative AI as it evolves.

Innovation takeaways

Trust is important: Knowledge graphs can boost confidence in the output from GenAI systems — making it easier for people and organizations to embrace them.

Tools for the tool: With knowledge graphs, large language models can dynamically employ multiple query templates to match specific business scenarios, making interactions with GenAI more nuanced.

Democratizing data: By making it easier for everyone in an organization to interact with generative AI, knowledge graphs can free up experts to focus on tasks that require their specific skills.

Source

This article was originally published in the Data-Powered Innovation Review, 8th edition. The authors were:

Joakim Nilsson Knowledge Graph Lead I&D, Capgemini Sweden

Magnus Carlsson CTO I&D Capgemini Sweden

Tomaz Bratanic Senior GenAI Developer at Neo4j.

Link to original paper: https://www.capgemini.com/insights/research-library/data-powered-innovation-review-wave-8/?utm_source=linkedin_insightsdata&utm_medium=social&utm_content=insightsdata_grouporganic_video_report_none&utm_campaign=AI_Analytics_dpir_wave8


Written by Joakim Nilsson

Joakim is the Knowledge Graph Lead for Capgemini Sweden and has extensive experience in Knowledge Graph projects both in Sweden and abroad.