eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Neural-Symbolic Methods for Knowledge Graph Reasoning

No data is associated with this publication.
Abstract

Knowledge graph (KG) reasoning has been studied extensively due to its wide range of applications. It has been addressed by two lines of research: traditional symbolic reasoning and modern neural network-based techniques.

Symbolic reasoning has been the most studied approach to KG reasoning since the early days of the field. This approach represents expert knowledge as statements in a logic-based representation language and implements reasoning as logical inference, primarily addressing high-level reasoning and cognitive processes. For instance, given the logical rule \emph{$\text{SpeakLanguage}(x, y) \leftarrow \text{LiveIn}(x, z) \wedge \text{OfficialLanguage}(z, y)$} and the observed facts \emph{``LiveIn(Mina Miller, USA)''} and \emph{``OfficialLanguage(USA, English)''}, we can infer the new fact \emph{``SpeakLanguage(Mina Miller, English)''}. Symbolic reasoning approaches offer strong interpretability and generalizability thanks to the expressive power of logical rules. Nevertheless, they face limitations in scaling up to vast datasets, managing ambiguity, and capturing the correlations between entities and relations. To illustrate, consider the case where both \emph{``USA''} and \emph{``United States''} denote the same country. Symbolic reasoning approaches treat them as distinct entities and therefore fail to automatically transfer the knowledge in the existing triple \emph{``LiveIn(Mina Miller, USA)''} to deduce the new fact \emph{``LiveIn(Mina Miller, United States)''}.
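The rule application above can be sketched as a single forward-chaining step over a toy fact base. This is an illustrative sketch only, not an implementation from the thesis; the predicate and entity names are taken from the example.

```python
# Toy fact base: each fact is a (predicate, subject, object) triple.
facts = {
    ("LiveIn", "Mina Miller", "USA"),
    ("OfficialLanguage", "USA", "English"),
}

def apply_rule(facts):
    """One forward-chaining pass of:
    SpeakLanguage(x, y) <- LiveIn(x, z) AND OfficialLanguage(z, y)."""
    inferred = set()
    for pred1, x, z in facts:
        if pred1 != "LiveIn":
            continue
        for pred2, z2, y in facts:
            # The rule fires when the intermediate entity z matches.
            if pred2 == "OfficialLanguage" and z2 == z:
                inferred.add(("SpeakLanguage", x, y))
    return inferred

print(apply_rule(facts))  # {('SpeakLanguage', 'Mina Miller', 'English')}
```

Because the rule is purely syntactic, replacing ``USA'' with the string ``United States'' in either fact would prevent it from firing, which is exactly the brittleness the paragraph describes.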

Neural network-based techniques have recently emerged as the state of the art in KG reasoning, owing to their success across a wide range of applications. Unlike symbolic reasoning, these techniques predict unseen triples by exploiting the graph structure of KGs to capture the similarity between entities. They have demonstrated good scalability and a strong ability to capture correlations between entities and relations. However, they fall short in modeling higher-order dependencies among KG relations. For instance, while neural network-based techniques can recognize that \emph{``USA''} and \emph{``United States''} refer to the same country based on the similarity of their embeddings, they cannot infer that \emph{``Mary Stilwell''} and \emph{``Mina Miller''} speak \emph{``English''} without leveraging logical rules.
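The similarity intuition can be made concrete with cosine similarity over entity embeddings. The vectors below are invented for illustration; in practice they would come from a trained embedding model.

```python
import math

# Toy embedding table (vectors are made up for illustration only).
emb = {
    "USA":           [0.90, 0.10, 0.40],
    "United States": [0.88, 0.12, 0.41],
    "France":        [0.10, 0.95, 0.30],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Near-duplicate entities end up close in embedding space,
# unrelated entities noticeably farther apart.
print(cosine(emb["USA"], emb["United States"]))  # close to 1.0
print(cosine(emb["USA"], emb["France"]))         # much lower
```

This captures entity similarity well, but no amount of vector geometry encodes the multi-step rule \emph{LiveIn} $\wedge$ \emph{OfficialLanguage} $\rightarrow$ \emph{SpeakLanguage}, which is the higher-order dependency the paragraph refers to.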

While symbolic reasoning and neural network-based techniques are typically viewed as distinct approaches to KG reasoning, they have a natural potential to complement and enhance each other. Symbolic approaches offer significant interpretability and generalizability: logical rules provide insight into inferred results and generalize easily to unobserved objects. However, they struggle to handle large datasets and fail to capture correlations between entities and relations. Conversely, neural network-based techniques address these limitations by exploiting the intrinsic similarities among triples, but they lack interpretability and generalizability. By integrating both techniques into a unified framework, neural-symbolic reasoning provides a more efficient, generalizable, and interpretable way to perform KG reasoning. In this thesis, we provide a comprehensive overview of techniques for integrating symbolic logical rules and neural networks for KG reasoning, with a particular focus on two tasks, KG completion and logical rule learning, and propose our own solutions to address the limitations of existing works.
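One way the two paradigms can complement each other is to let embeddings canonicalize near-duplicate entities before a symbolic rule fires. The sketch below is a minimal illustration of this idea, not the method proposed in the thesis; the embeddings, threshold, and rule are all assumptions.

```python
import math

# Neural side: toy embeddings (invented values) for entity mentions.
emb = {
    "USA":           [0.90, 0.10, 0.40],
    "United States": [0.88, 0.12, 0.41],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def canonical(entity, threshold=0.99):
    """Map near-duplicate entity mentions to one canonical name."""
    if entity not in emb:
        return entity
    for other in emb:
        if other != entity and cosine(emb[entity], emb[other]) > threshold:
            return min(entity, other)  # deterministic representative
    return entity

# Note the two facts mention the same country under different names.
facts = {
    ("LiveIn", "Mina Miller", "USA"),
    ("OfficialLanguage", "United States", "English"),
}

# Neural step: canonicalize entity mentions.
norm = {(p, canonical(h), canonical(t)) for p, h, t in facts}

# Symbolic step: SpeakLanguage(x, y) <- LiveIn(x, z) AND OfficialLanguage(z, y).
inferred = {("SpeakLanguage", x, y)
            for p1, x, z in norm if p1 == "LiveIn"
            for p2, z2, y in norm if p2 == "OfficialLanguage" and z2 == z}

print(inferred)  # {('SpeakLanguage', 'Mina Miller', 'English')}
```

On the raw facts the rule never fires, because ``USA'' and ``United States'' are syntactically distinct; after the embedding-based canonicalization, the symbolic rule recovers the intended inference.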


This item is under embargo until March 22, 2026.