Towards Explainable Knowledge Graph Embeddings by Respecting Logical Commitments

Tracking #: 3051-4265

This paper is currently under review
Authors: 
Mena Leemhuis
Özgür Özçep
Diedrich Wolter

Responsible editor: 
Guest Editors Ontologies in XAI

Submission type: 
Full Paper
Abstract: 
Knowledge graph embeddings (KGEs) can be seen as an opportunity to integrate machine learning (ML) with knowledge representation and reasoning. In KGEs, concepts and relations are represented by geometric structures that are induced by ML. This explicit representation of concepts and relations enables reasoning, which can augment ML. Such an additional symbolic layer linked to ML models is widely advocated as fostering explainability. However, symbolic reasoning and ML need to be aligned beyond the level of concept symbols in order to obtain explanations of what was actually learned. We characterize explainability as the alignment of reasoning in two agents, which calls for a rigorous understanding of reasoning grounded in KGEs. The desired alignment can be achieved by investigating the logical commitments made in KGE approaches, that is, by identifying models of logics that correspond to ML models. Only once the logical commitments of KGEs are aligned with common modes of reasoning can explanations for learned models be generated that are both effective and semantically congruent with what has been learned. We critically review existing approaches to KGEs and then analyze a cone-based model capable of capturing full negation, a property common in symbolic reasoning but not yet captured by current KGE approaches. To this end, we propose orthologics as a basis for characterizing cone-based models.
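To make the final claim concrete, a minimal sketch of how convex cones can carry a full negation: the polar-cone construction below is a standard one from convex geometry and is given purely for illustration; it is an assumption here, not necessarily the definition used in the paper. If a concept is interpreted as a closed convex cone $C \subseteq \mathbb{R}^{n}$, its negation can be modelled by the polar cone

\[
C^{\circ} = \{\, x \in \mathbb{R}^{n} \mid \langle x, y \rangle \le 0 \ \text{for all } y \in C \,\}.
\]

Polarity is antitone ($C \subseteq D$ implies $D^{\circ} \subseteq C^{\circ}$) and involutive on closed convex cones ($C^{\circ\circ} = C$), so it behaves as an orthocomplementation; since the lattice of closed convex cones need not be distributive, the associated logic is an orthologic rather than classical Boolean logic.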
Full PDF Version: 
Tags: 
Under Review