An Abduction-based Method for Explaining Non-entailments of Semantic Matching

Tracking #: 3837-5051

This paper is currently under review
Authors: 
Ivan Gocev
Georgios Meditskos
Nick Bassiliades

Responsible editor: 
Stefano Borgo

Submission type: 
Full Paper
Abstract: 
Explainable Artificial Intelligence (XAI) aims to provide explanations for decisions made by AI systems. Although this is perceived as inherently easier for knowledge-based systems than for black-box AI systems based on Machine Learning, research is still required to compute satisfactory explanations of reasoning results. In this paper, we focus on explaining non-entailments that arise from semantic matching in EL⊥ ontologies. When the result of semantic matching is an entailment, the well-established methods of justifications and proofs provide excellent results. When the result is a non-entailment, however, an alternative approach is needed. Inspired by abductive reasoning techniques, we present a method for computing subtree isomorphisms between graphical representations of EL⊥ concept descriptions, which are then used to construct solutions to abduction problems, i.e. explanations, for semantic matching non-entailments in EL⊥ ontologies. We improve on existing results by generalizing our approach to abduce complex concept expressions of arbitrary form, including role restrictions, rather than concept names alone, and by reducing the time needed to compute solutions to abduction problems in EL⊥ ontologies. We then illustrate our method with an example scenario, perform synthetic experiments to stress the method's capabilities, and run experiments on realistic ontologies to show its practical performance.
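
As a rough illustration of the kind of structure matching the abstract describes, the Python sketch below checks whether the description tree of one EL⊥ concept embeds into another. It is not the authors' algorithm; the tree representation, the embeds function, and the example concepts are hypothetical and only meant to show the basic building block behind locating where two concept descriptions fail to match.

from dataclasses import dataclass, field

@dataclass
class DescriptionTree:
    # concept names holding at this node
    names: set
    # outgoing edges: (role name, subtree) pairs, one per existential restriction
    children: list = field(default_factory=list)

def embeds(query, target):
    # True if every concept name and role-labelled edge of `query`
    # can be mapped onto `target` (a homomorphism between description trees)
    if not query.names.issubset(target.names):
        return False
    return all(
        any(role == t_role and embeds(q_child, t_child)
            for t_role, t_child in target.children)
        for role, q_child in query.children
    )

# C = A ⊓ ∃r.B   versus   D = A ⊓ ∃r.(B ⊓ B2)
C = DescriptionTree({"A"}, [("r", DescriptionTree({"B"}))])
D = DescriptionTree({"A"}, [("r", DescriptionTree({"B", "B2"}))])
print(embeds(C, D))  # True: C's tree maps into D's
print(embeds(D, C))  # False: B2 has no counterpart

Intuitively, the parts of a description tree that fail to embed point to what a hypothesis would have to supply, which is where an abduction step such as the one outlined in the abstract comes in.
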
Tags: 
Under Review