An Abduction-based Method for Explaining Non-entailments of Semantic Matching

Tracking #: 3742-4956

This paper is currently under review
Authors: Ivan Gocev, Georgios Meditskos, Nick Bassiliades

Responsible editor: Stefano Borgo

Submission type: Full Paper
Abstract: 
Explainable Artificial Intelligence (XAI) aims to provide explanations for decisions made by AI systems. Although this is perceived as inherently easier for knowledge-based systems than for black-box AI systems based on Machine Learning, computing satisfactory explanations of reasoning results still requires research. In this paper, we focus on explaining non-entailments that arise from semantic matching in EL⊥ ontologies. When the result of semantic matching is an entailment, the established methods of justifications and proofs provide excellent results. When the result is a non-entailment, however, an alternative approach is needed. Inspired by abductive reasoning techniques, we present a method for computing subtree isomorphisms between graphical representations of EL⊥ concept descriptions, which are then used to construct solutions to abduction problems, i.e., explanations, for semantic matching non-entailments in EL⊥ ontologies. We illustrate our method with an example scenario and discuss the results.
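
To make the tree-based idea concrete, the following is a minimal, hypothetical Python sketch, not the paper's actual algorithm: the names DescriptionTree and embeds are our own, and we use the well-known homomorphism characterization of EL subsumption as a simplified stand-in for the subtree-isomorphism computation the abstract describes.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DescriptionTree:
    # A node of an EL description tree: a set of concept names, plus
    # role-labelled edges to subtrees (one per existential restriction).
    concepts: frozenset
    children: List[Tuple[str, "DescriptionTree"]] = field(default_factory=list)

def embeds(d: "DescriptionTree", c: "DescriptionTree") -> bool:
    # True iff the tree of D maps homomorphically into the tree of C;
    # in EL, such an embedding witnesses the subsumption C ⊑ D.
    if not d.concepts <= c.concepts:
        return False
    # Every ∃r.D' in D must be matched by some ∃r.C' in C
    # whose subtree D' embeds into.
    return all(
        any(role_d == role_c and embeds(sub_d, sub_c)
            for role_c, sub_c in c.children)
        for role_d, sub_d in d.children
    )

# Example (hypothetical): C = Person ⊓ ∃hasChild.Person,
#                         D = Person ⊓ ∃hasChild.Doctor
C = DescriptionTree(frozenset({"Person"}),
                    [("hasChild", DescriptionTree(frozenset({"Person"})))])
D = DescriptionTree(frozenset({"Person"}),
                    [("hasChild", DescriptionTree(frozenset({"Doctor"})))])

print(embeds(D, C))  # False: C ⊑ D does not hold, so an explanation is needed

When embeds(D, C) fails, the unmatched labels in D's tree (here, Doctor at the hasChild successor) point at a candidate abductive hypothesis, e.g. Person ⊑ Doctor, whose addition to the ontology would restore the entailment; the paper's construction over subtree isomorphisms refines exactly this kind of information into explanations.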