InterpretME: A Tool for Interpretations of Machine Learning Models Over Knowledge Graphs

Tracking #: 3404-4618

This paper is currently under review
Yashrajsinh Chudasama
Disha Purohit
Philipp Rohde
Julian Gercke
Maria-Esther Vidal

Responsible editor: 
Guest Editors Tools Systems 2022

Submission type: 
Tool/System Report
In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data, enriched with semantics, for complex decision-making. The potential of knowledge graphs, together with the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare), has gained increasing attention. A lack of model transparency negatively impacts the understanding and, in consequence, the interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace their decisions and the transformations applied to the input data, thereby increasing model transparency. In this paper, we propose InterpretME, a tool that provides fine-grained representations, in a knowledge graph, of the main characteristics of trained machine learning models. These include data-based characteristics (e.g., feature definitions and SHACL validation results) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows a model's features to be defined over KGs and relational data in various formats, including CSV and JSON; SHACL states the domain integrity constraints. InterpretME traces the steps of data collection, curation, integration, and prediction, and documents the collected metadata in the InterpretME KG. InterpretME is publicly available as a tool; it comprises a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology that describes the main characteristics of trained ML models.
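As an illustration of how SHACL can state such domain integrity constraints, the following is a minimal, hypothetical shape; the class and property names (e.g., ex:Patient, ex:age) are illustrative only and are not taken from InterpretME. It requires every instance of ex:Patient to carry exactly one integer-valued ex:age:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# Hypothetical constraint: every ex:Patient must have exactly one
# ex:age value typed as xsd:integer (names are illustrative only).
ex:PatientShape
    a sh:NodeShape ;
    sh:targetClass ex:Patient ;
    sh:property [
        sh:path ex:age ;
        sh:datatype xsd:integer ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL engine validating the input data against shapes of this kind reports which entities violate the constraints, and those validation results are among the data-based characteristics that can be documented alongside the model's predictions.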
Full PDF Version: 
Under Review