InterpretME: A Tool for Interpretations of Machine Learning Models Over Knowledge Graphs

Tracking #: 3511-4725

Authors: 
Yashrajsinh Chudasama
Disha Purohit
Philipp Rohde
Julian Gercke
Maria-Esther Vidal

Responsible editor: 
Guest Editors Tools Systems 2022

Submission type: 
Tool/System Report
Abstract: 
In recent years, knowledge graphs (KGs) have been considered pyramids of interconnected data enriched with semantics for complex decision-making. The potential of KGs and the demand for interpretability of machine learning (ML) models in diverse domains (e.g., healthcare) have gained more attention. The lack of model transparency negatively impacts the understanding and, in consequence, the interpretability of the predictions made by a model. Data-driven models should be empowered with the knowledge required to trace down their decisions and the transformations made to the input data to increase model transparency. In this paper, we propose InterpretME, a tool that, using KGs, provides fine-grained representations of trained ML models. An ML model description includes data- (e.g., features’ definition and SHACL validation) and model-based characteristics (e.g., relevant features and interpretations of prediction probabilities and model decisions). InterpretME allows for defining a model’s features over data collected in various formats, e.g., RDF KGs, CSV, and JSON. InterpretME relies on the SHACL schema to validate integrity constraints over the input data. InterpretME traces the steps of data collection, curation, integration, and prediction; it documents the collected metadata in the InterpretME KG. InterpretME is published on GitHub and Zenodo. The InterpretME framework includes a pipeline for enhancing the interpretability of ML models, the InterpretME KG, and an ontology to describe the main characteristics of trained ML models; a PyPI library of InterpretME is also provided. Additionally, live code and a video demonstrating InterpretME in several use cases are available.
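To illustrate the SHACL integrity-constraint validation step mentioned in the abstract, the following is a minimal sketch using the pyshacl and rdflib libraries; the file names and the RDFS inference setting are illustrative assumptions and do not reflect InterpretME's actual implementation.

# Hypothetical sketch: validating an input RDF graph against SHACL shapes.
from rdflib import Graph
from pyshacl import validate

# Load the input data (e.g., an RDF KG) and the SHACL shapes graph.
# Both file names are placeholders for illustration only.
data_graph = Graph().parse("input_data.ttl", format="turtle")
shapes_graph = Graph().parse("constraints.ttl", format="turtle")

# Run SHACL validation; conforms is False if any integrity constraint is violated.
conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",
)
print(conforms)
print(report_text)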
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 25/Aug/2023
Suggestion:
Accept
Review Comment:

In this version, the authors have addressed the concerns and remarks mentioned previously. I think the paper is now much improved.

(1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided).

- More details have been added to describe the architecture of the InterpretME tool. The user study has also been described in detail, addressing the concerns I raised previously.

(2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

- In general, the clarity of the paper has significantly improved. However, the font of the node and link labels in Fig. 8 is too small and should be increased a bit for visibility/readability purposes.

Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data,

- The GitHub page is well organized and it contains a detailed README file.

(B) whether the provided resources appear to be complete for replication of experiments, and if not, why,

- Yes, the provided resources are complete.

(C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and

- GitHub and Zenodo are used.

(D) whether the provided data artifacts are complete.

- Yes, they are complete.

Review #2
Anonymous submitted on 07/Sep/2023
Suggestion:
Accept
Review Comment:

I would like to thank the authors for addressing my comments, particularly by enriching the paper with additional figures. I suggest accepting the paper.

Review #3
By Lise Stork submitted on 11/Sep/2023
Suggestion:
Accept
Review Comment:

I have read the revisions, and believe my comments were addressed. I recommend the submission for publication.

However, I would still urge the authors to organise the results of the user study a bit more clearly, specifically the numbering of the questions. Four questions are enumerated in the text of section 4.5, followed by 'the 12 questions..'. Later, separate questions are discussed 'In the first question..'. Maybe use (i) for enumeration in the beginning, and indicate how many questions relate to the topic? I do think this is an easy fix, and believe another round of minor revisions is not necessary.