Explanation Ontology: A General-Purpose, Semantic Representation for Supporting User-Centered Explanations

Tracking #: 3133-4347

Authors: 
Shruthi Chari
Oshani Seneviratne
Mohamed Ghalwash
Sola Shirai
Daniel M. Gruen
Pablo Meyer
Prithwish Chakraborty
Deborah L. McGuinness

Responsible editor: 
Guest Editors Ontologies in XAI

Submission type: 
Full Paper
Abstract: 
In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable to expert and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO), a general-purpose representation, to help system designers, our intended users of the EO, connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important for modeling a larger set of fifteen literature-backed explanation types that are supported within the EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO's capabilities and provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them with exemplar queries to explore content in the EO-represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and to support explanations that can be composed from their systems' outputs, which may come from a mix of machine learning, logical, and explainer models and from the different types of data and knowledge available to their systems.
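
As an illustration of the exemplar queries mentioned above, the following sketch shows how a system designer might query the released ontology programmatically for the explanation types it models. This is a minimal, assumption-laden sketch: the namespace IRI, the eo:Explanation class name, and the availability of a parseable RDF serialization at the PURL are assumptions rather than details confirmed here; the actual terms and exemplar queries are documented on the resource website.

# Minimal sketch (assumptions noted above): list the explanation types in the EO
# as subclasses of an assumed top-level eo:Explanation class.
from rdflib import Graph

g = Graph()
# Assumes the PURL resolves to an RDF/XML serialization of the ontology.
g.parse("https://purl.org/heals/eo", format="xml")

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX eo:   <https://purl.org/heals/eo#>
SELECT ?type ?label WHERE {
  ?type rdfs:subClassOf+ eo:Explanation .
  OPTIONAL { ?type rdfs:label ?label }
}
"""
for row in g.query(query):
    print(row.type, row.label)
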
Tags: 
Reviewed

Decision/Status: 
Minor Revision

Solicited Reviews:
Review #1
Anonymous submitted on 07/Jul/2022
Suggestion:
Minor Revision
Review Comment:

Overview:

This research is presented as a continuation of previous work toward developing an explanation ontology (EO) model. The manuscript is divided into eight sections, including the introduction, state of the art, approach, use cases, evaluation, and conclusions.

Section three is dedicated to the definition of the EO model, the ontology's composition, and explanation types. Section four describes use cases for applying the EO model in different areas, including food recommendations, proactive retention, health survey analysis, and medical expenditure.

Further sections are dedicated to guidance for system designers and to the evaluation of the approaches presented.

Data provided:

I highly appreciated that the authors included several links to the implemented artifacts, including a detailed GitHub repository.

Recommendations to authors:

- Section 3 should include additional details on the "participatory evaluation studies" and cooperation with physicians. How did this cooperation lead to the conclusions mentioned?

- It was not easy to understand how the use cases were implemented or how they relate to the figures presented for each of them. I suggest the authors include precise examples of the objects, properties, and data.

- Professional proofreading is required. I found several problems that require additional review.

Review #2
By Ilaria Tiddi submitted on 29/Jul/2022
Suggestion:
Minor Revision
Review Comment:

The paper presents an ontology (Explanation Ontology or EO) aimed at representing user-centred explanations in domain applications. This is mostly meant to help system designers in preparing their systems, i.e. modelling the explanations that can be obtained from the method(s) they develop. The EO model is described in terms of classes and properties, which are well linked to existing vocabularies, and further showcased in five real-world scenarios where AI systems created through the IBM AIX-360 explanation toolkit provide user-centred explanations. The evaluation of the ontology is achieved through competency questions aimed at assessing the ontology's quality from both the task and application perspectives.

In terms of the review criteria:
- originality: the work is original in that little work is devoted to structuring explanations and the processes that generate them. While it is indeed an extension of an already published ontology, the paper presents a new version of it which includes new classes, properties, and vocabulary alignments. This is mostly driven by the use cases that need(ed) to be represented with the EO.
- significance of the results: the paper is mostly an engineering exercise, so it makes little sense to talk about significance of the results. The use cases and the competency questions, however, show the validity of the ontology, particularly the ability to represent multiple explanation types (and consequently designers' needs) across domains. More on this in one of my points below.
- quality of writing: the paper is well-written and well-structured.
- reproducibility: the ontology is open source and publicly available on GitHub. The EO uses a PURL domain, which ensures longevity. It is well-documented, and I was able to check the data without issue. Replication of the experiments should not be a problem.

I recommend acceptance, but I would like a few points to be further addressed. Apologies for the random order:
- The work is focused on the concept of user-centred explanations, but a bit more space should be dedicated to characterising them. One could argue that all explanations are somehow user-centred, as explanations are part of a communication process involving some sort of user (human or machine). The paper seems to suggest (or, I wonder if) that user-centred explanations are those that come as outputs of modern AI methods, but in the real world explanations might also come as part of other (non-AI) communication systems. Also, AI methods are mentioned, but all scenarios are ultimately reduced to explaining recommendations. So is the ontology only meant for systems providing recommendations? I would suggest the authors take a bit of time to discuss what they mean by user-centred explanations, whether these are focused around AI methods and which ones (and why not the rest), and which explanations might not be user-centred. This also relates to Section 3.2, where explanation types are discussed: where does the specification/granularity of explanation types end? How do you establish it? (Again, perhaps the motivation is the answer to this?)
- The point above could also be addressed through a motivation section or paragraph, i.e. the concrete reason that brought the authors to tackle the given research problem. The introduction of Section 3 seems to suggest something in this direction, but it could be expanded and should definitely come much earlier in the text (when I read the related work, I am already wondering what user-centred explanations are). Overall, I would like to understand whether the use cases are the motivation that made the work happen, or the other way round.
- A few points regarding the model. First, while the classes and properties are well described, as are the instances in the use cases, I would still be interested in having some concrete numbers for the ontology wrapped up in a table. How many classes and properties? How many instances per class? What are the most useful classes and properties in your scenarios? This is information that could be useful to system designers, as well as to yourselves for ontology maintenance and evolution (see the sketch after this list for one way such an inventory could be gathered programmatically). Regarding Figure 2, it would be good to take some time and talk the reader through it (currently it is only quickly discussed). This would put the reader at ease with the classes you are manipulating (for instance, I am still confused about what an "object record" is). Also, it seems to me that only a few of the classes in Figure 2 (and 1?) are used in Section 4, and some classes seem rather ad hoc per scenario (e.g. ingredient, Recipe, column, cell, or classes I found in the Widoco documentation such as Income or Clinical Pearls). Are these meant to be part of the ontology? Should they not be instances, given that they are domain-specific? In addition, it would be good to see more classes in common among Figures 3-8 (I could only see some). Finally, the EO is defined as upper-level in Section 7. From Wikipedia: an upper ontology [...] is an ontology [...] which consists of very general terms (such as "object", "property", "relation") that are common across all domains. Do all the classes of the EO fit this definition?
- The authors claim initially that a broad set of competency questions is presented, but then only a few (6 + 7) are presented in the evaluations of Section 6. I would perhaps rephrase the initial statement. In general, the competency questions should be organised in terms of complexity, i.e. from easy to complex things to answer. I would also add what the ontology is *not* able to answer. The authors also mention that the first 6 questions were the ones they managed to come up with. Did they try checking these with their intended users, i.e. the system designers? Could they not help identify simpler and more complex queries, as well as the CQs that the ontology is not able to cover? Was the ontology evaluated at all with the intended users? If yes, this should be better specified (how many, etc.).
- How do the authors plan for the system designers to use the ontology? Is it currently only accessible through Protege, or is there an annotation tool/webapp they can use?
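
One minimal, hypothetical way to address the two questions above, i.e. obtaining concrete counts of classes and properties and accessing the ontology outside Protege, is sketched below; it assumes the PURL serves a parseable RDF serialization and uses rdflib purely for illustration, not because the paper prescribes it.

# Hypothetical sketch: count the EO's classes and properties programmatically.
# Assumes the PURL returns RDF/XML; a local copy from the GitHub repository
# can be parsed instead if content negotiation does not provide one.
from rdflib import Graph
from rdflib.namespace import RDF, OWL

g = Graph()
g.parse("https://purl.org/heals/eo", format="xml")

n_classes = len(set(g.subjects(RDF.type, OWL.Class)))
n_obj_props = len(set(g.subjects(RDF.type, OWL.ObjectProperty)))
n_data_props = len(set(g.subjects(RDF.type, OWL.DatatypeProperty)))
print(f"classes: {n_classes}, object properties: {n_obj_props}, "
      f"datatype properties: {n_data_props}")
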

Minors:
- p. 3 l.22 : point > pointed
- subject matter experts
- p 3 l. 45 : don't > do not (there are several others to be changed)
- p4 l 26 : in explanation methods papers > in papers
- p6 l 24 : "we introduced classes if they were not a part of well-used ontologies" > should the *not* be removed?
- p 6 l47 : These include such things as as > These include things as contrastive
- p 9 l 18 : made and provides > made and provide
- p 12 l 42 : unlabelled > unlabeled
- In the figures, the arrowheads (empty/filled triangles) should be consistent between the legend and the graph
- p 16 l 40 : support system designers who are looking to support... > rephrase
- p 17 l 47 : In use cases where the details of the explanation consumers are present ... >> I am not sure I can syntactically parse this sentence
- p 19 l 25 : For the task-based abilities we aim to > For the task-based abilities, we

Review #3
Anonymous submitted on 14/Aug/2022
Suggestion:
Minor Revision
Review Comment:

The paper extends previous work on defining an open-source explanation ontology that allows modelling of a system's explanations, taking into account components of the system, the interface, and attributes of users. The ontology is designed so it can be used across domains, but it can also be instantiated and expanded to represent user-centered explanations in specific use cases, where much domain knowledge and detail is required. The authors provide the model for fifteen types of user-centered explanations from the literature.
The ontology can address a wide range of questions from (different types of) users and can provide multiple views to support human reasoning about AI-based systems' outputs.
The authors also describe five use cases in more detail in order to show a designer how to connect the ontology to a specific domain and set of tasks (they also provide the corresponding resources to reuse these use-case instantiations).

The proposal is original, and it extends existing work significantly (to the extent I checked).
The tool presented is very interesting and thorough; it can really be used as a design tool to guide the development of AI systems and ensure adequate levels of explainability. I think this is a very valuable resource.
The resources are well organized and appear to be complete for use and replication.

Some comments:

- The document is difficult to read; I think this is because it is intended for practitioners and designers who are trying to understand the ontology by trying it out, using this document as support and as a source of examples to replicate in their particular use cases.

- The proposed method for evaluating the ontology is not clear. From what I understand, it has to do with expressiveness, but that part of the document is either not self-contained or not formal enough to reach a conclusion about how good the ontology is. I guess a user study could be a way to evaluate this, though I understand it is out of the scope of this work and not that easy to do. The authors should expand this section and better explain the evaluation methodology, metrics, and conclusions.

- I think there are some inconsistencies in the snippets; for instance, in Fig. 3, the green arrow should not be filled in the graph, right? In the legend some arrowheads are filled, but in the graph they are not. This happens across the document.