Towards Explainable Automated Knowledge Engineering with Human-in-the-loop

Tracking #: 3814-5028

Authors: 
Bohui Zhang
Albert Meroño-Peñuela
Elena Simperl

Responsible editor: 
Guest Editors KG Construction 2024

Submission type: 
Full Paper
Abstract: 
Knowledge graphs are important in human-centered AI as they provide large labeled machine learning datasets, enhance retrieval-augmented generation, and generate explanations. However, knowledge graph construction has evolved into a complex, semi-automatic process that increasingly relies on black-box deep learning models and heterogeneous data sources to scale. The knowledge graph lifecycle is not transparent, accountability is limited, and there are no accounts of, or indeed methods to determine, how fair a knowledge graph is in downstream applications. Knowledge graphs are thus at odds with AI regulation, for instance, the EU's AI Act, and with ongoing efforts elsewhere in AI to audit and debias data and algorithms. This paper reports on work towards designing explainable (XAI) knowledge-graph construction pipelines with humans in the loop and discusses research topics in this area. Our work is based on a systematic literature review, in which we study tasks in knowledge graph construction that are often automated, as well as common methods to explain how they work and their outcomes, and an interview study with 13 people from the knowledge engineering community. To analyze the related literature, we introduce use cases, their related goals for XAI methods in knowledge graph construction, and the gaps in each use case. To understand the role of explainable models in practical scenarios and reveal the requirements for improving current XAI methods, we designed interview questions covering broad transparency and explainability topics, along with discussion sessions based on examples from the literature review. Drawing on practical knowledge engineering experience, we collect requirements for designing XAI methods, propose design blueprints, and outline directions for future research: (i) tasks in knowledge graph construction where manual input remains essential and where AI assistance could be beneficial; (ii) integrating XAI methods into established knowledge engineering practices to improve stakeholder experience; (iii) the need to evaluate how effective explanations genuinely are in making human-machine collaboration in knowledge graph construction more trustworthy; (iv) adapting explanations for multiple use cases; and (v) verifying and applying the XAI design blueprint in practical settings.
Full PDF Version: 
Tags: 
Reviewed

Decision/Status: 
Minor Revision

Solicited Reviews:
Review #1
By Sitt Min Oo submitted on 11/Mar/2025
Suggestion:
Accept
Review Comment:

Thank you very much to the authors for addressing most of the comments and clarifying the misunderstandings that I had (especially w.r.t data provenance and regulations) while reading this paper!
I have raised the score to an "accept".

Review #2
By Raul Águila Escobar submitted on 15/Mar/2025
Suggestion:
Accept
Review Comment:

The authors have incorporated most of the requested changes and have justified their consideration of the difference between KGC and OE. I believe the article meets the requirements of quality, originality, and methodological rigor for publication. Furthermore, the topic is sufficiently relevant to be published, considering the existing automation driven by generative AI.

Review #3
Anonymous submitted on 18/Apr/2025
Suggestion:
Accept
Review Comment:

This is a review of a revision of a previous submission, for which I had already recommended acceptance.
In their revised version, the authors have addressed the recommendations and concerns of all reviewers, including in particular mine.
I therefore naturally still recommend acceptance.

Review #4
By Irene Celino submitted on 29/Apr/2025
Suggestion:
Minor Revision
Review Comment:

I thank the authors for the careful revision and I confirm that this second version is in much better shape than the first submission. I also acknowledge that the provided GitHub link is a nice addition for future reference. With respect to the reviewing criteria:
- Originality: I confirm that the paper covers an interesting intersection between XAI and knowledge engineering
- Significance of results: the literature review, together with the interview study, lets the authors reveal interesting aspects and draw out unexplored research lines
- Quality of writing: as noted, the paper has also improved on this aspect with respect to the original submission

I would still recommend a few improvements before final acceptance:
- Figure 1: it seems to me that there is a missing arrow between stage B (or its resulting KG) and stage C, which is described in the text
- Section 3.1.2: I still find the 4 use cases a bit questionable; the authors added text to explain how they came up with those 4 use cases, but it is not fully convincing, in that it still seems a (reasonable) post-hoc choice
- Table 3: it may be worth adding a column with the "sector" (academic vs. industry); also, the job role column doesn't add much, in that those titles were self-attributed, if I understood correctly
- Table 7: I really welcome the addition of this table; I would recommend putting it at the end of Section 4, exactly as a summary of the previous sub-sections
- Section 5 and Figure 6: while the explanatory text has improved a lot, I still find the picture a bit obscure in that (1) it is not fully clear how it should be read (what's the meaning of the arrows?) and (2) there is not a clear correspondence between the text and the figure
- Section 6: I would recommend adding a few paragraphs that summarise how the authors addressed the original research questions, as the reader is left wondering whether RQ1-RQ4 were fully answered or whether parts of them are left to future work