Review Comment:
The paper presents a graph-based representation of explanations of AI systems. The goal is to provide a convenient way to represent the content of an explanation. First, such a graph is extracted from the AI model, and then the graph feeds a Natural Language Generation tool, in this case TS4NLE.
The paper is well written in the sense that the explanations are clear. Unfortunately, it contains many typos and must be proofread before publication. I will end this review with a few of them (they are too numerous to be listed exhaustively). Some figures could be reduced in size.
At first sight, the bibliography seems complete, even if, in a field such as XAI, it is difficult to claim completeness. Nevertheless, ref. [1] cannot be cited as an example of XAI: it compares different software for playing games. Even though explainability has been a concern since the early days of expert systems, XAI (and the acronym itself) is more recent. However, by reading some of the papers in the bibliography and the papers that cite them, I discovered previous work with the same goal as this paper, in particular the papers of Ismail Baaj and his supervisors:
- Baaj, I. & Poli, J. (2019). Natural Language Generation of Explanations of Fuzzy Inference Decisions. FUZZ-IEEE 2019 International Conference on Fuzzy Systems, June 2019, New Orleans, USA
- Baaj, I., Poli, J. & Ouerdane, W. (2019). Some Insights Towards a Unified Semantic Representation of Explanation for eXplainable Artificial Intelligence (XAI). 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence @ INLG 2019 International Natural Language Generation Conference, October 2019, Tokyo, Japan
- Baaj, I., Poli, J., Ouerdane, W. & Maudet, N. (2021). Representation of Explanations of Possibilistic Inference Decisions. ECSQARU 2021 European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, September 2021, Prague, Czechia.
They also argue for a representation of explanations, but they base their work on conceptual graphs, and, as a perspective, they state that they will use domain knowledge (which is comparable to the two sources of knowledge in this work). They also provide adaptations of conceptual graphs for possibilistic inference rules. The work of Baaj et al. also shows that obtaining a good explanation from a symbolic AI is not as easy as the authors seem to claim in this article. This diminishes the novelty of this paper.
I also disagree with the authors about the tasks attributed to connectionist approaches (multiclass, multilabel, or regression): these tasks are not specific to connectionism and also apply to symbolic AI.
A representation is defined by a syntax and a semantics. In this paper, only the syntax (the graph) is given; there is no clue about the semantics.
My concern also applies to the explanations produced for black-box models: by the very definition of an explanation, it is important to give some clues about the mechanism that leads to the decision. An explanation cannot be merely a list of inputs or predicates; it is also the way they are combined. Moreover, there are different types of explanations (counterfactual, etc.), and the paper does not say how to generate the corresponding graphs. Is it possible to represent disjunctions? Negation?
Finally, the paper gathers good ideas but lacks formalism. It does not solve any of the issues that arise from the explanation of decisions.
Examples of typos:
- Abstract: research is[has] … grown
- P4, l. 14: developes
- P5, l. 46: and integrated
- P8, l.9: to tailoring
- P8, l. 24-26 “she”/”her”
- P8, l.28 cliniciancontaining
- P8, l.40: udnerlying
- …