Review Comment:
The paper presents the OWL Explanation Workbench, a suite for working with justification-based explanations of entailments in OWL ontologies. It comprises a software library, usable in standalone OWL API-based applications, and a Protege plugin. The suite is released as open source.
Having used the Explanation Workbench Protege plugin several times since its introduction (2008), I can *really* confirm the usefulness and effectiveness of having explanations available when authoring an ontology. I'm not aware of any other tool offering such functionality.
Now, regarding the dimensions along which the tool and system reports should be evaluated for this journal:
(1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided).
I haven't checked the code in detail (so I cannot comment on code aspects), but the tool performs well, and its usage (via the Protege plugin) is quite intuitive. The tool is "right to the point": there are no fancy features or innovative ideas in the presentation of the entailments, but I suppose the task itself does not require them.
As a "regular customer" of the Explanation Workbench, I can confidently confirm the importance and usefulness of such a tool: tasks such as understanding the reasons for an inconsistency in the ontology you are developing are definitely simplified by the availability of the Explanation Workbench.
Concerning the impact, the paper falls a little short of providing "convincing evidence". I know this is difficult information to give, but the number of registered Protege users in the "uptake and usage" section does not necessarily say anything about the usage of this particular functionality: I'm one user who has used it many times, while there may be thousands of users who download Protege without ever clicking the "?" next to an axiom (e.g., people may use Protege to build lightweight ontologies or simple taxonomies, without needing the functionality offered by the Explanation Workbench). Reporting more on other papers or large projects using the tool might be more appropriate (e.g., TAMBIS is cited at some point in the paper). Also, nothing is said about people downloading or forking the library version from GitHub, or about the usage of the library version of the workbench in applications (i.e., outside Protege).
(2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.
The paper is well written. Concerning the content, I think too much space is dedicated to the justification research stream (the first 4 of the paper's 8/9 pages): given that the paper should focus on a ("brief and pointed") description of the system, I think that Section 3 (The Rise of Prominence of Justifications) could safely be dropped entirely (or drastically reduced to one or two paragraphs at most), as it is not needed to understand the description of the tool. This would also shorten the reference section, which is abnormally long (72 entries) for a system report paper.
The description of the Explanation Workbench (Section 4) is very effective. It may be worth complementing the current description with a functional description of the workbench and some more details on how the tool actually works: that is, what happens "behind the curtains" when someone clicks the question mark in the Protege interface (e.g., is the reasoner invoked on the specific axiom, on the whole ontology, or on all axioms involving the classes mentioned in the current axiom, and then what?).
I have some further observations that I would like the authors to address in order to improve the current version of the paper.
First, the authors claim a few times in the paper that "the library can be used in standalone OWL API based applications that require the ability to generate and consume justifications". Would it be possible to name, or make more explicit, a few examples of such applications? Or, even better, it may be good to mention concrete applications using the Explanation Workbench (besides Protege), and the tasks for which they are using it.
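To make the request concrete, here is a rough sketch of what such a standalone use might look like. This is only my reading of the library's GitHub README: the package, class, and method names (ExplanationManager, ExplanationGeneratorFactory, getExplanations, etc.) are assumptions I have not verified against the released code, and the snippet requires the OWL API and a reasoner on the classpath.

```java
import java.util.Set;
// Assumed packages, based on the library's README (not verified):
import org.semanticweb.owl.explanation.api.Explanation;
import org.semanticweb.owl.explanation.api.ExplanationGenerator;
import org.semanticweb.owl.explanation.api.ExplanationGeneratorFactory;
import org.semanticweb.owl.explanation.api.ExplanationManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

public class JustificationSketch {
    // Sketch: compute up to two justifications for an entailed axiom,
    // using any OWL API reasoner factory (HermiT, Pellet, ...).
    static Set<Explanation<OWLAxiom>> explain(OWLOntology ontology,
                                              OWLReasonerFactory reasonerFactory,
                                              OWLAxiom entailment) {
        ExplanationGeneratorFactory<OWLAxiom> factory =
            ExplanationManager.createExplanationGeneratorFactory(reasonerFactory);
        ExplanationGenerator<OWLAxiom> generator =
            factory.createExplanationGenerator(ontology);
        // Each Explanation is a minimal subset of the ontology's axioms
        // that entails the given axiom (a justification).
        return generator.getExplanations(entailment, 2);
    }
}
```

If the authors could show (or point to) an end-to-end example like this in the paper, it would make the "standalone" claim much more tangible.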
Second, the Explanation Workbench (or at least one of its initial versions) has been around for some years now: indeed, a (short) system description of the tool was already presented at the ISWC 2008 Posters and Demos Track [1], built on the research work presented in [2] (note: both of these references are very relevant to the work described here, but they are not included or mentioned in the submitted version of the paper).
I ask the authors to explain and clarify the differences between the Explanation Workbench described in the submitted paper and the version presented in [1], and also why a tool description of the Explanation Workbench is timely now, several years after its initial release (2008).
[1] Matthew Horridge, Bijan Parsia, Ulrike Sattler: Explanation of OWL Entailments in Protege 4. International Semantic Web Conference (Posters & Demos) 2008.
http://wifo5-03.informatik.uni-mannheim.de/bizer/pub/iswc2008pd/iswc2008...
[2] Matthew Horridge, Bijan Parsia, Ulrike Sattler: Laconic and Precise Justifications in OWL. International Semantic Web Conference 2008: 323-338.
http://mowl-power.cs.man.ac.uk/2009/esslli-explanation/HoPaSa08.pdf