The OWL Explanation Workbench: A toolkit for working with justifications for entailments in OWL ontologies

Tracking #: 992-2203

Matthew Horridge
Bijan Parsia
Uli Sattler

Responsible editor: 
Oscar Corcho

Submission type: 
Tool/System Report

Abstract:
In this article we present the Explanation Workbench, a library and tool for working with justification-based explanations of entailments in OWL ontologies. The workbench comprises a software library and a Protégé plugin. The library can be used in standalone OWL API-based applications that need to generate and consume justifications. The Protégé plugin, which is built on the library, can be used by end users of Protégé to explain entailments in their ontologies. Both the library and the Protégé plugin are open-source software and are freely available on GitHub.

Decision: Minor revision

Solicited Reviews:
Review #1
By Francois Scharffe submitted on 28/Feb/2015
Minor Revision
Review Comment:

This manuscript was submitted as 'Tools and Systems Report' and should be reviewed along the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

This paper presents the OWL Explanation Workbench: an API, a reference implementation, and a Protégé plugin distributed together with Protégé 5.
The tool is grounded in years of research on computing justifications, and uses techniques that make the computation efficient. The user interface in Protégé likewise uses techniques that improve the presentation of justifications. The result is a mature, usable tool that ontology engineers will certainly welcome (and already do).

The paper is well written, with a good background introduction and a detailed, illustrated presentation of the tool and user interface.

The tool is, however, presented as a finished product. It would be helpful to state its limitations and to give plans for future releases or improvements.

The historical presentation of research on justifications subjectively emphasizes two pieces of work; the amount of attention given to other (cited) works seems unbalanced. In particular, it would be good to compare with other tools for justification and debugging.

- Section 1: "A user is faced ... They need"

Review #2
By Marco Rospocher submitted on 12/Mar/2015
Major Revision
Review Comment:

The paper presents the OWL Explanation Workbench, a suite for working with justification-based explanations of entailments in OWL ontologies. It comprises a software library, usable in standalone OWL API-based applications, and a Protégé plugin. The suite is released as open source.

Having used the Explanation Workbench Protégé plugin several times since its introduction (2008), I can *really* confirm the usefulness and effectiveness of having explanations when authoring an ontology. I am not aware of any other tool offering such functionality.

Now, regarding the dimensions along which the tool and system reports should be evaluated for this journal:

1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided)

I haven't checked the code in detail (so I cannot comment on code aspects), but the tool performs well, and its usage (via the Protégé plugin) is quite intuitive. The tool is "right to the point": there are not many fancy features or innovative ideas in the presentation of the entailments, but I guess the task itself does not require them.
As a "regular customer" of the Explanation Workbench, I am confident in confirming the importance and usefulness of such a tool: tasks such as understanding the reasons for an inconsistency in the ontology you are developing are definitely simplified by the availability of the Explanation Workbench.
Concerning impact, the paper falls a little short of providing "convincing evidence". I know this is difficult information to give, but the number of registered Protégé users in the "uptake and usage" section does not necessarily imply anything about usage of this particular functionality: e.g., I am one user who has used it many times, while there may be thousands of users who download Protégé without ever clicking the "?" next to an axiom (e.g., people may use Protégé to build lightweight ontologies or simple taxonomies, without needing the functionality offered by the Explanation Workbench). Reporting more on other papers or big projects using the tool could be more appropriate (e.g., TAMBIS is cited at some point in the paper). Also, nothing is said about people downloading or forking the library from GitHub, or about usage of the library version of the workbench in applications (i.e., outside Protégé).

(2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

The paper is well written. Concerning the content, I think too much space is dedicated to the justification research stream (the first 4 of the paper's 8/9 pages): given that the paper should focus on a ("brief and pointed") description of the system, I think that Section 3 (The Rise of Prominence of Justifications) could safely be dropped entirely (or drastically reduced to one or two paragraphs at most), as it is not needed to understand the description of the tool. This would also shorten the reference section, which is unusually long (72 entries) for a system report paper.
The description of the Explanation Workbench (Section 4) is very effective. It may be worth complementing the current description with a functional description of the workbench and some more detail on how the tool actually works: that is, what happens "behind the curtains" when someone clicks the question mark in the Protégé interface (e.g., is the reasoner invoked on the specific axiom, the whole ontology, or all axioms involving the classes mentioned in the current axiom, and then...)?

I have some further observations that I would like the authors to address in order to improve the current version of the paper.
First, the authors claim a few times in the paper that "the library can be used in standalone OWL API based applications that require the ability to generate and consume justifications". Would it be possible to name, or make more explicit, a few examples of such applications? Or, even better, it would be good to mention concrete applications that use the Explanation Workbench (besides Protégé), and the tasks for which they use it.

Second, the Explanation Workbench (or at least an initial version of it) has been around for some years now: indeed, a short system description of the tool was already presented at the ISWC 2008 Posters and Demos Track [1], built on the research work presented in [2] (note: both of these references are very relevant to the work described here, but they are not included or mentioned in the submitted version of the paper).
I ask the authors to explain and clarify what the differences are between the Explanation Workbench described in the submitted paper and the version presented in [1], and also why a tool description of the Explanation Workbench is timely now, several years after its initial release (2008).

[1] Matthew Horridge, Bijan Parsia, Ulrike Sattler: Explanation of OWL Entailments in Protégé 4. International Semantic Web Conference (Posters & Demos), 2008.
[2] Matthew Horridge, Bijan Parsia, Ulrike Sattler: Laconic and Precise Justifications in OWL. International Semantic Web Conference, 2008, pp. 323-338.

Review #3
Anonymous submitted on 19/Mar/2015
Minor Revision
Review Comment:

This paper is a report on a system that enables users to obtain explanations of OWL entailments. The system, the OWL Explanation Workbench, is distributed with Protégé 5.
As a paper in the "reports for tools and systems" category, it is basically well written, but more description is needed to clarify the scientific value of the paper, as follows:
- The algorithms for computing justifications: Although the algorithms themselves are not part of the paper, it is necessary to explain what kind of algorithms are provided in the reference implementation (p. 5, left column). In Section 3, the authors introduce the history and variety of approaches to computing justifications, so readers may wonder how the computation of justifications is realized in the system. Even if the algorithm can be plugged in, the reference implementation is important, since most users are expected to use the system without customization. The authors can refer to other articles for the details of the algorithm, but should introduce its basic features and pros/cons.
- The limitations of the system: As a running system, there may be limits on how well it works depending on hardware, data size, and so on. It is probably not easy to describe these definitively, since the system works together with reasoners like FaCT++. But more explanation is needed of whether the system is at least adequate for practical ontologies, for example by showing use cases with statistical figures.
- Usage: Regarding usage, the authors report only the number of Protégé users, which is not very convincing. The reviewer understands that it is not easy to show how the system is used, since it is not a stand-alone system, but it would still be better to provide information on its usage, such as use cases, at least.