Review Comment:
This paper introduces OOPS!, a web-based tool to evaluate OWL ontologies according to a catalogue of "bad practices" (i.e., pitfalls) that may occur when developing ontologies.
As the authors explicitly say, this is not the first paper dedicated to OOPS!; however, unlike their previous works, it focuses on overviewing the whole system from technological and practical perspectives, which is entirely appropriate for 'Tools and Systems Report' submissions.
# Summary
I firmly believe that this paper is a valid contribution to the SWJ and that it should be published in this venue. However, I think that several issues should be addressed before it is ready for publication, in particular those referring to criteria specific to this kind of submission, e.g., "clarity", "impact of the tool" and "limitations".
Please find my suggestions and concerns below.
# "Ontologies" and "OWL ontologies"
Reading the abstract (and other parts of the paper), it seems that OOPS! is able to analyse and evaluate any ontology, regardless of the particular format used for developing such an ontology and of its actual expressiveness. For instance, in the abstract the authors refer to "ontology implementation languages", in the introduction we have "an online service intended to help ontology developers", and in Section 2.1 "takes as input the ontology".
While I totally agree that the 40 pitfalls of the catalogue can be applicable to any kind of DL ontology, OOPS! works explicitly (and, as far as I have understood, only) with OWL ontologies. This point should be clearly stated and clarified within the paper.
However, from the description of the "important" pitfalls provided in Section 2, it seems that some pitfalls refer explicitly to OWL; e.g., P25 talks about a constructor that the particular ontology language (i.e., OWL) provides. Thus, are some pitfalls actually OWL-specific only?
# Previous works on OOPS!
The part about the authors' previous papers in the introduction (i.e., "While our previous papers [...] point of view of the system") should be introduced before the paragraph "The rest of the paper [...]" and should be expanded a bit, so that readers can clearly understand what was analysed in the previous papers and what the actual new contribution of this article is. In particular, I think "research and analysis work" is a bit vague here.
# Weak points of existing tools
The weak points of existing tools presented in the introduction do not seem so "weak" to me. In particular, why should having a tool developed as a plugin of an existing system be a negative point? The authors refer to the fact that there could be a misalignment between the latest version of the main tool and the related plugin. However, I think this kind of scenario can also happen in the context of web-based tools. What if I have a new (or old) version of a browser and OOPS! does not behave correctly on it? Is that not the same issue?
Issue (b), the one about the installation of a Wiki technology, is not actually an issue of the Wiki itself, but rather a problem of any kind of software that needs to be installed.
Finally, in issue (c) the authors say that the pitfalls detected by existing ontology evaluation tools are comparatively limited. Compared with what? Consider that the authors introduce OOPS! for the very first time only in the following paragraph; maybe they should consider rewording this part.
Summarising: the authors should extend the text here to explain and better support their claims.
# Limitations of OOPS!
In Section 2.1 (and in other parts of the paper) there is an explicit reference to the fact that OOPS! handles only 32 pitfalls out of 40. This is a limitation of the current implementation of the system. I would suggest collecting it together with the other existing limitations (e.g., the fact that the tool currently accepts only RDF/XML sources) in an appropriate section, and then discussing possible ways to address them in future implementations of the system, even suggesting possible (and specific) approaches for dealing with them.
In particular, for the recognition of the 8 remaining pitfalls, it is not enough to say that NLP tools will be used; I would like to understand which particular NLP tools the authors think would be useful to address that limitation.
# About pitfalls
I understand that this paper is not about pitfalls and that it focuses on OOPS!. However, the 4th footnote states that some of the pitfalls recognised by the system may not be real errors, depending on the particular requirements the ontology engineer wants/is obliged to follow. I was wondering whether some of these pitfalls are not context-dependent and, thus, are always considered errors, whatever scenario one considers.
# About the input via REST
In Section 4 the authors present the fields of a specific XML document to send (via POST) to the service for analysing the related ontology. An example of such an XML document should be added within the text.
In addition, for the field "OntologyContent" the authors say that it is used to specify the source code of the ontology to analyse in RDF. Does it mean that (even in the future) it won't be possible to use the Manchester Syntax or OWL/XML (which are not RDF-based syntaxes but still allow one to define OWL ontologies)?
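Purely to illustrate what I mean, here is a minimal sketch of the kind of request I would expect the paper to show; please note that the endpoint URL, the wrapper element and all field names other than "OntologyContent" are placeholders of mine, since the paper does not spell them out:

```python
# Hypothetical sketch of posting an ontology to the OOPS! REST service.
# Only the "OntologyContent" field name comes from the paper; the endpoint
# URL, the wrapper element and the other fields are illustrative placeholders.
import requests

OOPS_ENDPOINT = "http://example.org/oops/rest"  # placeholder, not the real URL

request_body = """<?xml version="1.0" encoding="UTF-8"?>
<OOPSRequest>
  <OntologyUrl></OntologyUrl>
  <OntologyContent><![CDATA[
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:owl="http://www.w3.org/2002/07/owl#">
      <owl:Class rdf:about="http://example.org/onto#Person"/>
    </rdf:RDF>
  ]]></OntologyContent>
  <Pitfalls></Pitfalls>
  <OutputFormat>RDF/XML</OutputFormat>
</OOPSRequest>"""

response = requests.post(
    OOPS_ENDPOINT,
    data=request_body.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
)
print(response.status_code)
print(response.text)
```

Something along these lines, adapted to the actual field names, would make the expected payload (and the RDF-only restriction on "OntologyContent") immediately clear to readers.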
# More evidence of use
In Section 4.1 and Section 6 some evidence is provided that should demonstrate the actual impact of the system within the community, e.g., the fact that OOPS! has been used in existing projects, and the fact that it has been used 2000 times from 50 different countries. I would like to see an (extensive) improvement of Section 4 in order to include, at least:
* a detailed indication (for instance, month by month) of when and how much the service has been used, since when, and which country used it the most (e.g., by looking at the logs on the server);
* how many different ontologies have been checked; in addition, the authors could consider revealing the top ten;
* a detailed overview of other existing (and published) works that used OOPS! for validating ontologies, such as:
http://link.springer.com/article/10.1007/s13740-014-0041-9
http://www.semantic-web-journal.net/system/files/swj642.pdf
http://link.springer.com/chapter/10.1007/978-3-319-04114-8_5
http://link.springer.com/chapter/10.1007/978-3-319-13413-0_6
Note that I just looked for them on Google Scholar in 5 minutes, but a better job could be done with a more thorough analysis of the existing literature.
# Comparison table
A table comparing all the other ontology evaluation tools with OOPS! according to specific functional points should be added. As an inspiration for such an analysis, you can consider the (similar) table in:
Peroni, S., Shotton, D., Vitali, F. (2013). Tools for the automatic generation of ontology documentation: a task-based evaluation. In International Journal on Semantic Web and Information Systems, 9 (1): 21-44. Hershey, Philadelphia, USA: IGI Global. DOI: 10.4018/jswis.2013010102
Preprint: http://speroni.web.cs.unibo.it/publications/peroni-2013-tools-automatic-...
# Interface issues
When I select "advanced evaluation" and then "Select Pitfalls for Evaluation", a list of pitfall ids is shown with no clear explanation of what they refer to. I see that a tooltip pops up when hovering the cursor over them, but to understand their actual meaning I have to hover over each id, one by one. I would suggest changing the approach here a bit: write the actual label explicitly (after the pitfall id), and put a more extensive description of the pitfall in the tooltip.
# Minor issues
- A general consideration: I actually think OOPS! is useful to any ontology developer, from newbies to experts. Anyone can make a mistake, after all.
- In the introduction, XD-Analyzer and Radon should be accompanied by appropriate links.
- In Section 2, the authors say that 32 out of 40 pitfalls are automatically handled by OOPS!, but there is no clue as to which 32 these are. Maybe a reference to Figure 1 (as the authors do at the end of Section 2) would be appropriate here.
- Figure 1 is introduced for the first time at the end of Section 2; I expected to find it on the following page.
- Is the limitation of accepting RDF/XML as the only possible input fixed in the latest version of the system?
- In Figure 1 (printed in black & white), the difference between the important pitfalls and the others is not clear.
# Typos
intended to help -> for helping
bibliographical references, when possible -> bibliographic references when possible
how critical the pitfalls are -> the criticality of the pitfalls
ññRDF/XML -> RDF/XML