Review Comment:
The paper presents NEOntometrics, a tool for assessing the quality of ontologies by calculating a set of metrics.
The metrics supported by the tool were collected through a survey of the state of the art (the authors introduce no new metrics).
Moreover, the tool allows its users to compute new metrics by combining the “elemental” ones.
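As an illustration of such a combination (a hypothetical example of mine, not one taken from the paper), a user could define something like OntoQA's relationship richness, i.e., relationships / (subclass axioms + relationships), purely as an arithmetic expression over the elemental counts.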
In the case of ontologies published on GitHub, the tool is also able to monitor the evolution of the metrics over time.
The software is provided as a microservice application that relies on standard technologies such as Redis, Django, Flutter, Dart, and the OWL API.
The source code is openly available on GitHub, and an online demo of the software is provided.
Finally, the authors present a case study in which the tool is used to observe the evolution of the Evidence and Conclusion Ontology (ECO).
This is a very interesting work; generally, the text is well-written and easy to read.
The tool is technically sound and relevant to the topic of the SI.
Most relevant work on ontology evaluation is cited and clearly positioned with respect to the paper's contribution.
The resource is novel and useful; it fills a gap, since most of the existing tools are no longer maintained.
Another strength of the paper is the discussion of a real-world case.
However, in my opinion, the paper needs to be reworked before it is ready for publication.
My major criticism concerns the research questions formulated in the paper and the way they are investigated.
The authors should clarify which research question the work investigates and present an in-depth study showing that the proposed tool answers it. The questions mentioned in the abstract remain unexplored, and the presented use case does not answer them. As for the first question (i.e., “How can we select the ontology that best fits the individual use case?”), the use case involves only one ontology. Of course, the tool has the potential to support such a selection, but this needs to be demonstrated. Concerning the second question (i.e., “How to compare two ontologies or assess their different versions?”), Section 4 reports the evolution of a subset of metrics over time, which is valuable, but there is no discussion of how the quality of the ontology changes as the presented metrics evolve. Again, I think the work has the potential to support such an analysis, but this has to be investigated further.
The other point relates to the tool's features. The feature that is mainly advertised is the tool's extensibility (i.e., the ability of a user to compute new metrics). However, I think this is only partially true. The tool does allow users to compute new metrics, but (as far as I understand) these metrics have to be derived from the existing ones, and there is no way for a user/developer to implement a piece of code and extend the platform without diving into the open-source implementation and finding an extension point. I would suggest that the authors provide a tutorial for semantic web practitioners who want to implement new metrics; a sketch of what such a tutorial could cover is given below.
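To make this suggestion concrete, such a tutorial could walk practitioners through a small plugin-style extension point, roughly along the lines of the following Python sketch. All names here (MetricRegistry, register, calculate) are my own invention and do not correspond to NEOntometrics' actual API:

    class MetricRegistry:
        """Hypothetical registry that the calculation engine could consult."""
        metrics = {}

        @classmethod
        def register(cls, metric_cls):
            cls.metrics[metric_cls.name] = metric_cls()
            return metric_cls


    @MetricRegistry.register
    class MaxHierarchyDepth:
        """A metric that needs its own traversal logic, i.e., one that
        cannot be expressed as an arithmetic combination of the
        elemental counts."""
        name = "maxHierarchyDepth"

        def calculate(self, subclass_pairs):
            # subclass_pairs: iterable of (child, parent) class IRIs;
            # the hierarchy is assumed to be acyclic in this sketch.
            children = {}
            for child, parent in subclass_pairs:
                children.setdefault(parent, []).append(child)

            def depth(node):
                return 1 + max((depth(c) for c in children.get(node, [])),
                               default=0)

            roots = set(children) - {c for kids in children.values()
                                     for c in kids}
            return max((depth(r) for r in roots), default=0)


    # Toy usage: C -> B -> A gives a hierarchy of depth 3.
    print(MetricRegistry.metrics["maxHierarchyDepth"].calculate(
        [("B", "A"), ("C", "B")]))  # prints 3

A metric like the hierarchy depth above requires custom traversal code, so it illustrates exactly the kind of extension that cannot be obtained by combining the existing elemental metrics.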
Finally, the online documentation can be improved. The README explains how to deploy the application with Docker, but no information is provided on how to access and use the application once it is deployed. I think the tool would benefit from a step-by-step tutorial.