NEOntometrics – A Public Endpoint For Calculating Ontology Metrics

Tracking #: 3205-4419

Authors: 
Achim Reiz
Kurt Sandkuhl

Responsible editor: 
Guest Editors Tools Systems 2022

Submission type: 
Tool/System Report

Abstract: 
Ontologies, the cornerstone of the semantic web, are available from various sources, come in many shapes and sizes, and differ widely in their attributes like expressivity, degree of interconnection, or the number of individuals. As sharing knowledge and meaning across human and computational actors emphasizes the reuse of existing ontologies, how can we select the ontology that best fits the individual use case? How to compare two ontologies or assess their different versions? Automatically calculated ontology metrics offer a starting point for a quality assessment. In the past years, a multitude of metrics have been proposed. However, metric implementations and validations for real-world data are scarce. For most of these proposed metrics, no software for their calculation is available (anymore). This work aims at solving this implementation gap. We present NEOntometrics, an open-source, flexible metric endpoint that offers (1.) an explorative help page that assists in understanding and selecting ontology metrics, (2.) a public metric calculation service that allows assessing ontologies from online resources, including git-based repositories for calculating evolutional data, with an (3.) adaptable architecture to adopt new metrics quickly. We further take a quick look at an existing ontology repository that outlines the potential of the software.

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Cogan Shimizu submitted on 09/Sep/2022
Suggestion:
Major Revision
Review Comment:

This manuscript presents NEOntometrics, a tool for computing a number of automatic ontology evaluation metrics.

Overall, this manuscript is easy to read, makes convincing arguments, and describes a tool that is (relatively) timely and useful.

My concerns are largely minor, but will require a significant addition to the paper (hence my recommendation for major revision). In order of significance:

* I would have expected more description of the capabilities of the tool: what metrics are already included in the base tool?

* I would have expected a deeper guide to extending this tool, perhaps walking through the particular implementation of one of the already supported metrics.

* I miss an in-depth explanation of the ontology describing these metrics.

* From what I can tell, NEOntometrics is never defined? This is in line with many other acronyms that -- following best practices -- should be expanded (Sections 2.1, 3.2).

* Figure 7 & 8 should also have legends

Review #2
By Luigi Asprino submitted on 29/Sep/2022
Suggestion:
Major Revision
Review Comment:

The paper presents NEOntometrics, a tool for assessing the quality of ontologies by calculating a set of metrics.
The metrics supported by the tool were collected by surveying the state of the art (the authors introduced no new metrics).
Moreover, the tool allows its users to compute new metrics by combining the “elemental” ones.
For ontologies published on GitHub, the tool is also able to monitor the evolution of the metrics over time.
The software is provided as a microservice application which relies on standard technologies like Redis, Django, Flutter, Dart, and the OWL API.
The source code is openly available on GitHub, as is an online demo of the software.
Finally, the authors present a case study in which the tool has been used for observing the evolution of the Evidence and Conclusion Ontology (ECO).
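
To make the “combining elemental metrics” idea concrete, a minimal sketch follows: a derived metric, here the OntoQA-style relationship richness, computed as an arithmetic combination of elemental counts. The counts and names are illustrative placeholders, not NEOntometrics' actual API.

    # Minimal sketch of a derived metric built from elemental counts.
    # The counts below are placeholders; the metric is the OntoQA-style
    # relationship richness: relations / (relations + subclass axioms).

    elemental = {
        "object_properties": 42,
        "subclass_axioms": 150,
    }

    def relationship_richness(counts):
        p = counts["object_properties"]
        sc = counts["subclass_axioms"]
        return p / (p + sc) if (p + sc) else 0.0

    print(f"Relationship richness: {relationship_richness(elemental):.2f}")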

This is a very interesting work; generally, the text is well-written and easy to read.
This tool is technologically sound and relevant to the topic of the SI.
Most relevant work in ontology evaluation is cited and clearly positioned with respect to the paper's contribution.
The resource is novel and useful; it fills a gap, since most of the existing tools are no longer maintained.
Another strength of the paper is that it provides a discussion of a real case.

However, in my opinion, the paper needs rework to be worthy of publication.

My major criticism comes from the research questions formulated in the paper and how these questions are investigated.
The authors have to clarify which research question is investigated by the work and present an in-depth study proving that the proposed tool answers that question. The questions mentioned in the abstract remain unexplored, and the presented use case doesn’t answer them. As for the first question (i.e. “How can we select the ontology that best fits the individual use case?”), the use case involves only one ontology. Of course, the tool has the potential for doing that, but this needs to be proved. Concerning the second question (i.e. “How to compare two ontologies or assess their different versions?”), Section 4 reports the evolution of a subset of metrics over time, which is cool, but there is no discussion of how the ontology’s quality changes as the presented metrics evolve. Again, I think the work has the potential for supporting that, but this has to be further investigated.

The other point is related to the tool’s features. The feature that is mainly advertised is the extensibility of the tool (i.e. the ability of a user to compute new metrics). However, I think that this is only partially true. It is true that the tool allows us to compute new metrics, but (as far as I understand) the metrics have to be derived from the existing ones, and there is no way for a user/developer to implement a piece of code and extend the platform (without diving into the open-source implementation and finding an extension point). I would suggest the authors provide a tutorial for semantic web practitioners who want to implement new metrics.
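
A tutorial along these lines could start from a small registry-style extension point. The sketch below is purely illustrative of what such an extension point might look like; it is not NEOntometrics' actual code, and all names in it are hypothetical.

    # Hypothetical extension point: a registry to which practitioners
    # add new metric functions. All names here are illustrative only.

    METRIC_REGISTRY = {}

    def register_metric(name):
        """Decorator that registers a metric function under a name."""
        def wrapper(fn):
            METRIC_REGISTRY[name] = fn
            return fn
        return wrapper

    @register_metric("attribute_richness")
    def attribute_richness(counts):
        # OntoQA-style attribute richness: data properties per class.
        return counts["data_properties"] / counts["classes"]

    print(METRIC_REGISTRY["attribute_richness"](
        {"data_properties": 30, "classes": 120}))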

Finally, the online documentation can be improved. The README gives information on how to deploy the application with Docker, but no information is provided on how to access and use the application. I think the tool would benefit from a step-by-step tutorial.

Review #3
By Enrico Daga submitted on 20/Oct/2022
Suggestion:
Major Revision
Review Comment:

The article describes a system for automatically computing ontology metrics to support ontology engineering in quality assessment and analysis.

The article lacks a clear narrative from the user's standpoint. The first part mainly covers literature on ontology quality, but the scenario described only explores some aspects related to ontology evolution (how certain metrics have changed over time). A section is needed that describes which user tasks are supported and/or clarifies the role of metrics in relation to those tasks.
The authors motivate (2.3) the work by referring to a gap in a common understanding of how to evaluate ontologies, but the scenario described does not seem to answer that. I think there must be a case study (two?) precisely on quality and a description of how the tool helps address specific quality issues.

Why are the results queryable with GraphQL and not with SPARQL?
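
For illustration, querying a GraphQL endpoint is a single HTTP POST. In the sketch below, the /graphql path and the field names are assumptions made for illustration; the actual NEOntometrics schema may differ.

    # Sketch of querying a GraphQL metrics endpoint from Python.
    # Endpoint path and field names are assumed, not documented facts.

    import requests

    QUERY = """
    {
      metrics(ontologyUrl: "https://example.org/my-ontology.owl") {
        classes
        objectProperties
        individuals
      }
    }
    """

    response = requests.post(
        "http://neontometrics.informatik.uni-rostock.de/graphql",  # assumed path
        json={"query": QUERY},
        timeout=60,
    )
    response.raise_for_status()
    print(response.json())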

It would help to have an overview of all the metrics currently implemented, with references to the methods/papers from which they come.

The description of the tool starts by referring to repositories, a concept that was not introduced before in the paper. By reverse-engineering the scenario, which refers to commits, I derive that the input to the tool is a git repository, but this needs to be described. In addition, a description of the tool at http://neontometrics.informatik.uni-rostock.de/#/ is necessary. Regarding the tool, I suggest moving the input field to the top of the Calculation Engine tab and making the "Already calculated" list more prominent, as that is the actual showcase of the tool; it is currently quite hidden.
However, I could not reach a view where the actually computed ontology metrics are presented. In the next iteration, I strongly recommend reviewing the usability of the tool and possibly considering a user study to show the benefits of the system and its value to users.

Minor issues:
The expression "computational actors" probably deserves a citation.
2. Gold-Standard -> gold standard
2. Why are some sentences in italics?
2.1 Vrandecic[12] -> add space before citation
2.2 … the tool suffers from the same issues as the framework [9] -> this sentence is a bit opaque; maybe repeat which issues you refer to
Figure 2 is not very insightful