Review Comment:
I thank the authors for their response and their revised submission. However, the main points I raised in my original review have not been addressed adequately. Specifically, these concerns pertain to:
1. the ontology's general description and purpose,
2. the ontology's technical presentation, and
3. the ontology's evaluation.
I will provide more detailed explanations of these concerns and elucidate why I find the authors' response and revisions insufficient in resolving the issues I previously highlighted.
# 1 The Ontology's General Description
In my initial review, I asserted that the presentation of the ontology lacked precision and scientific rigor. Specifically, I wrote that "there are no concrete statements that could be tested or verified."
It remains unclear to me
- what kind of anomalies can be detected,
- what kind of analysis is supposed to be done in terms of root causes,
- and how exactly the ontology would help with either of these tasks.
The authors' response, unfortunately, does little to alleviate this ambiguity. The authors responded to this point by writing
> "Expanding on the use of the ontology for anomaly detection, the principle is that it sometimes enables easier detection through queries on the graph when integrating heterogeneous data directly provides a 'complete' description of the anomaly (which was not possible without integration)."
This response exemplifies the pervasive lack of clarity and specificity throughout the paper. The gist of what the authors write is that 'the ontology enables anomaly detection for heterogeneous data through queries.' However, they fail to substantiate this claim with concrete examples or detailed explanations of the underlying mechanisms. Given the ontology's relatively simple structure, comprising 225 mostly basic RDFS axioms, it should be feasible to offer illustrative examples of such queries to demonstrate the ontology's intended use. Yet, no concrete argument is made and virtually no details are provided in terms of how the ontology enables anomaly detection through queries.
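To make the missing illustration concrete: even a single sketched query would substantiate the claim. The following hypothetical SPARQL query (all class and property names, including the `ex:` namespace, are invented for illustration and are not taken from NORIA-O) shows what "easier detection through queries on the graph" could look like once alarm and ticketing data are integrated:

```sparql
# Hypothetical sketch: find devices that raised a critical alarm but have
# no associated trouble ticket --- an "anomaly" that only becomes visible
# after integrating monitoring data with the ticketing system.
PREFIX ex: <http://example.org/noria#>   # placeholder namespace, not NORIA-O's

SELECT ?device ?alarm
WHERE {
  ?alarm a ex:Alarm ;
         ex:severity "critical" ;
         ex:raisedBy ?device .
  FILTER NOT EXISTS {
    ?ticket a ex:TroubleTicket ;
            ex:concernsDevice ?device .
  }
}
```

An example of roughly this shape, together with a description of the heterogeneous data sources being joined, is the kind of concrete evidence I asked for in my original review.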
In the same vein, the paper is full of similarly verbose formulations that ultimately have little substance and are not supported by convincing arguments. For instance, the opening sentence of the conclusion reads
> "we have presented NORIA-O, an ontology for representing network infrastructures, incidents and maintenance operations on networks [...]."
The main keywords here are "network infrastructures", "incidents", and "maintenance operations". However, the term "maintenance operation" appears only twice in the entire paper: once in the abstract and once in the conclusion. Even searching for the term "maintenance" yields only four matches: two of these are the occurrences of "maintenance operation" just described, and two more are due to "corrective maintenance actions". Needless to say, the expression "corrective maintenance action" is also neither defined nor described in any more specific way --- nor is the relation between the two expressions made clear.
Similar shortcomings pervade other essential terms, rendering the paper incomprehensible even with the most charitable interpretations of the terminology employed. So, in its current form, the paper falls short in terms of illustration, clarity, and readability --- namely the main aspects of an ontology description submission at SWJ. These shortcomings are particularly surprising considering the relative simplicity of the ontology provided.
# 2 The Ontology's Technical Presentation
The authors write in their response that
> "R2 argues that we haven't axiomatized enough, [...]"
This interpretation is not correct. I asked for a detailed description of the ontology in terms of its axioms. Furthermore, I asked for explanations and clarifications of specific axioms and modeling choices that appear questionable to me. Unfortunately, the authors did not offer any clarifications or explanations, and the revised submission still does not include a sufficiently detailed description of the ontology in terms of its axioms (I have seen the additional paragraph on page 6; however, stating that three commonly used OWL/RDFS constructors are used in NORIA-O does not adequately address the issue). I reiterate that this is crucial since the authors assert that the ontology enables "reasoning" throughout the paper, namely in Sections 1. Introduction, 2. Related Work, 3. Methodology, 4. Formalization and Implementation, and 7. Conclusion. My question as to what kind of "reasoning" the ontology enables, and how it plays a role in practice, has likewise not been addressed.
The authors' response continues
> "[...] so it seems complicated for R2 to understand how the ontology works (how it helps solve the problems of root cause analysis and incident management) [...]"
This is also not quite right. Even though the ontology's design is not sufficiently described in the paper, the ontology is publicly available, allowing me to inspect it. The issue at hand is not that the ontology is complicated; rather, it is how a relatively small ontology --- predominantly built using basic RDFS constructs, many of which are used in a questionable manner --- can effectively contribute to addressing the seemingly involved task of "Anomaly Detection and Incident Management in ICT Systems" (quoted from the title). So, I asked for more tangible arguments and concrete statements about both the design of the ontology (in terms of its axioms) and the design intentions behind it. More specifically, I inquired about six concrete modeling choices, such as the rationale behind not classifying a "CommunicationDevice" as a subclass of "Device". None of my general or more specific questions have been answered by the authors' response, and the new submission does not offer any insights in this regard either. Consequently, I remain skeptical about the ontology's use in practice.
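To illustrate why this particular modeling choice matters in practice: whether "CommunicationDevice" is a subclass of "Device" directly determines what the advertised RDFS "reasoning" can entail. A minimal hypothetical example (names and the `ex:` namespace are invented for illustration, not taken from NORIA-O):

```sparql
# If the ontology asserted ex:CommunicationDevice rdfs:subClassOf ex:Device,
# an RDFS-entailing endpoint --- or the explicit property path used below ---
# would return every communication device when querying for devices.
# Without that axiom, such instances are silently missed.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/noria#>   # placeholder namespace

SELECT ?d
WHERE {
  ?d rdf:type/rdfs:subClassOf* ex:Device .
}
```

Explaining, for each such choice, which entailments are intended (or deliberately avoided) is what I mean by a description of the ontology in terms of its axioms.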
# 3 The Ontology's Evaluation
The assertion that
> "[R2] finds that validation by Authoring Tests (ATs) is too weak of evidence of validity"
is a misrepresentation of my main point. However, the authors are correct insofar as that I also have doubts about the provided evidence of validity --- a point I will come back to later.
My central point was the absence of any form of evaluation in the paper, despite a dedicated section labeled "Evaluation." I reiterate that "testing" and "evaluation" are not synonymous --- neither in software engineering nor in ontology engineering. While tests can validate certain aspects of a design specification, an "evaluation" should demonstrate a) the ontology's relevance within its intended context, and b) the satisfactory fulfillment of its design goals. The paper presents *no evidence* for either a) or b) and should not include a misleading section titled "Evaluation".
Coming back to my reservations regarding "validity": the authors did not address my observation that "the ontology only meets about 60% of the formal design requirements derived from the formulated competency questions." I continue to believe that it is important to clarify why this is acceptable and why the ontology would still be relevant and useful for its intended purpose.
# Overall Impression
The new submission only includes minor changes to the original submission. Unfortunately, however, the changes in the new submission do not address any of the shortcomings I pointed out in my original review. Moreover, the authors opted to stay silent on virtually all of my specific technical inquiries concerning peculiarities within the published ontology and its design.
Overall, I am not convinced that the authors have any intention of improving their work w.r.t. the concerns I raised. Consequently, I recommend that the paper be rejected.