Review Comment:
This is a revised version of a previous submission. The previous submission was submitted as a full paper but was actually a combined survey and ontology paper, which has now been split into two papers, where the survey is the one under review here. I think that the authors have done a good job at splitting the paper, extending the survey part with examples etc. So overall I would suggest that the paper is accepted. However, there are a few minor things that could be improved for the final version.
First of all, I really appreciate the additional examples in the paper, which really help both to illustrate the differences between the ontologies and to tie the various parts of the survey together. I do think, however, that one final improvement could be made by better explaining how the examples of the various sections can be combined to solve an overall modelling problem; as it is now, this is not entirely clear. Especially for the building topology example, it is not obvious why we need to describe rooms and floors in order to perform/record/analyse observations and actuations. Maybe a short description of the overall "running example" could be put into section 1, in connection with the discussion of DUL relations, where all the example sentences are introduced together with a short explanation of how they work together to solve a particular problem.
Further, I appreciate the discussion of the survey scope and criteria listed in section 2. However, the authors there say that not all criteria are mandatory and that few ontologies fulfil all of them, which again makes it a bit unclear how the selection was made. If some criteria were considered mandatory while others were optional, please state that. I assume, for example, that accessibility is a mandatory criterion, while evidence of use seems not to be mandatory, since several ontologies in the survey lack any such evidence. Moreover, the authors state that LOV was the main source for finding ontologies, but that research paper databases were also used. Were the same keywords used when searching the databases? How was the selection made there? A very small note: I would suggest changing the tense in this section to past or present, rather than future, since the future tense makes it sound as if the survey had not been done yet.
In 3.1 an additional set of criteria for ontologies is then presented (ODP-based, modular, model-based, and reengineering-based) and used for the survey of observation and actuation ontologies. These criteria are then used as the values in one of the columns of Table 1. My objection here is that the criteria are not mutually exclusive; rather, several of them are related or even overlapping. For instance, being ODP-based usually (although not necessarily) also implies being modular, and reengineering-based could hold together with any of the other criteria. SOSA/SSN, for instance, is definitely both ODP-based AND modular. Hence, I would suggest either that each ontology can have several values in this column, or that each criterion gets its own column with a "yes/no" entry, as in the other tables. An additional question is why these criteria are only applied to the observation/actuation ontologies; as far as I understand, they could just as well be applied to all the ontologies of the survey.
Regarding sections 3.1.1 and 3.1.2, I think that the differences between the two SSN versions could be described a bit better. There is one very important difference in how they treat observations, which changed from being a dul:Situation into being considered an event (aligned to dul:Event) in the SOSA model, which should definitely be relevant to this paper's discussion (also related to the DUL discussion in section 1) but is not so clearly mentioned in these sections now. This could also be worth mentioning in the summary section, 3.1.10.
It is also the case that not all the criteria are mentioned in the text for each ontology; some things are only presented in the tables. It would be better if each criterion were both mentioned in the text AND in the tables. For instance, there is no mention of SSN's use in section 3.1.1, but in Table 2 there is a "Yes" in that column. Also, the license is mostly mentioned only when it is not present.
Throughout the paper the terms "ontology" and "vocabulary" are used without any further discussion of their meaning. While I think that "ontology" does not require further discussion, the term "vocabulary" is not used in the same way by all communities. While the linked data community usually equates "vocabulary" with "ontology", in this paper I get the impression that "vocabulary" perhaps means something more like a code list; or am I misinterpreting it?
Further, in section 3.1.2 it is not clear why this ontology suddenly gets a longer example (as does the one in 3.1.5), covering more than just the example sentence at the start of the section. Is there a certain point that needs to be proven by this? Should the longer example then be used for all the ontologies instead of the short one? Also, I am not sure that the Turtle notation in the example is correct: as far as I know, you can only use lists in the predicate and object positions, not for the subjects of a triple, as in the last two statements.
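To make the Turtle point concrete, here is a minimal sketch using hypothetical example IRIs (not taken from the paper): the ";" and "," shorthands allow lists in the predicate and object positions for a single subject, but there is no analogous shorthand for listing several subjects before one predicate-object pair.

```turtle
@prefix ex: <http://example.org/> .

# Valid: ',' lists objects, ';' lists predicate-object pairs.
ex:sensor1 ex:observes  ex:temperature , ex:humidity ;
           ex:locatedIn ex:room1 .

# Invalid: Turtle defines no shorthand for a list of subjects.
# ex:sensor1 , ex:sensor2 ex:locatedIn ex:room1 .
```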
Is the alternative modelling mentioned at the end of 3.1.2 to model the location of the sensor? Or something else?
Section 3.1.3: isn't it a bit strange to think about a sensor as a process? Or is the reader perhaps supposed to consider the sensing process rather than the physical sensor here? If so, the naming in the example seems a bit misleading.
Regarding section 3.2 I think that it should be made much more clear that this is a different kind of survey than in the other two parts. As I interpret it this is more an overview of the main standards/most used ontologies to represent contextual information, rather than really a survey (using the same selection criteria as listed in section 2). I think that this section is still very useful, and a good addition that completes the paper and makes it even more useful to the reader. However, it needs to be clear that there is a difference between the sections.
In Table 2 it is stated that SmartEnv is missing alignments, while I believe it is in parts aligned to both SOSA/SSN and DUL.
Section 3.2.1: are there any (popular) alternatives to OWL Time?
The example in section 3.2.2 is missing a period at the end.
The discussion on BIM vs. IoT data in section 3.3 (including footnote 48) is still not so clear to me.
Why is the development tool of ifcOWL relevant (in section 3.3.1 footnote 54)? Development tools are not mentioned for any other ontology.
The example in section 3.3.8 is in the opposite order compared to the other ones, i.e. with the room first. Why?
When listing the building ontologies: I recently came across one that might be worth adding to the list, namely RealEstateCore (see [1], [2], and a forthcoming accepted resource paper at ISWC 2019). Since this is quite new I did not expect the authors to have considered it; I mention it just for information, and it could be included in the final version of the paper if possible.
The paragraph about vocabularies in the discussion section (section 4) is not clear to me.
There are still a few typos throughout the paper, but they can be fixed with some proofreading.
Finally, summing up and relating to the criteria of survey papers submitted for SWJ: (1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic. (2) How comprehensive and how balanced is the presentation and coverage. (3) Readability and clarity of the presentation. (4) Importance of the covered material to the broader Semantic Web community. I find that this paper does a very good job at (1), (2), and (3), as already mentioned above. Regarding (4) the material is not targeted at a broader Semantic Web community, but rather towards a specific domain, which I think is also fine, especially since the paper was submitted to the special issue of sensor observations, where it fits very well.
[1] https://www.realestatecore.io/download
[2] https://doc.realestatecore.io/2.3/core/index-en.html