Ontologies for Observations and Actuations in Buildings: A Survey

Tracking #: 2285-3498

Iker Esnaola-Gonzalez
Jesús Bermúdez
Izaskun Fernandez
Aitor Arnaiz

Responsible editor: 
Guest Editors Sensors Observations 2018

Submission type: 
Survey Article
A wise option promoted by recent approaches is to design networks of complementary ontologies. However, different points of view are possible, and such diversity could lead to interoperability problems. This article advocates for a networked ontology infrastructure conceived on a principled basis and guided by documented, judicious conceptualizations. In this regard, this survey focuses on ontologies involved in conceptualizations of observations and actuations, where the utility of such a conceptualization arises when some features of interest need to be observed or acted upon. Spaces and elements in the building environment have emerged as platforms where materializations of such observations and actuations promise to be very profitable. For each of the reviewed ontologies, the fundamentals are described, potential advantages and shortcomings are highlighted, and the use cases where these ontologies have been applied are indicated. Additionally, use case examples are annotated with different ontologies in order to illustrate their capabilities and showcase the differences between the reviewed ontologies. Finally, this article tries to answer two research questions: Is there a firm basis, broadly accepted by the community, for the development of such a networked ontology infrastructure? Which ontologies may be considered helpful towards that goal?
Minor Revision

Solicited Reviews:
Review #1
By Michel Böhms submitted on 16/Sep/2019
Review Comment:

All my previous comments were more or less taken into account. Great paper, fully accepted now.

Review #2
By Eva Blomqvist submitted on 09/Oct/2019
Review Comment:

As this is the third time I review this submission, I do not go into details on its suitability as a survey article etc. since I have already in earlier reviews confirmed that those criteria are fulfilled. At this point I mainly want to confirm that the authors have sufficiently addressed all my previous comments, hence I would suggest that the article is now accepted for publication.

Only some small unclear points/language issues remain, hence, in order to further improve the impression of the article, the authors may choose to do some small editorial changes in their final version:

In the abstract: each of the reviewed ontology -> each of the reviewed ontologies

At the bottom of page 1 the term "the Web of Things" is used but not explained. How does it differ from the Internet of Things, mentioned earlier in the same paragraph?

Middle of page 2: "It has been proved that..." - "Proved" is too strong a word; it has not been formally proven. I suggest saying "it has been shown" instead.

Unclear sentence on page 2: Moreover, it is hardly expected that these ontologies share the conceptualization of the core elements.

On page 3: It is not clear whether "An actuation can be similarly defined as an event or activity, the result of which is a change of state of a quality of a feature of interest, achieved using a specific procedure." also refers to things defined in the ISO standard, or to optional extensions.
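For reference, the quoted actuation definition maps closely onto terms that do exist in the W3C SOSA/SSN ontology. A minimal Turtle sketch of that reading (the ex: names are hypothetical, chosen only for illustration):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ex:   <http://example.org/> .

# Hypothetical actuation: changing the state of a quality of a feature of interest
ex:act01 a sosa:Actuation ;
    sosa:actsOnProperty       ex:heatingSetpoint ;  # the actuatable quality
    sosa:hasFeatureOfInterest ex:room01 ;           # the feature of interest
    sosa:usedProcedure        ex:procedure01 .      # the specific procedure
```

Whether this reading also matches the scope of the ISO standard, or only an optional extension of it, is precisely what the text should make explicit.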

Further on page 3: "To begin with, any observation is a dul:Event..." - maybe this is too strong a statement? I agree that many ontologies model it in this way, and most people would agree with this conceptualisation, but the text could make clearer that this may be an opinion of the authors, supported perhaps by some consensus in the community.

Page 3 again: "(that xsd:_ value is a structured data including numbers and strings as required)." - what do you mean by "is a structured data"?

Table 1: CQ9: "What is a floor?" is quite an ambiguous CQ. There could be many "correct answers" to this CQ, e.g. is it the type in the ontology, e.g. owl:Individual, that would be the answer? From the text later on it is clear that you are after the class/type of this floor instance from the ontology, and you envision that there should be an explicit representation of a floor there. However, at this stage in the paper the question just seems strange.

Page 4: "procedure used by sensor sensor01." either "procedure used by sensor01" or "procedure used by the sensor sensor01"

First paragraph of section 2: "deserve" is maybe not the right term, since it implies some subjective valuation of the properties. I would say something more like those features of interest that are commonly required to be observed.

Second bullet on page 5: should it be OSRD? Following the name in parenthesis the order would be ORSD, but maybe they've switched it?

Third bullet in the same list: "Being the guidelines..." is a strange start of a sentence.

Section 3.1.1: in all the other cases you use the name/acronym of the ontology as the prefix. Following this convention you should use ssno: as the prefix here and not oldssn:

I am not sure I completely follow the discussion on CQ3 and 4 regarding the SSNO, and similar for SOSA/SSN. To me this still seems to be a requirement of a specific modelling style, that is only applied by SEAS, but that also brings some downsides. I think this could be still clarified more in the paper.

Bottom of page 8: "Neither are covered related..." should it be "Neither are covering related..."?

Page 12 and page 20, WSN is introduced twice, but never used after that.

UCUM example on page 15: isn't it the case that one can use the hasSimpleResult directly on the observation? Why having the result instance at all here?
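The alternative the reviewer raises corresponds to SOSA's sosa:hasSimpleResult datatype property, which attaches a literal directly to the observation. A hedged Turtle sketch of the two patterns (the ex: names and the temperature value are illustrative, not taken from the paper):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

# Pattern 1: explicit result node, as in the paper's example
ex:obs01 a sosa:Observation ;
    sosa:hasResult ex:result01 .
ex:result01 a sosa:Result .

# Pattern 2: literal attached directly, as the reviewer suggests
ex:obs02 a sosa:Observation ;
    sosa:hasSimpleResult "21.5"^^xsd:double .
```

The separate result instance is usually kept when a unit of measure (e.g. a UCUM unit individual) must be attached to the result; with sosa:hasSimpleResult the unit information would have to be carried by the literal's datatype instead.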

Time Series Databases are mentioned on page 16 without any reference, link or explanation.

Section 4, first paragraph: "...such problems appear to be too much frequent." is not good English; maybe: "...such problems appear to be very frequent."

Following sentence: "that it happens so often" - what happens so often? The means for correct download?

Last paragraph of the discussion/conclusion: "worth being discussed" sounds like the others are not even worth discussing, which is a bit harsh. And in the next sentence "the most adequate ontologies..." - according to what criteria? Maybe better phrased as the authors suggesting that these are the ontologies for continued development.

Review #3
By Ana Roxin submitted on 26/Nov/2019
Review Comment:

According to the journal guidelines, given the fact that this manuscript was submitted as 'Survey Article', it will be reviewed along the following dimensions:

(1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic.
- Introduction was not re-written, as requested by the reviewer. Only the abstract and the keywords were updated.
- The target audience and benefits (goals) were copied/pasted from the reviewer guidelines: "researchers, PhD students, or practitioners, to get started on the covered topic". It is unclear how this aligns with the "non expert people" that are also targeted by this article (mentioned in the response to reviewers).
- The covered topic should be related to observations and actuations in buildings, but it is neither defined nor stated clearly. The related research issues are not presented. Context is addressed (but quite marginally) and without a thorough justification/discussion of why context is important and which context elements pertain to observations and actuations.

(2) How comprehensive and how balanced is the presentation and coverage.
- Low level of comprehensiveness: This is not really a survey article; it is merely an extended related work section of the initial submission. The overall scenario that serves as a "grounding" for the whole article is the one used for introducing the EEPSA ontology (1st version of the paper). Thus, the authors want to underline the fact that existing ontologies fail to address that specific scenario. While that may hold for the EEPSA ontology (still, the justification was not solid enough in the 1st version), such a specific scenario involving specific competency questions is not suitable for a survey. The article thus does not deliver a "critical view": the code examples illustrating how the considered use case can be modeled with the evaluated ontologies do not bring much to the overall survey. The code examples are closely tied to the EEPSA ontology, which is no longer addressed or mentioned in the paper at hand.
- The article is not balanced. It is a mix of contributions made in several domains (BIM, IoT and context-aware computing) which lacks a clear rationale. The ontologies are gathered approximately. It is still unclear how the 5W** questions help the understanding (mentioned at the beginning regarding DUL, then only referred to when discussing context ontologies - why?). The framework used for reviewing the ontologies is neither formal nor explicit and, more importantly, is biased by the authors' final goal, which is the EEPSA ontology.

(3) Readability and clarity of the presentation.
- Three revisions helped improve the readability.
- Still, for a computer-science survey article, the article should present clear and specific facts and measures. The final output should allow someone from the targeted community to easily choose among the existing ontologies, e.g. which ontology for which use case. The article at hand does not allow that.
- Clarity needs improvement. Addressing all comments made by the reviewer could have helped, e.g. re-writing the introduction.

(4) Importance of the covered material to the broader Semantic Web community.
- A few ontologies were added to the list of the ones described in this 3rd version of the paper. Still, they bring little novelty; e.g., SmartEnv is mentioned, but the authors add that it models the considered use case with the same triples as SOSA/SSN.
- The list of competency questions is derived from the needs behind the EEPSA ontology and does not pertain to a common research issue. As such, the results are only useful for justifying the existence of EEPSA; they will not help the broader Semantic Web community. The reviews provided for the considered ontologies do not bring novel findings or insights that someone from the targeted community cannot already obtain from what exists on the Web.
- The methodology is still neither formal nor explicit: a) conceptualizations for observations and actuations are missing; b) there is no exhaustive list of CQs derived from analyzing the targeted research domain (it is really the CQs that justify EEPSA); c) there are no details about alignments/mappings; d) the same framework is not used for evaluating all the ontologies, e.g. license information is missing from the discussion of SmartEnv.