Review Comment:
Overall evaluation
Select your choice from the options below and write its number below.
== 3 strong accept
X 2 accept
== 1 weak accept
== 0 borderline paper
== -1 weak reject
== -2 reject
== -3 strong reject
Reviewer's confidence
Select your choice from the options below and write its number below.
== 5 (expert)
X 4 (high)
== 3 (medium)
== 2 (low)
== 1 (none)
Interest to the Knowledge Engineering and Knowledge Management Community
Select your choice from the options below and write its number below.
== 5 excellent
X 4 good
== 3 fair
== 2 poor
== 1 very poor
Novelty
Select your choice from the options below and write its number below.
== 5 excellent
X 4 good
== 3 fair
== 2 poor
== 1 very poor
Technical quality
Select your choice from the options below and write its number below.
== 5 excellent
X 4 good
== 3 fair
== 2 poor
== 1 very poor
Evaluation
Select your choice from the options below and write its number below.
== 5 excellent
== 4 good
X 3 fair
== 2 poor
== 1 not present
Clarity and presentation
Select your choice from the options below and write its number below.
== 5 excellent
X 4 good
== 3 fair
== 2 poor
== 1 very poor
Review:
The paper entitled 'The uComp Protege Plugin: Crowdsourcing Enabled Ontology Engineering' presents a plugin for the Protege ontology editor that enables the use of some common crowdsourcing strategies by directly embedding them (i.e., making them deployable) in the leading ontology engineering tool. The topic is interesting, important, timely, and a good match to the EKAW 2014 call. I would not rate this as a classical research paper but rather as a mix of a survey and a tools & systems paper. This is not so much an issue for the EKAW-only track, but it needs to be carefully considered for a potentially extended Semantic Web journal version.
The paper is well written and structured. It is easy to follow, and the provided tables and figures are clear and contribute to the understanding of the material. One can argue whether Figure 1 is really necessary, but this is just a minor detail.
The paper is well motivated and provides a good overview of the related work. Section 2 is excellent, and Table 1 gives a good overview of genres and addressed tasks. For a potential journal version, this part could be substantially extended to better work out the differences between some of the approaches. For Section 2.1, I would like to see a comparison with tasks that cannot easily be crowdsourced. Again, this is more relevant for a potential future version. I will discuss some problems with the current presentation of 'ontology engineering' below.
Section 3 is the weakest part of the paper. While I admit that writing about the functionality and interface of a tool is difficult, this section should be reworked. A walk-through style with one consistent example may have been more appropriate. This part also raises the question of whether a tool like uComp should be embedded in Protege at all. I would like to see better argumentation here.
Section 4 provides the evaluation across the four dimensions time, cost, quality, and usability. While the evaluation is detailed and conceptually well worked out, it is also naive. Who are the eight ontology engineers mentioned, and which domain do they work on? The presented examples are toy ontologies at best; what do we learn from them about ontology engineering in the wild? Which test population rated the usability: the engineers?
My main concern with the paper and the evaluation, however, is the lack of a critical perspective. The crowdsourcing approach may work well for a 'default meaning' such as 'Physics is a Science' or 'Car hasPart Engine', but it is very unlikely to work for more complex domains (and the required axioms). More importantly, we would not need ontologies at all if we could simply agree on a canonical, simplistic, realism-like definition of ontological constructs [if so, we could simply hard-code them]. How can we expect the crowd to understand terms such as /Biodiversity/ (to stay with the authors' climate change example) or /Deforestation/ when they differ within and between domains? The term /Forest/, for instance, has several hundred legally binding definitions across countries and communities. Asking anonymous crowd workers from all over the world will only lead to a semantically empty compromise. Ironically, the 'quality' evaluation in the paper points to exactly this problem: the inter-rater agreement among the experts is low, while it is high among the crowd workers. This should have raised red flags for the authors. In fact, this is a very common phenomenon among ontology engineers and even more so among domain experts. Likewise, it is ironic that the authors selected climate change as an example, as it is among the most debated and misunderstood topics in science and society.
The question is how important this is for the overall quality of the paper. I would go so far as to claim that the examples provided in the paper could have been worked out using WordNet. On the other hand, this is a very nice tool and a solid paper on an emerging and important topic. Therefore, I rate the paper as an accept for EKAW and propose to discuss with the authors what a potential SWJ version could look like (there is certainly potential for an extended version).