Review Comment:
First of all, thanks to the authors for considerably improving the readability of the paper. In principle all of my concerns have been addressed; only a few details remain (or stood out now that the rest of the paper is clearer and easier to read). I therefore suggest that the paper be accepted, since the authors only need to fix a few small details that would not require a new round of reviews.
I have the following minor comments:
- Since you have quite a detailed comparison with RIO at the end, you could refer to it already in section 2.
- In the grammar notation in equation 2 the arrow points to the left; shouldn't it point to the right, i.e. be read as "Ca can be replaced by any of the things on the right"? Maybe this is an insignificant detail, but to me the current direction reads as if you can replace a complex thing with a C in one step, whereas what you are after is the "recursive" construction of a complex structure. I may be overly picky here (or even wrong).
- I still have a bit of a problem with some of the definitions, or rather with how they are presented. In definitions 3.1 and 3.2, Q is an axiom pattern (singular), while in 3.3 Q is a SET of axiom patterns. If one is a single pattern and the other is a set of patterns, why reuse the same letter for both? Also, in 3.1 and 3.2 the superscript alpha represents the "of an axiom alpha" part of the definition, i.e. alpha is a possible instantiation of the pattern. In 3.3, however, Q has a superscript CF that I assume stands for class frame, but this is not mentioned inside the definition (merely earlier on that page), so I assume you mean that the class frame pattern is a pattern "of a class frame CF"? Earlier in the text you have a subscript A on the CF, which denotes the class, but in 3.3 this subscript appears on the Q instead. Next, Q appears again in definition 3.5, as an ontology pattern; why, when you already have OP to denote that? Nothing here is major; it is just that these definitions are still not entirely clear and straightforward to read, although they are much better than in the previous version.
- I do understand that the authors want to focus on what their algorithm does, not on what it cannot do. Nevertheless, I still find the discussion in section 5.4, question 4 under 6.1, and section 6.6 a bit weak, in the sense of not mentioning anything about what was not detected and why (focusing on the documented patterns). A few documented patterns were indeed detected, and that good news should of course constitute the major part of the description. However, even one or a few negative examples can say a lot about the nature of the detection, i.e. a documented pattern that you can identify manually in the ontology with the help of the paper that describes it, but that was not detected by the algorithm. It should probably be quite obvious from a manual inspection why such a pattern was not detected, and this could give the reader some valuable insights into the limitations of the approach.
- Thanks for adding the running example of Table 1; it helps a lot. However, it is still not entirely clear throughout the paper that this is what is used. You refer to Table 1 in only a couple of places, and often not to the exact axiom in one of the ontologies that you reuse for the example. I suggest going through all the examples and making sure you refer back to Table 1, and also mentioning the # of the axiom (as you do in some cases) wherever applicable. This will make the use of the example more consistent.
- Figures 5 and 7 are still quite "anonymous" and do not use the running example. Is this for readability reasons? What would happen if you used "real" node names here as well? It is not crucial, but it would be easier to follow than keeping track of a's, b's, and c's.
- It is great that you now discuss the various pattern types towards the end of the paper. However, this raises additional questions: you mention that you can detect some logical and alignment patterns as well, so do you have examples from your experimental results where this happened? Or at least where it could happen?
- Figure 14 is still a bit hard to read. Would it fit better as a table (at least part b), since it has so many subparts?
Overall a nice paper that I am happy to have reviewed!