Review Comment:
This review is a discussion of the differences between the first submission of the paper and the current version, and arguments as to whether or not they address the concerns raised by the reviewers in the first round.
In summary, although there are improvements, this paper does not seem ready for publication – and it should be at this stage of the process. Some of the added text does not seem to serve the SWJ audience, but rather to ease some of the reviewer concerns, and even at that it does not succeed.
It is clear that the paper has improved since the first iteration. However, it is important to note that the first version suffered from a couple of serious issues that severely hindered readability, so the relative improvement by itself is not the only indicator to be taken into consideration.
Some of the added paragraphs are indeed new text, but it is questionable to what extent they add value to the manuscript.
In particular, the added text on page 9 has multiple issues: it reads more as a defense against certain reviewer comments than as added value for the reader. Generic statements on higher recall and runtime are not backed up by evidence; furthermore, the paragraph appears twice. The "another question" is in essence the age-old problem of federated querying, which comes down to source selection. All in all, this reads like a hastily added block of text that may diminish rather than improve the value of the manuscript.
Whereas most of the language issues of the original seem to have been addressed, the added text also introduces numerous new errors. That by itself is not a blocking problem, but it may indicate an insufficient quality barrier. At this stage, the manuscript is expected to be in near-final state, and language is only one of many aspects.
Several pieces of evidence seem anecdotal rather than structural, and the conclusions should therefore be more nuanced than they currently are. For example:
> wimuQ+ReLOD is able to retrieve at least one resultset for 87 % of the overall 415 queries, which 11% more results thanks to the ReLOD approach. The results clearly shows [sic] that combining different query processing engines into a single SPARQL query execution framework lead [sic] towards more complete resultset retrieval.
I'm quite wary of self-fulfilling prophecies and the lack of repeatability in the presence of statements such as "usability study, where we conduct the study with seven PhD students from our research group"; this is a piece of questionable qualitative evidence in an otherwise quantitatively driven study. The authors seem a bit lost here as to what they want to prove and how. I'm also not sure what to make of statements such as "the resulting scores from the usability study was better than we had expected", or what they are supposed to mean for the reader.
The "promises" in the Conclusion section are quite out of place. Future work is intended to explain how other researchers can build on top of your work; instead, it reads as a list of shortcomings that the authors aim to address, which is not helpful to the reader. "We will make a better assessment" is especially unacceptable in this regard; that assessment should have been in this journal paper.