Review Comment:
This manuscript was submitted as 'Tools and Systems Report' and should be reviewed along the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.
This is a re-submission of a major revision that describes the SPARQLES system for monitoring the maturity of public SPARQL endpoints. I have read through the new manuscript as well as the attached response to reviews. I am happy with most of the responses, and it is particularly good to see the system live again and be able to test the described functions.
It is also good to see the more detailed descriptions of the system's impact and of the system itself. However, I still feel the submission could benefit from some very minor clarifications.
In my original review I raised a question about the origin of the four chosen criteria. This has been clarified much better in the resubmission. However, I feel that the response to reviews gives an even more honest answer to this question, and readers could benefit from that information, particularly the following paragraphs:
- That said, while the resulting dimensions were not the result of an empirical study or a user survey, we feel that these four dimensions provide a comprehensive overview of the maturity of a given public SPARQL endpoint.
- While we consider these dimensions comprehensive and useful, we do not claim that they – nor the tests we perform to partially quantify them – are complete. We do believe, however, that they are useful and important for the community to be aware of. We also remain open to suggestions from the community with respect to new types of aspects or analytics to perform.
These paragraphs do not need to be included verbatim, but it would be good for readers to be aware of the open-ended nature of the criteria, which could also inspire future research on this topic.
Secondly, in the sustainability section, the authors could add more discussion of the sustainability of the service/system as well as of the code base. The code base seems clearly covered, as it is openly accessible. But what about the SPARQLES system itself? What happens if funding to host the system runs out? Is there anything preventing others from hosting a mirror of the system? Is there a robust mirroring mechanism to sustain its availability?
Finally, as a reviewer who has read the paper both as a user and as a developer, I think there is still a lot of room for further usability and functionality studies. The analytics data presented in Section 7.1 could be biased, because the "availability" pane is the first one presented on the front page, which may naturally attract more traffic. The colored icons are helpful, but there is no way to sort the endpoints by any column. A way to combine these criteria to search for, or prioritise among, alternative endpoints could also be very useful. The system could provide a fruitful playground for rigorous HCI studies toward a tool that is truly useful to the community. But do the authors have the capacity for this in their future work plan? The authors mention that feedback is managed in the open issue tracker, but without any knowledge of future funding, it would be good if they could expand a bit more on their plans for making the tool more usable. The authors could also consider including some of the relevant user feedback as an appendix to the manuscript, in order to make the argument more complete.