SPARQLES: Monitoring Public SPARQL Endpoints

Tracking #: 1005-2216

Authors: 
Pierre-Yves Vandenbussche
Juergen Umbrich
Aidan Hogan
Carlos Buil-Aranda

Responsible editor: 
Jens Lehmann

Submission type: 
Tool/System Report
Abstract: 
We describe SPARQLES: an online system that monitors the health of public SPARQL endpoints on the Web by probing them with custom-designed queries at regular intervals. We present the architecture of SPARQLES and the variety of analytics that it runs over public SPARQL endpoints, categorised by availability, discoverability, performance and interoperability. To motivate the system, we give examples of some key questions about the health and maturation of public SPARQL endpoints that can be answered by the data it has collected in the past year(s). We also detail the interfaces that the system provides for human and software agents to learn more about the recent history and current state of an individual SPARQL endpoint or about overall trends concerning the maturity of all endpoints monitored by the system.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Ivan Ermilov submitted on 16/Apr/2015
Suggestion:
Major Revision
Review Comment:

This manuscript was submitted as 'Tools and Systems Report' and should be reviewed along the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

Submission type:
Tool/System Report

This submission presents SPARQLES, a system for monitoring public SPARQL endpoints. The motivation for creating such a tool was to measure the availability, discoverability, performance and interoperability of SPARQL endpoints. This work is based upon previous works of the authors: one describing the prototype of the SPARQLES system (ISWC demo, citation [18] in the paper) and the other describing the measurement of the above-mentioned issues as a once-off experiment (ISWC 2013, citation [6] in the paper). This paper describes the SPARQLES system in finer detail, providing more insights into the architecture of the application as well as discussing analytics over the collected data. It also considers the limitations of the analytics in the context of the collected data.

As the first version of the system has been running since February 2011 (though measuring only availability until November 2013), SPARQLES can be considered a mature system. The public version of the SPARQLES software has been available since June 16, 2013, although we can distinguish two main active periods of development [1]: Jun-Dec 2013 and Oct 2014-Feb 2015 (overall 12 months), which again contributes to the maturity of the system.
On the other hand, the build/deployment process for the tool is not documented. The only statement related to the documentation is: “Build using maven.” The problems I ran into when trying to build the project using mvn were:
-- pom.xml had one outdated dependency for jena-arq (I created a pull request to fix this issue)
-- there is a special Maven plugin, appassembler, which generates a ./bin folder with the sparqles script, but it seems to be adapted for the Cygwin console (this should be reflected in the documentation)
-- I wasn’t able to start the NodeJS server (i.e. the front-end); it throws an error message when requesting the index page (although MongoDB seems to be populated with test data). I submitted an issue to the GitHub tracker: https://github.com/pyvandenbussche/sparqles/issues/36
Although, by investing more of my time, I could figure out how the project works and deploy it on my own, I think this should be part of the documentation in the repository (or wiki). There should at least be a brief list of instructions to help set up a development environment on a local machine (given, of course, that the user has knowledge of Java, NodeJS, Maven and MongoDB).
Also, the high-level architecture (Figure 1) could be extended. As I understood from the repository, there are two distinct tools: a front-end written in JS/NodeJS (and running with ExpressJS) and the SPARQL analytics written in Java (and run from the command line?). What connects those two tools is MongoDB, which stores the analytical data produced by the SPARQL analytics tool. I may be wrong though, because there is no explanation of this topic in the paper.

The web portal provides up-to-date information; however, the “performance” page (http://sparqles.ai.wu.ac.at/performance) seems to be broken. It shows only the list without any stats:
https://dl.dropboxusercontent.com/u/4882345/sparqles-performance.png

Are there any reported use cases of the tool by third parties? This point has to be clarified.

In general, the paper fits the call and is easy to read. However, it needs a major revision to fix the mentioned issues. Please, see more comments below for particular sections.

Major edits:
Introduction
“Hundreds of Linked Datasets have been made publicly available in recent years”. According to LODStats (as of 06.04.2015), we can talk about thousands of Linked Datasets. Governments have started publishing quite a lot of RDF data. LODStats includes the data from the publicdata.eu (European government data) and data.gov (U.S. government data) portals, which results in a number bigger than the one declared for the LOD Cloud (which only counts the datahub.io portal).
Citation [13] by Jentzsch et al. is outdated. The latest state of the LOD cloud was published in 2014; the citation needs to be updated.
Analytics
page 4: “... we filter dead endpoints from consideration and focus on 344 live endpoints”. It needs to be clarified exactly how many endpoints were filtered out (or what percentage remained).

Minor edits:
Abstract
To motivate the system, we gives… → To motivate the system, we give…
Introduction
“The system has been online for the past year…” → it is necessary to add “since … (date here)” to make this point clear.
3.4 Interoperability
page 10: in the list after “we group these queries according to their purpose, as follows:” → “Tests” should be lower-case, and a comma is missing between MIN and SUM.
Misc
Figures 4, 5, and 6 are not readable if printed on a B/W printer. In particular, in Fig. 4, ASKs is not distinguishable from ASKo (another figure has the same problem). In general, the figures should be checked for readability in black and white.
Figure 9 is of too low quality; the fonts are hard to read.
In Figure 10 (discoverability), the endpoint boxes are hard to read when printed on a B/W printer.

[1] https://github.com/pyvandenbussche/sparqles/graphs/contributors

Review #2
Anonymous submitted on 20/Apr/2015
Suggestion:
Major Revision
Review Comment:

This manuscript was submitted as 'Tools and Systems Report' and should be reviewed along the following dimensions: (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided). (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

This submission describes a public service for monitoring the health, discoverability, and other key features of public SPARQL endpoints. The service has been up and running for over a year and seems to have received reasonable interest from the community, with ~500 unique visitors per month. As far as the reviewer is aware, this is a unique service that provides comprehensive monitoring of public SPARQL endpoints.

The paper is generally well written and very easy to read. However, as a tool paper, its major issue is the lack of evaluation. The authors proposed a set of dimensions for monitoring SPARQL services, but it is unclear how sufficient they are, for what kind of users, and for what kind of needs. There was no UI screenshot in the submission, and the reviewer was unable to access the public service at the time of review. How easy is it for target users to use the UI and find the information that they need? Are these limitations of the current development? Finally, how robust and scalable are the APIs provided by SPARQLES? At the time of review, the system (as well as the one hosted at the alternative URL) was not accessible. This naturally leads to the question of how robust the whole system is and what kind of mechanism there is, or will be, in place to ensure its stability. All these questions put the quality of the presented tool in doubt.

Another major issue with the submission is that it does not provide sufficient detail for a tool/system paper. The reviewer found it hard to get a full grasp of how the system can be used or how the UI can be interacted with. Are the example questions given in Section 3 the sort of questions that a user expects to be supported by the system? The paper still reads a bit like a mixture of a research paper and a tool paper. The reviewer thinks that the content needs to be better balanced, and it may benefit from a restructuring, particularly of Section 3.

Apart from the major problems, the paper also has some minor issues:

1. If this is submitted as a tool or system paper, should there be a justification of where the list of features came from? Were they based on a survey of users’ needs or an empirical study? The work could have been better motivated.

2. The topic of monitoring web services has been extensively studied by the Web service community. Although the authors can argue that some unique features of SPARQL services do require a new monitoring system, I think this could have been better described and argued in the paper. It could particularly impact the range of dimensions considered by the system.

More detailed comments below:

1. Page 2 (Section 1): The reviewer thinks that each factor (like availability and discoverability) could benefit from a clear, explicit definition, to show the kind of computation used to measure each factor.

2. Page 3 (Section 3.1): Why was the availability of a service monitored on an hourly basis? Is this not too frequent, and would it not pose too much stress on the storage?

3. Page 3 (Section 3.1): Why was a SPARQL query needed to test the availability of a service, rather than a simpler mechanism such as ping? (A sketch of such a probe is given after this list.)

4. Page 4 (Section 3.1): I found the purpose of the set of research questions used in each Section 3.* a bit confusing. Where do these questions come from? Can users find answers to these questions through the current public system? Why would users not expect other sets of questions?

5. Sections 2 and 3 need a bit of restructuring. Part of Section 3 also covers how the data was collected; would this not be a better fit for the description of the system implementation? Currently, Section 3 is a mixture of what your system managed to collect and what it can manage to analyse. It is not just about analytics, as the section title suggests.

6. Page 4 (Section 3.2): I can understand why the authors chose VoID and SD as their monitoring targets. However, should this part of the work focus more on the discovery capability that is desired by target stakeholders rather than on two specific vocabularies? What are the justifications for this approach? Again, a clear definition of discoverability could help.

7. Page 7 (Section 3.3): In terms of performance evaluation, have the authors considered the throughput of services? Why not?

8. I found the description of the various interfaces provided by SPARQLES inadequate, both in terms of Section 4.2 and Section 4.3. It is hardly possible to understand how to use the tool or the UI from the current text in the manuscript. I think these two sections need to be expanded considerably.
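As a concrete illustration of points 2 and 3 above, the following is a minimal sketch of what an hourly SPARQL-based availability probe might look like, assuming a recent Apache Jena ARQ (the jena-arq dependency already used by the project); the endpoint URL and the 10-second timeout are illustrative assumptions, not values taken from the paper.

import java.util.concurrent.TimeUnit;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;

// Minimal sketch of an availability probe. A lightweight ASK query exercises
// the SPARQL protocol itself, which a plain ping or HTTP GET would not.
public class AvailabilityProbe {

    public static boolean isAvailable(String endpointUrl) {
        String query = "ASK { ?s ?p ?o }";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpointUrl, query)) {
            qe.setTimeout(10, TimeUnit.SECONDS); // give up after 10 seconds (assumed value)
            return qe.execAsk();
        } catch (Exception e) {
            // Any network or protocol failure counts as "unavailable".
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isAvailable("http://dbpedia.org/sparql"));
    }
}

Such a probe answers whether the SPARQL service itself responds correctly, whereas a ping would only indicate that the host machine is reachable.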

Review #3
Anonymous submitted on 22/Apr/2015
Suggestion:
Reject
Review Comment:

The paper describes SPARQL Endpoint Status (SPARQLES), a monitoring tool for publicly available SPARQL endpoints. SPARQLES monitors four aspects of an endpoint: availability, discoverability, performance and interoperability. After illustrating the architecture of the service, the authors present various analyses conducted with the gathered data and, at the same time, give insights into how the data was collected. First, they look at the evolution of the availability of the SPARQL endpoints. Next, they examine whether the services add meta descriptions (VoID and SPARQL 1.1 Service Description) to their endpoints. Another aspect analyzed in this paper is the performance of typical SPARQL queries (atomic lookup, streaming results, joining) with different types and parameters. Finally, they evaluate the development of support for SPARQL 1.1 features. After that, an overview of the storage, the API and the UI of SPARQLES is given. Before concluding the paper, the authors discuss the impact, limitations and sustainability of the monitoring system.

The paper is organized around the extended studies of previous work, which are interesting, but it does not concentrate on describing the tool. The authors might resubmit it as extended work (with some adjustments, e.g., related work). If not, the paper must be restructured to focus on the tool (e.g., by extracting the system descriptions from the analysis section). Furthermore, some basic statistics about the tool and the gathered data should be added. Additionally, it should be made more explicit how many users are using the system and how many requests the system served in the past compared to now, in order to show that the user base of the service is growing. Further, the authors should add some discussion of other SPARQL monitoring systems (or point out that the service is unique). A screenshot of the UI would help the reader to get an idea of the user interface.

Regarding the tool itself, I have the following remarks:

- The availability of the SPARQL endpoints is observed using only a single server. The authors should consider setting up a distributed system to reduce the impact of losing the network connection of the monitoring server. A note should be added to the paper explaining how long SPARQLES waits for a response from an endpoint before considering the service unreachable.
- The SPARQLES service (at least the UI and API) was offline for two weeks during this review and is again not reachable (proxy error) at the time of writing. Your tool should itself be highly available in order to observe the availability of other services.
- Concerning the performance monitoring, the authors do not comment on how the service handles the network latency between the monitoring server and the SPARQL endpoint server, which may be located on a different continent. The system queries the endpoints once a day; if the point in time is fixed, other cron jobs may bias the measurements. The streaming performance of an endpoint is determined using different limits (~3,000 to 100,001), but I doubt that these values reflect typical LIMIT queries in real-world scenarios (a sketch of such a probe is given after this list).
(Regarding the REST interface of the service, I expected URIs like /API/ENDPOINTS and /API/ENDPOINTS/.)
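To make the streaming-performance remark above more concrete, the following is a rough sketch of how such a LIMIT-based probe could be implemented, again assuming Apache Jena ARQ; the limit value, the 60-second timeout and the endpoint URL are illustrative assumptions, not the exact parameters used by SPARQLES. Note that, as mentioned above, the measured time inevitably includes the network latency between the monitoring server and the endpoint.

import java.util.concurrent.TimeUnit;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;

// Rough sketch of a LIMIT-based streaming probe: time how long it takes to
// consume all solutions of a simple SELECT query with a given LIMIT.
public class StreamingProbe {

    // Returns the time in milliseconds needed to stream 'limit' solutions, or -1 on failure.
    public static long timeLimitQuery(String endpointUrl, int limit) {
        String query = "SELECT * WHERE { ?s ?p ?o } LIMIT " + limit;
        long start = System.currentTimeMillis();
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpointUrl, query)) {
            qe.setTimeout(60, TimeUnit.SECONDS); // assumed timeout
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                rs.next(); // consume every solution so the full result stream is fetched
            }
            return System.currentTimeMillis() - start;
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(timeLimitQuery("http://dbpedia.org/sparql", 100001));
    }
}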

To summarize, the paper describes a tool that is (from my point of view) useful for the community and that extends previous work by adding a temporal aspect, but it focuses too much on the analysis and does not describe the tool in detail. Furthermore, some design choices of the monitoring service should be reconsidered.

Some minor remarks:

Before concluding that the performance did not improve dramatically, the authors should first test whether the measured performance differences are statistically significant (a sketch of such a check is given below). Please also add a comment on why the median for LIMIT_100001 in Figure 6 drops from ~80,000 to ~20,000 between Nov. 2013 and Dec. 2013.
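As an illustration of the suggested significance check, one simple option would be a non-parametric test over two samples of performance measurements (e.g. the Nov. 2013 and Dec. 2013 snapshots), for instance with Apache Commons Math; the sample values below are placeholders, not data from the paper.

import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Sketch of a non-parametric significance check between two samples of
// performance measurements. The sample values are placeholders only.
public class SignificanceCheck {

    public static void main(String[] args) {
        double[] nov2013 = {81000, 79500, 80200, 80800, 79900}; // placeholder values
        double[] dec2013 = {21000, 19800, 20500, 20100, 19700}; // placeholder values

        MannWhitneyUTest test = new MannWhitneyUTest();
        // Two-sided p-value; a value below 0.05 would indicate a statistically
        // significant difference between the two samples.
        double pValue = test.mannWhitneyUTest(nov2013, dec2013);
        System.out.printf("p-value: %.4f%n", pValue);
    }
}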

- figures with legends: use a smaller font size and add a caption to the legend
- p. 2: swap the full name of the service and its abbreviation; it should read "SPARQL Endpoint Status (SPARQLES)"
- p. 4/5: Figures 2 and 3: remove fill
- p. 4: footnote 8: remove link (one link should be enough, see footnote 6)
- p. 7: Please add ticks (some values are mentioned in the text and should be visible in the figure)
- p. 8: Fig 5: use three different shapes


Comments

We would like to let the reviewers know that, due to some abnormal traffic/attacks involving SPARQLES, the host, OKFN, has unfortunately had to disable the service. We think that perhaps some external researchers were accessing the service in an impolite way, causing overload on the OKFN servers. We're working hard to migrate the service and will report back once it is ready (hopefully early next week). We apologise for any inconvenience caused, but this was truly an unforeseeable event on our end.

We just wish to note that the SPARQLES system is back online at a new home: http://sparqles.ai.wu.ac.at/. The old URL permanently redirects to the new location.

We apologise sincerely for the downtime but it was something outside of our control. OKFN servers were hit by an Elasticsearch vulnerability [1] (in which we believe SPARQLES was not involved) and the admins decided it was best to cut external services. Hence we needed to organise a new server with sufficient memory and back-ups to comfortably host the service at short notice.

Finally, we want to note that the system may take a while to warm back up. We have no readings during the downtime; hence, for example, current availability measures for specific endpoints will tend to be 0% or 100% until the system has been up for a few days.

We thank the reviewers for their time and apologise once again.

[1] http://bouk.co/blog/elasticsearch-rce/