Reproducible Query Performance Assessment of Scalable RDF Storage Solutions

Tracking #: 1784-2997

This paper is currently under review
Authors: 
Dieter De Witte
Dieter De Paepe
Laurens De Vocht
Jan Fostier
Ruben Verborgh
Erik Mannens
Hans Constandt
Kenny Knecht
Filip Pattyn

Responsible editor: 
Guest Editors Benchmarking Linked Data 2017

Submission type: 
Full Paper
Abstract: 
Applications in the biomedical domain rely on Linked Data spanning multiple datasets for an increasing number of use cases. Choosing a strategy for running federated queries over Big Linked Data is, however, a challenging task. Given the abundance of Linked Data storage solutions and benchmarks, it is not straightforward to make an informed choice between platforms. This can be addressed by periodically releasing an updated review of the state of the art and by providing tools and methods that make such reviews easier to reproduce. Running a custom benchmark tailored to a specific use case becomes more feasible when deployment, configuration, and post-processing are simplified. In this work we present in-depth results of an extensive query performance benchmark we conducted. The focus lies on comparing scalable RDF systems and on iterating over different hardware options and engine configurations. Contrary to most benchmarking efforts, comparisons are made across different approaches to Linked Data querying, and conclusions can be drawn by comparing the actual benchmark costs. Both artificial tests and a real case with queries from a biomedical search application are analyzed. In analyzing the performance results, we discovered that single-node triple stores benefit greatly from vertical scaling and proper configuration. The results show that horizontal scalability remains a real challenge for most systems. Semantic Web storage solutions based on federation, compression, or Linked Data Fragments still lag behind by an order of magnitude in terms of performance. Furthermore, we demonstrate the need for careful analysis of the contextual factors influencing query runtimes: server load, availability, caching effects, and query completeness all perturb benchmark results. With this work we offer a reusable methodology that facilitates comparison between existing and future query performance benchmarks. We release our results in a rich event format, ensuring reproducibility while also leaving room for serendipity. This methodology also facilitates integration with future benchmark results.
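
To make the kind of measurement discussed above concrete, the sketch below (not part of the submission) times a single federated SPARQL query from Python using the SPARQLWrapper library. The local endpoint URL and the query are illustrative placeholders, and the store under test is assumed to expose a SPARQL 1.1 endpoint that supports SERVICE.

# Minimal sketch (not from the paper): time one federated SPARQL query.
# The local endpoint URL and the query are placeholders; the store under
# test is assumed to support SPARQL 1.1 federated queries (SERVICE).
import time
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint of the RDF store being benchmarked.
endpoint = SPARQLWrapper("http://localhost:8890/sparql")
endpoint.setReturnFormat(JSON)

# Part of the pattern is delegated to a remote endpoint via SERVICE.
endpoint.setQuery("""
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?protein WHERE {
  SERVICE <https://sparql.uniprot.org/sparql> {
    SELECT ?protein WHERE { ?protein a up:Protein } LIMIT 10
  }
}
""")

start = time.perf_counter()
results = endpoint.query().convert()
elapsed = time.perf_counter() - start

# Record both runtime and result count: as the abstract notes, query
# completeness matters as much as raw runtime.
print(f"{len(results['results']['bindings'])} bindings in {elapsed:.3f} s")

In a full benchmark run, such a measurement would be repeated across engines, hardware configurations, and warm/cold caches, with every observation logged as an event for later comparison.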
Tags: 
Under Review