SML-Bench -- A Benchmarking Framework for Structured Machine Learning

Tracking #: 1810-3023

Patrick Westphal
Lorenz Bühmann
Simon Bin
Hajira Jabeen
Jens Lehmann

Responsible editor: 
Guest Editors Benchmarking Linked Data 2017

Submission type: 
Full Paper

The availability of structured data has increased significantly over the past decade, and several approaches for learning from structured data have been proposed. These logic-based, inductive learning methods are often conceptually similar, which would allow a comparison among them even though they stem from different research communities. However, so far no efforts have been made to define an environment for running learning tasks on a variety of tools covering multiple knowledge representation languages. With SML-Bench, we propose a benchmarking framework for running inductive learning tools from the ILP and semantic web communities on a selection of learning problems. In this paper, we present the foundations of SML-Bench, discuss the systematic selection of benchmarking datasets and learning problems, and showcase an actual benchmark run on the currently supported tools.

Solicited Reviews:
Review #1
Anonymous submitted on 30/Jan/2018
Review Comment:

This is an updated version (major revision) of a previously submitted paper, which I also reviewed. In it, the authors present a benchmarking framework for structured machine learning, called SML-Bench. The paper is the result of a substantial amount of effort, given the extensive literature scanning undertaken to identify suitable datasets for SML-Bench, along with the data conversion efforts. However, even though the methodology is sound, the results have several drawbacks, explicitly acknowledged by the authors.

Beyond that, the authors state a clear goal at the beginning of the paper. The paper is very well written, quite clear, and easy to follow.

All concerns and questions I raised for the first version have been resolved successfully. The quality of the paper has further improved in this updated version.

I do not have further remarks and suggest the paper be accepted.

Review #2
Anonymous submitted on 06/Feb/2018
Review Comment:

I do not have any further comments.

Review #3
By George Papastefanatos submitted on 26/Feb/2018
Review Comment:

The authors have addressed most of the comments noted in the initial submission. As such, I am fine with accepting the paper.
I would suggest a proofread of the paper to correct minor typos and syntactic issues. E.g.,
-Section 4, p.5, 2nd par. "...since some of them implement *anytime* algorithms.". What do you mean by *anytime* algorithms?
-Section 4, p.5, 2nd par. "...when systems could not *finished* within the...". Correct to *finish*
Maybe there are some other typos that need your attention.