MQALD: Evaluating the impact of modifiers in Question Answering over Knowledge Graphs

Tracking #: 2785-3999

Lucia Siciliani
Pierpaolo Basile
Pasquale Lops
Giovanni Semeraro

Responsible editor: 
Harald Sack

Submission type: 
Dataset Description
Question Answering (QA) over Knowledge Graphs (KG) aims to develop systems capable of answering users' questions using the information coming from one or multiple Knowledge Graphs, such as DBpedia and Wikidata. A QA system needs to translate the user's question, written in natural language, into a query formulated in a specific data query language that is compliant with the underlying KG. This translation process is already non-trivial when answering simple questions that involve a single triple pattern. It becomes even more troublesome when coping with questions that require modifiers in the final query, i.e., aggregate functions, query forms, and so on. Attention to this last aspect is growing, but it has never been thoroughly addressed in the existing literature. Starting from the latest advances in this field, we take a further step in this direction. This work provides a publicly available dataset designed for evaluating the performance of a QA system in translating articulated questions into a specific data query language. The dataset has also been used to evaluate three state-of-the-art QA systems.
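To make the notion of a modifier concrete (an illustration, not taken from the paper): a question such as "What are the three highest mountains?" cannot be answered by a single triple pattern, because the target SPARQL query needs ORDER BY and LIMIT solution modifiers. A minimal sketch in Python, where the DBpedia properties and the simple keyword check are illustrative assumptions:

```python
# Illustrative sketch: a question whose SPARQL translation requires
# solution modifiers. The dbo: properties and the naive keyword scan
# below are assumptions for illustration, not the paper's method.

QUESTION = "What are the three highest mountains?"

# A plain triple pattern is not enough: the query needs ORDER BY and LIMIT.
SPARQL = """
SELECT ?mountain WHERE {
  ?mountain a dbo:Mountain ;
            dbo:elevation ?elevation .
}
ORDER BY DESC(?elevation)
LIMIT 3
"""

MODIFIER_KEYWORDS = ("ORDER BY", "LIMIT", "OFFSET", "GROUP BY", "HAVING")


def modifiers_in(query: str) -> list:
    """Return the SPARQL solution modifiers that appear in a query string."""
    upper = query.upper()
    return [kw for kw in MODIFIER_KEYWORDS if kw in upper]


print(modifiers_in(SPARQL))  # -> ['ORDER BY', 'LIMIT']
```

A QA system that only learns to emit triple patterns would answer the simpler question "Which mountains are there?" but miss the ranking and truncation that this question demands.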


Solicited Reviews:
Review #1
By Simon Gottschalk submitted on 12/May/2021
Review Comment:

I want to thank the authors for addressing my comments; things are clear now. In particular, I appreciate the commitment to the integration into GERBIL and the release of a new version of the dataset. Having said that, I do not have any further comments for this revision.
My only remaining concern is that I do not really see that the third reviewer's sensible comment about Listing 2 was addressed in the revised version.

Review #2
By Ricardo Usbeck submitted on 18/May/2021
Review Comment:

# Summary/Description

The revised article describes MQALD, version 4 (01/04/2021) of a modified QALD dataset and a newly generated dataset that focuses on SPARQL operation modifiers.
Thanks to the authors again for answering our questions in the cover letter. The authors also managed to rework larger parts of the remarks. I also want to acknowledge the excellent work of the other two reviewers.
The remarks by R1 about the QA system capability are understandable, but so is the answer by the authors. I see this analysis as a minor research contribution outside of this resource paper, i.e., which QA system can answer what types of queries. This paper makes even more sense in the light of recent publications, such as CBench.

# Short facts

URL: (updated April 1, 2021)

Version date and number: 3.0, May 21, 2020

Licensing: GNU General Public License v3.0

Availability: guaranteed

Topic coverage: not applicable

Source for the data: The existing QALD - benchmark series

Purpose and method of creation and maintenance: By extracting SPARQL queries containing modifiers and adding 100 novel questions.

Reported usage: From Zenodo - 109 (was 75 at the time of the last review) views and 36 (was 23) downloads at the time of review (18.05.2021)

Metrics and statistics on external and internal connectivity: None.

Use of established vocabularies (e.g., RDF, OWL, SKOS, FOAF): QALD JSON plus extension

Language expressivity: English, Italian, French, Spanish.

Growth: Small, based on community feedback.

5 star-data?: no.
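The "QALD JSON plus extension" vocabulary noted above can be pictured as follows. This is a hedged sketch of one dataset entry: the field names follow the public QALD JSON format, while the id, question text, and query are invented for illustration:

```python
import json

# Sketch of a QALD-style JSON entry. The overall structure (questions /
# question / query / answers) follows the public QALD format; the concrete
# values are invented for illustration.
entry = {
    "id": "1",
    "question": [
        {"language": "en", "string": "What are the three highest mountains?"}
    ],
    "query": {
        "sparql": "SELECT ?m WHERE { ?m a dbo:Mountain ; dbo:elevation ?e } "
                  "ORDER BY DESC(?e) LIMIT 3"
    },
    "answers": [],
}

dataset = {"questions": [entry]}

# Serialize as a benchmark file would be stored.
serialized = json.dumps(dataset, indent=2)
print(serialized.splitlines()[0])  # -> {
```

The multilingual coverage reported above (English, Italian, French, Spanish) would appear as additional objects in the "question" array, one per language.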

# Quality and stability of the dataset - evidence must be provided

The dataset opens future research questions which are beneficial to the community. The dataset seems stable, given its availability via a PID.
There is an open issue at GERBIL QA for integrating MQALD. Thus, we can assume that the dataset will be available to the community.
The dataset is still relatively small but highly diverse, and thus arguably valid for testing, but not for training, a KGQA system.

# Usefulness of the dataset, which should be shown by corresponding third-party uses - evidence must be provided.

The dataset has already proven its usefulness for the KGQA community by evaluating three SOTA systems thoroughly.

# Clarity and completeness of the descriptions.

The paper is well-written, and the description is clear, which enables replication.

# Overall impression and Open Questions

MQALD can become a cornerstone for future research. Its description is helpful and will bring the KGQA community forward.

# Minor issues

P5, l.25: something is wrong with the Listing numbering
Note: there is now a publication for TeBaQA.

Review #3
Anonymous submitted on 22/May/2021
Review Comment:

Thank you for your reply and for addressing my comments. The paper is in a better state now than the previous versions.