Automatic evaluation of complex alignments: an instance-based approach

Tracking #: 2649-3863

Authors: 
Elodie Thieblin
Ollivier Haemmerlé
Cassia Trojahn dos Santos

Responsible editor: 
Jens Lehmann

Submission type: 
Full Paper
Abstract: 
Ontology matching is the task of generating a set of correspondences (i.e., an alignment) between the entities of different ontologies. While most efforts on alignment evaluation have been dedicated to the evaluation of simple alignments (i.e., those linking one single entity of a source ontology to one single entity of a target ontology), the emergence of matchers providing complex alignments (i.e., those composed of correspondences involving logical constructors or transformation functions) requires new strategies for addressing the problem of automatically evaluating complex alignments. This paper proposes i) a benchmark for complex alignment evaluation composed of an automatic evaluation system that relies on queries and instances, and ii) a dataset about conference organisation. This dataset is composed of populated ontologies and a set of competency questions for alignment as SPARQL queries. State-of-the-art alignments are evaluated and a discussion on the difficulties of the evaluation task is provided.
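To give a flavour of the approach described in the abstract (a sketch only: the prefix and entity names below are hypothetical and not taken from the actual dataset), a competency question such as "Who writes a paper?" could be expressed as a SPARQL query over a populated source ontology:

PREFIX cmt: <http://example.org/cmt#>

# Hypothetical competency question for alignment:
# "Which persons write a paper?"
SELECT DISTINCT ?person WHERE {
  ?person cmt:writes ?paper .
}

A complex correspondence would rewrite this query over the target ontology, possibly as a combination of several classes or properties; when both ontologies are populated with the same individuals, agreement between the two retrieved instance sets can then serve as the evaluation signal.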
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Jérôme Euzenat submitted on 27/Apr/2021
Suggestion:
Accept
Review Comment:

My previous comments and concerns have been properly addressed.

Some suggestions:
- p4, col1, task-oriented. Since we are in related work, it would be good to add a reference.
I am thinking of https://doi.org/10.1007/978-3-540-68234-9_31 or https://doi.org/10.1007/978-3-642-04930-9_53, which were among the first ones, but there may be others.
- p7, col 2: this should be "An instance-based comparison", like the others.
- p10: to fully address my previous concern, at the beginning of 5.1, I suggest indicating that "the population stage is very important as the chosen instances may influence the result of the evaluation".

Review #2
Anonymous submitted on 29/Apr/2021
Suggestion:
Accept
Review Comment:

This article has now gone through several rounds of revision. Reviewing all the comments and changes, I do see incremental improvement of the paper after each round. At the same time, I think some of the intrinsic limitations of the research remain, namely the fundamental difficulty of the instance-based evaluation approach, which ultimately requires the instance set to be regular and easily comparable. While the authors did make an effort to provide a synthetic instance dataset in the framework of OAEI and their own evaluation, and have clarified the related approach, the paper remains vague on the usability of the instance-based approach in real-life use cases (e.g., when instance matching is hard) and with realistic instance data. I consider this a weakness that could have been addressed by performing real-world case studies.

Nevertheless, acknowledging the contributions and the efforts by the authors through one major and two minor revision rounds, I suggest accepting the end result.

Review #3
By Shu-Chuan Chu submitted on 23/May/2021
Suggestion:
Minor Revision
Review Comment:

This paper has presented an evaluation benchmark for ontology matching that allows complex correspondences to be evaluated. The presentation is clear, with a fine analysis. I recommend the acceptance of this paper after minor revision.

1. Computational complexity is an important evaluation criterion for complex alignments; the authors may describe it in more detail.
2. One book and one paper are based on ontology matching techniques; the authors may describe those ideas in the introduction section.
2.1 Xingsi Xue, Junfeng Chen, Jeng-Shyang Pan, "Evolutionary Algorithm based Ontology Matching Technique", Beijing: Science Press, 2018.
2.2 Xingsi Xue, Jeng-Shyang Pan, "A segment-based approach for large-scale ontology matching", Knowl. Inf. Syst. 52(2): 467-484, 2017.