Rule-driven inconsistency resolution for knowledge graph generation rules

Tracking #: 2064-3277

Pieter Heyvaert
Anastasia Dimou
Ben De Meester
Ruben Verborgh

Responsible editor: 
Guest Editors Knowledge Graphs 2018

Submission type: 
Full Paper
Knowledge graphs, which contain annotated descriptions of entities and their interrelations, are often generated using rules that apply semantic annotations to certain data sources. (Re)using ontology terms without adhering to the axioms defined by their ontologies results in inconsistencies in these graphs, affecting their quality. Methods and tools were proposed to detect and resolve inconsistencies, the root causes of which include rules and ontologies. However, these either require access to the complete knowledge graph, which is not always available in a time-constrained situation, or assume that only generation rules can be refined but not ontologies. In the past, we proposed a rule-driven method for detecting and resolving inconsistencies without complete knowledge graph access, but it requires a predefined set of refinements to the rules and does not guide users with respect to the order in which the rules should be inspected. We extend our previous work with a rule-driven method, called Resglass, that considers refinements for generation rules as well as ontologies. In this article, we describe Resglass, which includes a ranking to determine the order in which rules and ontology elements should be inspected, and its implementation. The ranking is evaluated by comparing the manual ranking of experts to our automatic ranking. The evaluation shows that our automatic ranking achieves an overlap of 80% with the experts' ranking, thus reducing the effort required during the resolution of inconsistencies in both rules and ontologies.
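The 80% figure reported in the abstract can be read as a set overlap between the top-ranked items of the manual and automatic rankings. As an illustrative sketch only (the paper's exact metric and the rule/ontology identifiers below are hypothetical), such an overlap could be computed as:

```python
def ranking_overlap(manual, automatic, k=None):
    """Fraction of items shared between the top-k of two rankings.

    `manual` and `automatic` are lists of item identifiers, most
    important first. If k is not given, the shorter length is used.
    """
    k = k or min(len(manual), len(automatic))
    shared = set(manual[:k]) & set(automatic[:k])
    return len(shared) / k


# Hypothetical rankings of generation rules and ontology elements.
manual = ["rule_A", "rule_B", "rule_C", "onto_X", "rule_D"]
automatic = ["rule_B", "rule_A", "onto_X", "rule_C", "onto_Y"]

print(ranking_overlap(manual, automatic))  # 4 shared items of 5 -> 0.8
```

This measures only set agreement, not order agreement; a rank-correlation coefficient such as Kendall's tau would be needed to compare the positions themselves.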
Full PDF Version: 


Solicited Reviews:
Review #1
Anonymous submitted on 19/Dec/2018
Review Comment:

The authors have successfully addressed my concerns and suggestions. In fact, this new version of the manuscript is much more robust now. Therefore, I upgrade my recommendation.

Review #2
By Robert Andrei Buchmann submitted on 14/Jan/2019
Minor Revision
Review Comment:

Significant revisions have been made in the new version of the paper, and they adequately address the issues that I raised about the initial draft.

An expert-driven evaluation procedure is detailed in section 6.1 and is anchored to a quantified hypothesis. The running example is more consolidated now, with a richly detailed motivating use case. The discussion on sources of inconsistencies is more comprehensive, and ambiguity was significantly reduced in the problem statement. The rule clustering principle is explained in more detail (although narratively).

I find the paper acceptable in the current form, with only one minor request for disambiguation: when explaining the rule clustering principle, it would help to have a more precise explanation of statements such as "rules x,y are related to z", "type of record to which a rule contributes". Similarity always relies on some metric, therefore it would be preferable to formulate a metric instead of saying "the similarity is determined by the type of record".

Review #3
By Meng Zhao submitted on 11/Feb/2019
Review Comment:

The authors have proposed in the paper a methodology to address a quintessential task in real-world KB applications: maintenance. The reviewer sees great potential in the demonstrated approaches toward semi-automatic KB inconsistency resolution from a practitioner's standpoint. Minor suggestions are as follows:
1. It might be interesting to see a comparison between the rule-based reasoning system and inference-based systems like autoencoder for inconsistency detection.
2. While budget and attention impose great limitations, a human benchmark of only 3 experts might not be enough, statistically speaking. Crowdsourcing, if not a gold-standard dataset, might be worth considering.