FOG: A Generic Framework for Knowledge Graph Embedding with Ontology Guided Relational Constraints

Tracking #: 3078-4292

Authors: 
Tengwei Song
Jie Luo
Xiangyu Chen

Responsible editor: 
Dagmar Gromann

Submission type: 
Full Paper
Abstract: 
Many knowledge graph (KG) embedding models have been proposed for knowledge acquisition tasks and have achieved high performance on common evaluation metrics. However, many current KG embedding models have only limited capability for complex implicit information reasoning and may derive results that contradict the ontology of the KG. To tackle this problem, we propose an ontology-guided joint embedding framework that incorporates the constraints specified in the ontology into the representation learned by KG embedding models through a joint loss function, which is defined on positive and negative instances derived from two sets of ontology axioms. Furthermore, we propose two additional reasoning capability evaluation metrics that measure the capability of models to correctly predict relations or links deduced from the KG and ontology, and to avoid mispredictions. The experimental results demonstrate that models with our framework performed better in most cases across tasks and datasets, and performed significantly better on the reasoning capability evaluation metrics in many cases.
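For concreteness, a minimal sketch of such a joint objective, assuming a score-based KGE model where higher scores mean more plausible triples; the logistic loss forms and the coefficient names alpha and beta are illustrative, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def joint_loss(base_loss, conf_scores, viol_scores, alpha=0.5, beta=0.5):
        # Conformance loss: triples derived as positive by the ontology
        # axioms should score high, so penalize low scores.
        l_c = F.softplus(-conf_scores).mean()
        # Violation loss: triples derived as negative should score low,
        # so penalize high scores.
        l_v = F.softplus(viol_scores).mean()
        # Weighted combination with the base KGE loss.
        return base_loss + alpha * l_c + beta * l_v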
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
Anonymous submitted on 26/Apr/2022
Suggestion:
Major Revision
Review Comment:

The paper proposes a new approach for incorporating ontology constraints into knowledge graph embeddings. The basic idea is to infer new positive or negative triples from the constraints of the ontology and then use a different loss function for each of these cases. A joint loss function, with separate coefficients on the two new loss functions, is then used for optimization. The authors also propose a new Hits@k metric that takes these newly derived positive and negative triples into consideration. The presented results show that the proposed framework can improve performance. Although the paper is interesting, there are still some deficiencies, especially in the evaluation. Below are more detailed comments:
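To illustrate the metric idea (as an editorial sketch, not the paper's exact definition): a filtered Hits@k can be extended so that entities entailed as correct by the ontology also count as hits.

    def hits_at_k(ranked_entities, gold_entity, derived_positives, k=10):
        # Return 1 if the prediction is a hit at rank k. Unlike plain
        # filtered Hits@k, a top-k prediction also counts as correct when
        # it is a positive entailed by the ontology axioms.
        top_k = ranked_entities[:k]
        return int(gold_entity in top_k
                   or any(e in derived_positives for e in top_k))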

- There are some articles missing from the related work:
[A] Meng Qu, Jian Tang: Probabilistic Logic Neural Networks for Reasoning. NeurIPS 2019: 7710-7720
[B] Zoi Kaoudi, Abelardo Carlos Martinez Lorenzo, Volker Markl: Towards Loosely-Coupling Knowledge Graph Embeddings and Ontology-based Reasoning. CoRR abs/2202.03173 (2022)

- The related work section does not contrast the proposed framework with other approaches but simply outlines them. How does the proposed approach differ from the related works?

- What makes other axioms, such as assertion/class axioms, hard to incorporate into the framework in this paper? Why focus only on relations? A discussion of this is missing.

- The definition of the loss functions in Section 4.2 is a bit confusing: for L_C, the authors state that I^- is replaced with null, but in Equation 4.1 triples are drawn from that set. Similarly, in Equation 4.2, what does it mean when a positive triple is taken from I_v?

- The evaluation is missing a comparison with related work on ontology-guided knowledge graph embeddings, such as KALE [37], pLogicNet (see [A] above) and IterE [39]. How does the proposed approach compare with them?

- I assume that the training time results presented in Table A6 do not include the time to derive the sets D+ and D-. Is that correct? If so, these times should also be reported, because they are part of the training process.

Review #2
Anonymous submitted on 25/May/2022
Suggestion:
Major Revision
Review Comment:

This paper presents a simple way to inject ontology information into KG embeddings. Briefly, it divides the axioms inferred by the ontology logic into two kinds, the C-set (positive axioms) and the V-set (negative axioms), and then uses these two additional sets to train a KG embedding model. The technical contribution of this paper is only incremental. The idea is quite simple.

The evaluation is short of key baselines. No baselines for joint ontology and KG embedding are considered. It is unfair to compare the proposed method to TransE, ConvKB, QuatE and KBGAT, which do not consider the ontology.

The original NELL KG has an ontological schema. Why not use or extract NELL's own ontology? The current manual relation alignment approach for constructing ontologies for FB15K-237 and NELL-995 could lead to quality issues. Wikidata also has schema information and could be used as a benchmark.

Considering the issues in benchmarks and baselines, I think many new experiments are required to show the effectiveness of this method.

The paper is easy to follow, but there are still many minor issues.

Review #3
Anonymous submitted on 26/Jun/2022
Suggestion:
Reject
Review Comment:

Summary of the paper:
This paper proposes a generic framework, FOG, for knowledge graph embedding with ontology-guided relational constraints. FOG classifies ontology axioms into two types: the conformance set, from which new positive triples can be inferred, and the violation set, from which new negative triples can be inferred. Given a knowledge graph with ontology axioms, FOG first infers new positive/negative triples according to the axioms, and then constructs a conformance loss and a violation loss for the new positive and negative triples, respectively. Since the axiom inference is conducted symbolically, FOG is applicable to a wide range of KGEs.
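A minimal sketch of this symbolic derivation step, using two illustrative axiom types (inverseOf for new positives, relation disjointness for new negatives); FOG itself covers a broader family of relational axioms, so the function and argument names here are assumptions:

    def derive_triples(triples, inverse_of, disjoint_with):
        # inverse_of maps a relation to its inverse: r(h,t) entails
        # r_inv(t,h), yielding new positives (the conformance set).
        # disjoint_with maps a relation to relations it is disjoint with:
        # r(h,t) entails that r2(h,t) is false, yielding new negatives
        # (the violation set).
        triples = set(triples)
        c_set, v_set = set(), set()
        for (h, r, t) in triples:
            if r in inverse_of:
                c_set.add((t, inverse_of[r], h))
            for r2 in disjoint_with.get(r, ()):
                v_set.add((h, r2, t))
        return c_set - triples, v_set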

Comments on various dimensions:
(1) Originality: The key method of incorporating ontology axioms through symbolic reasoning is similar to what was done in previous work such as RUGE and IterE. Since such works already exist, and the paper does not pose more detailed problems or challenges to address beyond incorporating ontology axioms, the overall novelty of this paper is limited.
(2) Significance of the results: Experimental results show that with FOG, the link/relation prediction results of KGEs can be improved. This is expected, since KGE-FOG utilizes more information than KGE(base) and KGE(aug). Since baselines capable of using axioms are missing, it is hard to judge the significance of the results.
(3) Quality of writing: The paper is well organized and easy to understand.