DegreEmbed: incorporating entity embedding into logic rule learning for knowledge graph reasoning

Tracking #: 2992-4206

Authors: 
Yuliang Wei
Haotian Li
Yao Wang
Guodong Xin
Hongri Liu

Responsible editor: 
Pascal Hitzler

Submission type: 
Full Paper
Abstract: 
Knowledge graphs (KGs), as structured representations of real-world facts, are intelligent databases incorporating human knowledge that can help machines imitate the way humans solve problems. However, because data evolve rapidly and are inherently incomplete, KGs are usually huge and inevitably contain missing facts. Link prediction for knowledge graphs is the task of completing these missing facts by reasoning over existing knowledge. Two main streams of research are widely studied: one learns low-dimensional embeddings for entities and relations that can capture latent patterns, and the other gains good interpretability by mining logical rules. Unfortunately, previous studies rarely pay attention to heterogeneous KGs. In this paper, we propose DegreEmbed, a model that combines embedding-based learning and logic rule mining for inference on KGs. Specifically, we study the problem of predicting missing links in heterogeneous KGs, which involve entities and relations of various types, from the perspective of node degrees. Experimentally, we demonstrate that our DegreEmbed model outperforms state-of-the-art methods on real-world datasets. Meanwhile, the rules mined by our model are of high quality and interpretability.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
Anonymous submitted on 16/Feb/2022
Suggestion:
Reject
Review Comment:

The authors propose an approach for rule induction from knowledge graphs. The proposed DegreEmbed approach builds on the Neural LP framework and introduces a degree embedding layer for the KG entities. Despite the reported performance, the presentation of the approach and its novelty are not yet convincing. I suggest the following to improve the paper quality.

Related work: This section requires a deeper analysis of the presented approaches. For example, the authors need to make a clear distinction between the approaches of link prediction and knowledge graph completion versus the approaches of rule induction. Despite the similarities and the fact that both can be used for KG completion, the related work section needs to better define the scope of the paper.

Results: The authors focus on presenting only the results where their approach outperforms the compared methods. However, Table 4 shows that other methods outperform DegreEmbed on FB15K-237. It would be useful to present insights on why this is the case and how the method can be improved.

Mined rules: One of the major claims in the paper is the high quality and interpretability of the mined rules. In order to make this claim more convincing, I encourage the authors, not only to list some of the rules mined by DegreEmbed but also some of the rules that were missed by other methods and vice-versa.

Ablation study: Finally, the paper is missing an ablation study that would help establish the originality of the DegreEmbed method. For example, how would the model perform if the degree embedding layer were replaced with other pre-computed entity embeddings? How would varying the rank T or other hyperparameters of the model affect the performance, etc.?

Editing:
Page 1 Line 42L: and examined by human. -> and examined by humans.
Page 2 Line 21L: and can poorly be understood by human. -> and can poorly be understood by humans.
Page 2 Line 21L: which is a common pain for most deep learning models. -> replace "pain" with issue/challenge/drawback ..
Page 2 Line 28R: prospective -> perspective
Page 2 Line 49R: A RNN -> An RNN
Page 3 Line 44L: Hadmard product -> Hadamard product
Page 4 Line 39R: no longer that L -> no longer than L
Page 7 Line 10L: from the prospective of the type -> from the perspective of the type
Page 8 Line 49L: nephewOf(Steve, -> nephewOf(Mike,
Page 9 Line 40L: listed in App. 2. -> listed in Table 2.
Page 9 Line 28R: cause -> because
Page 9 Line 45R: we can see an about -> we can see about

Review #2
By Aaron Eberhart submitted on 28/Feb/2022
Suggestion:
Major Revision
Review Comment:

Summary:
This paper presents a new method for embedding Horn rules in a knowledge graph as queries to improve link prediction.

Discussion:
The paper presents an initially very interesting idea, but very quickly loses its motivation with a great deal of quite dense definitions and methodology that seems mostly ad hoc. Accompanied by positive yet mediocre results, this extreme level of seemingly arbitrary formalism does not feel warranted, and makes it seem more like the experiment was mostly designed through iterated trial and error until better numbers were achieved, rather than the actual pursuit of a new method. Specifically, sections 3.2 - 4 are concerning. It may be the case that all steps were deliberate and designed to test an overarching idea, but this is not apparent from the writing. Additionally, though none of the errors were significant enough to cause concern, there are many very small language errors in the text that don't take away from the idea but do interrupt reading the paper. Lastly, Table 3 feels disingenuous and should be changed in the cases where DegreEmbed is shown to be optimal when it is in fact a tie, or at least more precision should be included to show it is not a tie. If the authors are able to motivate the work better, streamline some of the definitions, edit the tables a bit, and correct the many minor English errors, then I believe it may be good enough to accept, though this is not certain.

Evaluation:
Accept only with major revisions

Notes ([#,#,#] refers to page, column, line number of note):
Too many very minor English errors were found to reproduce all of them. None felt problematic but it should be thoroughly checked to make sure it reads smoothly.

It may be better not to speculate on the causes of incompleteness for KGs in the abstract and introduction. This is certainly an issue but it is not certain exactly why this happens in any particular case, and it may even be unavoidable.

The notion of path accompanied by a max length seems highly susceptible to influence by the presence of reflexive triples, which may not exist in the studied KGs but are probably allowed (depending on the source some may even be implicit) and could massively increase the number of possible paths with unhelpful and redundant information.
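To make the concern concrete, here is a minimal sketch (not from the paper; the graph and helper function are hypothetical) showing how a single reflexive triple can inflate the number of paths of bounded length starting from an entity:

```python
from collections import defaultdict

def count_paths(edges, start, max_len):
    """Count paths of length 1..max_len starting at `start`
    by breadth-first expansion (paths may revisit nodes)."""
    adj = defaultdict(list)
    for head, tail in edges:
        adj[head].append(tail)
    total = 0
    frontier = [start]
    for _ in range(max_len):
        frontier = [t for node in frontier for t in adj[node]]
        total += len(frontier)
    return total

# Toy chain graph: a -> b -> c has only 2 paths from "a"
edges = [("a", "b"), ("b", "c")]
print(count_paths(edges, "a", 4))        # -> 2

# Adding one reflexive triple (a, a) lets paths loop at "a"
print(count_paths(edges + [("a", "a")], "a", 4))  # -> 11
```

The loop edge multiplies the path count without adding any new information, which is exactly the redundancy the reviewer warns about for max-length path enumeration.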

Definition 2 is almost like computing precision, in which case it may be easier to redefine the reasoning as a prediction task with TPs etc.

Given the previous comment, it almost feels like Definition 4 could be modified to represent a recall analog, in which case you could get an F1 score, which would not need any definition.
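For reference, the standard F1 the reviewer alludes to is simply the harmonic mean of precision and recall; a generic sketch (the counts below are illustrative, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall,
    computed from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 8 correctly predicted links, 2 spurious, 4 missed
print(round(f1_score(8, 2, 4), 3))  # -> 0.727
```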

3.2 all the way to the end of 4 either needs some motivation to explain why so many complex equations are presented so that readers can connect them to the broader idea, or at the very least it can be changed to an overview that shows why certain methods were chosen with the equations moved to a separate section. It is not that they are incorrect, as far as I can tell, but rather it is quite difficult to understand why these particular methods are chosen. To simplify this comment: the "how" of the method is abundantly clear, the "why" is not obvious to me and possibly absent entirely.

[7,2,3] "Transferred" should be clarified, since my understanding is that the model must be retrained on other sources, even if it can still work, and the wording makes this ambiguous.

Table 3 has scores that are not bold but equal to the DegreEmbed results. This could be interpreted as a dishonest presentation unless it is clarified.

Table 4 RotatE WN18 MME is missing a decimal

Tables 5-7 are quite large and could be reduced to a few select examples, reproducing so many results explicitly is not helping show the overall performance of the method.