Neural Axiom Network for Knowledge Graph Reasoning

Tracking #: 3173-4387

Juan Li
Xiangnan Chen
Hongtao Yu
Jiaoyan Chen
Wen Zhang

Responsible editor: 
Freddy Lecue

Submission type: 
Full Paper
Knowledge graphs (KGs) generally suffer from incompleteness and incorrectness problems due to their automatic and semi-automatic construction processes. Knowledge graph reasoning aims to infer new knowledge or detect noise, which is essential for improving the quality of knowledge graphs. In recent years, various KG reasoning techniques, such as symbolic- and embedding-based methods, have been proposed and have shown strong reasoning ability. Symbolic-based reasoning methods infer missing triples according to predefined rules or ontologies. Although rules and axioms have proven to be effective, they are difficult to obtain. Embedding-based reasoning methods, in contrast, represent the entities and relations of a KG as vectors and complete the KG via vector computation. However, they mainly rely on structural information and ignore implicit axiom information that is not predefined in KGs but can be reflected from data. That is, each correct triple is also a logically consistent triple and satisfies all axioms. In this paper, we propose a novel NeuRal Axiom Network (NeuRAN) framework that combines explicit structural and implicit axiom information. It only uses existing triples in KGs, without introducing additional ontologies. Specifically, the framework consists of a knowledge graph embedding module that preserves the semantics of triples, and five axiom modules that encode five kinds of implicit axioms using the entities and relations in triples. These axioms correspond to five typical object property expression axioms defined in OWL2: ObjectPropertyDomain, ObjectPropertyRange, DisjointObjectProperties, IrreflexiveObjectProperty and AsymmetricObjectProperty. The knowledge graph embedding module and the axiom modules respectively compute the scores that a triple conforms to the semantics and to the corresponding axioms. Evaluations on KG reasoning tasks show the effectiveness of our method.
Compared with knowledge graph embedding models and CKRL, our method achieves comparable performance on noise detection and triple classification, and significantly better performance on link prediction. Compared with TransE and TransH, our method improves link prediction performance on the Hit@1 metric by 22.4% and 21.2% respectively on the WN18RR-10% dataset.
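The abstract's scoring idea, a structural (translation-based) embedding score combined with axiom-conformance scores, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the TransE score is standard, but the `axiom_scores` inputs and the additive combination (`combined_score`) are assumptions for exposition.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility: -||h + r - t||_1.
    A smaller distance (score closer to 0) means a more plausible triple."""
    return -np.linalg.norm(h + r - t, ord=1)

def combined_score(h, r, t, axiom_scores):
    """Combine the structural score with five axiom-conformance scores
    (one per OWL2 axiom type named in the abstract). The simple sum used
    here is an illustrative assumption, not the paper's actual formula."""
    return transe_score(h, r, t) + sum(axiom_scores)

# Toy usage with random 50-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))
score = combined_score(h, r, t, axiom_scores=[0.9, 0.8, 1.0, 0.7, 0.95])
print(score)
```

A noisy triple would receive a low score either from the embedding module (large translation distance) or from an axiom module (e.g. a reflexive use of a relation the data suggests is irreflexive).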
Minor Revision

Solicited Reviews:
Review #1
Anonymous submitted on 30/Jun/2022
Minor Revision
Review Comment:

I would suggest adding the key quantitative results of the experiments in the Introduction; that would help position the impact of the approach and clearly state where the improvement lies.
--> OK

I would suggest positioning the work against the authors' past work [a], as the concept of consistent knowledge is also captured there; a comparison would be appreciated to better position the work.
--> OK

On the methodology, I would suggest adding a textual description, with annotations on the figure, to make the flow diagram easier to follow; perhaps a paragraph in the caption would help. It is a nice figure, but a textual description needs to be added for it to bring value to the understanding of the approach.
--> OK

Could you extend the framework to other embedding approaches? You mention "Our framework considers two translation-based embedding models TransE and TransH as basic KG embedding models." It would be nice to understand whether this is a limitation. In my understanding it is not, but it would be better to state this clearly.
--> OK

In the experiments, I would suggest the authors explain why FB15K237 and WN18RR were chosen and what makes them suitable for evaluating this work. Please also clearly state why other datasets would not be eligible for your evaluation, e.g., what about ConceptNet?
--> I could not see any comments on ConceptNet

Review #2
By Simon Halle submitted on 15/Jul/2022
Minor Revision
Review Comment:

The dataset was added on GitHub, but the code is still missing, so I would only accept once the code is available, as the authors state in their comment: "We will add the datasets and readme file soon, and add the code later."
Please correct the formatting on page 10: "CKRL(TransH)" extends outside the column space.

Review #3
Anonymous submitted on 12/Aug/2022
Review Comment:

Compared to the previous version, I think the authors have made a number of significant changes in the right direction.