Weight-aware Tasks for Evaluating Knowledge Graph Embeddings

Tracking #: 3320-4534

This paper is currently under review
Weikun Kong
Xin Liu
Teeradaj Racharak
Guanqun Sun
Qiang Ma
Le-Minh Nguyen

Responsible editor: 
Agnieszka Lawrynowicz

Submission type: 
Full Paper
Knowledge graph embeddings are widely used together with deep learning to solve many problems, such as natural language understanding and named entity recognition. The quality of knowledge graph embeddings strongly affects the performance of models on many knowledge-involved tasks. Link prediction (LP) and triple classification (TC) are widely adopted to evaluate the performance of knowledge graph embeddings. Link prediction predicts the missing entity that completes a triple, which represents a fact in a knowledge graph, while triple classification determines whether an unknown triple is true or not. Both link prediction and triple classification can intuitively reflect the performance of a knowledge graph embedding model, but they treat every triple equally and therefore cannot evaluate embedding models on knowledge graphs that attach weights to their triples. As a consequence, this paper introduces two weight-aware extensions of LP and TC, called weight-aware link prediction (WaLP) and weight-aware triple classification (WaTC), respectively, aiming to better evaluate the performance of embedding models on weighted knowledge graphs. WaLP and WaTC emphasize the ability of the embeddings to predict and classify, respectively, triples with high weights. Lastly, we respond to the newly introduced tasks by proposing a general method, WaExt, which extends existing knowledge graph embedding models to weight-aware extensions. We test WaExt on four knowledge graph embedding models, achieving competitive performance against the baselines. The code is available at: https://github.com/Diison/WaExt.
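To illustrate the difference between a standard evaluation metric and a weight-aware one, the sketch below contrasts plain mean reciprocal rank (MRR) with a weight-averaged variant. This is a minimal illustration under the assumption that triple weights act as per-triple multipliers on the reciprocal ranks; it is not the paper's exact definition of WaLP.

```python
def mrr(ranks):
    """Standard mean reciprocal rank over link-prediction ranks:
    every test triple contributes equally."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def weight_aware_mrr(ranks, weights):
    """Reciprocal ranks averaged with per-triple weights, so that
    high-weight triples dominate the score (illustrative assumption,
    not the paper's exact WaLP metric)."""
    return sum(w / r for r, w in zip(ranks, weights)) / sum(weights)

# Two test triples: the high-weight triple is ranked poorly (rank 10).
ranks = [1, 10]        # rank of the true entity for each test triple
weights = [0.1, 0.9]   # per-triple weights from a weighted knowledge graph

print(mrr(ranks))                        # 0.55
print(weight_aware_mrr(ranks, weights))  # 0.19
```

Under the plain metric the model looks reasonable (MRR 0.55), but the weight-aware score (0.19) exposes that it fails exactly on the triple the graph marks as important, which is the behavior WaLP and WaTC are designed to surface.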
Full PDF Version: 
Under Review