Abstract:
Many knowledge graph (KG) embedding models have been proposed for knowledge acquisition tasks and achieve high scores on common evaluation metrics. However, many current KG embedding models have only a limited capability for reasoning over complex implicit information and may derive results that contradict the ontology of the KG. To tackle this problem, we propose an ontology-guided joint embedding framework that incorporates the constraints specified in the ontology into the representations learned by KG embedding models through a joint loss function, which is defined on positive and negative instances derived from two sets of ontology axioms. Furthermore, we propose two additional reasoning-capability evaluation metrics that measure how well models predict relations or links deduced from the KG and its ontology while avoiding mispredictions. The experimental results demonstrate that models trained with our framework perform better in most cases across tasks and datasets, and perform significantly better on the reasoning-capability metrics in many cases.