Abstract:
In a world where information is exchanged at an ever-increasing pace, knowledge quickly becomes outdated. Formal constructs that capture human knowledge, such as knowledge graphs and ontologies, must be updated and evaluated to remain relevant and functional. However, manually updating and evaluating existing knowledge models is labour-intensive and error-prone. This study addresses the challenge of evaluating changes in existing knowledge graphs by introducing syntactic and semantic metrics tailored to change evaluation. The metrics are implemented and tested through experiments on knowledge graphs from various domains. In these experiments, real-world changes are simulated by removing concepts and introducing faulty ones before measuring quality with the syntactic and semantic metrics. The hypothesis is that such changes decrease the scores: removing concepts influences syntactic qualities, such as the structure of the model, while adding faulty concepts affects semantic qualities, such as model consistency. The results confirm this hypothesis, showing that the extent and nature of the changes influence the scores, as do the size and degree of specialisation of the graph. Overall, this study presents a set of evaluation metrics and provides empirical evidence of their efficacy in assessing modifications to knowledge graphs from different domains.