Entity Linking with Out-of-Knowledge-Graph Entity Detection and Clustering using only Knowledge Graphs

Tracking #: 3539-4753

Authors: 
Cedric Moeller
Ricardo Usbeck

Responsible editor: 
Guest Editors KG Gen 2023

Submission type: 
Full Paper
Abstract: 
Entity Linking is crucial for numerous downstream tasks, such as question answering, knowledge graph population, and general knowledge extraction. A frequently overlooked aspect of entity linking is the potential encounter with entities not yet present in a target knowledge graph. Although some recent studies have addressed this issue, they primarily utilize full-text knowledge bases or depend on external information. However, these resources are not available in most use cases. In this work, we solely rely on the information within a knowledge graph and assume no external information is accessible. To investigate the challenge of identifying and disambiguating entities absent from the knowledge graph, we introduce a comprehensive silver-standard benchmark dataset that covers texts from 1999 to 2022. Based on our novel dataset, we develop an approach using pre-trained language models and knowledge graph embeddings without the need for a parallel full-text corpus. Moreover, by assessing the influence of knowledge graph embeddings on the given task, we show that implementing a sequential entity linking approach, which considers the whole sentence, can outperform clustering techniques that handle each mention separately in specific instances.
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
Anonymous submitted on 09/Nov/2023
Suggestion:
Major Revision
Review Comment:

The paper describes an approach for Entity Linking with Out-of-KG Entity Detection and Clustering Using Only KGs. The issue of out-of-KG entities is indeed problematic in that not all real-world objects are already recorded in KGs. The presented work investigates the task of identifying and disambiguating entities absent from the KG and also provides a dataset for benchmarking purposes.

Some details of my feedback are provided below:
- In general, it seems paradoxical to perform entity linking when there is no entity in the KG to link to. Perhaps I am missing some assumptions or preconditions that would make the problem better defined?
- In the abstract: "Although some recent studies have addressed this issue, they primarily utilize full-text knowledge bases or depend on external information. However, these resources are not available in most use cases. In this work, we solely rely on the information within a knowledge graph and assume no external information is accessible." -> How feasible is this assumption in real-world scenarios (that is, does this problem occur often in practice)?
- What are the criteria for deciding whether an entity is new rather than identical to an existing entity? Are there any heuristics for this?
- On Page 2 ".. entity linker might link it to a different (likely coronavirus-related) and thus incorrect entity .." -> To me, this approximative approach is actually the best the entity linking system/approach could do when there are no exact entities to be linked? It might link to incorrect, but relevant entities (still).
- In the contributions text: "A sequential Entity Linking method .." -> This can be motivated further: What makes a sequential method promising?
- Any relation of the work to the open information extraction paradigm? The work seems related to it. See, e.g., https://en.wikipedia.org/wiki/Open_information_extraction
- It is rather unclear to me whether the proposed solution makes use of the triple content of entities within the KG (i.e., not just aliases and descriptions, which might have limitations in describing the existing entities). If it does not, would the inclusion of the triple content help improve the proposed approach (to figure out whether suspected out-of-KG entities are indeed not related at all to the existing entities)?
- The reliance on pre-trained language models might itself involve external resources, in the sense that out-of-KG entities might be mentioned in the corpus on which the language models were trained.
- What might be of interest: Is there any performance difference compared to approaches relying on external information? If so, how large is the difference (of course, this must be interpreted in light of the fact that your approach does not rely on external information, so lower performance would actually be acceptable)?
- Regarding the silver-standard dataset in the paper: As the dataset is semi-automatically generated, how good is the (annotation) quality of the dataset? Any manual evaluation?
- On the result of "(abstract) .. we show that implementing a sequential entity linking approach, which considers the whole sentence, can outperform clustering techniques that handle each mention separately in specific instances." -> What insights can be taken from this? Does this hold in general, or are there cases for which the clustering techniques would be better (or at least, have the same performance as the sequential entity linking approach)?
- On Page 2 (within the context of the whole paper): "We developed an integrated method that can identify and cluster out-of-KG entities" -> What could be the next steps after out-of-KG entities have been identified and clustered? What actions might be taken pertaining to the KG?
- In Sec. 2 Method: "For mentions referring to entities not in the KG, the aim is to associate them with one another." -> Could this be clarified further? The wording is currently a bit difficult for me to understand.
- I'd suggest adding a running example for Sec. 2. This would greatly improve the readability.
- A potential issue with the problem definition: the mapping may need to go not just from mentions to entities but from mentions together with their context to entities. The current definition, in my opinion, does not yet include the context despite its importance for the mappings; a sketch of such a context-aware formulation is given after this list.
- Sec. 2.2 "Candidate Generation" is currently a bit unclear: is it meant for in-KG entities, out-of-KG entities, or both?
- (Minor) Sec. 2.3.2: On the mention of "schema: description", there should be no whitespace after :.
- Regarding the architecture as shown in Fig. 1: Which parts of the architecture are novel, and which parts are the state-of-the-art or inspired by related work?
- On Page 4, Lines 26-32: The discussion in the paragraph "out-of-KG decision" could be extended with more motivation/reasons and with the differences to the existing state of the art.
- In Section 2.4: Again, it would be better if motivations/rationales could be added.
- In Sec. 2.6 (and other parts where similar cases occur): There seem to be parameters and hyperparameters involved, such as the number of beams in the beam search and the window size. The selection of the best (hyper)parameters could be further explained.
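On the problem-definition point above, a minimal sketch of what a context-aware formulation could look like; the notation here is mine, not the authors':

```latex
% Hypothetical notation, not taken from the paper.
% Let M be the set of mentions, C the set of contexts, and E the set of
% entities in the KG. A context-free definition links mentions alone:
%   f : M \to E \cup \{\mathrm{NIL}\}
% whereas a context-aware definition links mention-context pairs and
% distinguishes out-of-KG entities by cluster identifiers:
f : M \times C \to E \cup \{\mathrm{NIL}_1, \mathrm{NIL}_2, \dots\}
```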

Overall:
In its current form, while the paper has potential merits, it still needs major revisions.

Review #2
Anonymous submitted on 13/Nov/2023
Suggestion:
Reject
Review Comment:

The submission proposes two main contributions: a dataset for Entity Linking with out-of-KG references, and an entity linking approach which detects out-of-KG entities and clusters them to automatically create entity descriptions for entities not yet present in the KG. While the first contribution is a nice-to-have (NILK and other datasets already exist), I will focus in this review on the second contribution.

On Section 2.2:

What kind of aliases do you mean when using them for an entity dictionary?

What does that mean for your specific evaluation use case? How can alternative labels be extracted/generated for arbitrary KGs?
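To make the alias question concrete, here is a minimal sketch of where aliases typically come from in a Wikidata-style KG; the endpoint, example entity, and property are illustrative assumptions, not the paper's pipeline:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical illustration: fetching English aliases (skos:altLabel)
# for one example entity from the public Wikidata endpoint.
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?alias WHERE {
  wd:Q42 skos:altLabel ?alias .     # Q42 = Douglas Adams, example entity
  FILTER(LANG(?alias) = "en")
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
aliases = [b["alias"]["value"] for b in results["results"]["bindings"]]
print(aliases)  # e.g. ["Douglas Noel Adams", ...]
```

For an arbitrary KG, one would have to substitute whatever alias property that KG provides (skos:altLabel, rdfs:label variants, etc.), which is exactly the portability concern raised here.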

The authors mention here that they generate a candidate set of size 100. Is this the maximum size or do they always retrieve 100 candidates for each mention?

I do not understand the sentence "our method does not rely on a parallel text corpus".

On 2.3.2:

The entity definition is taken to be the schema:description information of the Wikidata KG. This makes the authors' approach very specific to Wikidata.

What about entity definitions in other KGs?

On 2.3.3:

Apparently, the ranking method includes a popularity measure based on the outdegree of the entity within the KG. In network/graph theory, nodes with a high outdegree are hubs, i.e., nodes with a high distribution level and very low specificity. Taking this into account, the outdegree is not a suitable metric for the popularity of a node; rather, the indegree should be used.
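A minimal sketch of this point; the toy graph and entity names are mine, not from the paper:

```python
from collections import Counter

# Toy KG as (subject, predicate, object) triples, made up for illustration.
triples = [
    ("Q_hub", "links_to", "Q_popular"),
    ("Q_a",   "links_to", "Q_popular"),
    ("Q_b",   "links_to", "Q_popular"),
    ("Q_hub", "links_to", "Q_a"),
    ("Q_hub", "links_to", "Q_b"),
]

# Outdegree counts outgoing edges (high for hub-like, unspecific nodes);
# indegree counts incoming edges (high for frequently referenced nodes).
outdegree = Counter(s for s, _, _ in triples)
indegree = Counter(o for _, _, o in triples)

print(outdegree["Q_hub"], indegree["Q_hub"])          # 3 0 -> hub, not popular
print(outdegree["Q_popular"], indegree["Q_popular"])  # 0 3 -> popular target
```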

The main contribution of the submission is described here only briefly: the decision whether an entity is out-of-KG depends on the similarity between the entity candidate definition and the input context. That is it. What does it mean that the similarity is not sufficient? And what if the context is not sufficient? In that case, the approach would often (spuriously) detect out-of-KG entity candidates. This simple decision is not convincing for all-purpose scenarios, nor for domain-specific KGs where entities may be very similar and the margin between in-KG and out-of-KG entities may be too small.
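As I understand it, the decision reduces to a similarity threshold. A minimal sketch under that assumption; the embedding model, function name, and threshold are my choices, not the authors' implementation:

```python
from sentence_transformers import SentenceTransformer

# Hypothetical similarity-threshold out-of-KG decision; the model and
# threshold are assumptions for illustration, not the paper's settings.
model = SentenceTransformer("all-MiniLM-L6-v2")

def is_out_of_kg(mention_context: str, candidate_definitions: list[str],
                 threshold: float = 0.5) -> bool:
    """Flag a mention as out-of-KG if no candidate definition is
    sufficiently similar to the mention's context."""
    if not candidate_definitions:  # no candidates at all -> treat as new
        return True
    ctx = model.encode(mention_context, normalize_embeddings=True)
    defs = model.encode(candidate_definitions, normalize_embeddings=True)
    sims = defs @ ctx  # cosine similarities, since vectors are normalized
    return float(sims.max()) < threshold
```

Framed this way, the concern above becomes threshold sensitivity: in a narrow domain where all candidate definitions look alike, the maximum similarity varies little between in-KG and out-of-KG mentions, and no single threshold separates them well.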

On 2.4:

Here, the authors describe the clustering of out-of-KG entity embeddings. Unfortunately, they only mention the use of DBSCAN and do not discuss alternative clustering approaches. DBSCAN is a density-based clustering approach; when applied to embeddings, it groups very close embeddings together, which makes sense in this case. However, proximity in embedding space does not guarantee semantic equivalence. Here again I see difficulties for very specific domains where the vocabulary is restricted and high similarities are more common than in general-purpose KGs such as Wikidata.
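For concreteness, a minimal sketch of DBSCAN over out-of-KG mention embeddings; the toy vectors, eps, and min_samples are my assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy embeddings for suspected out-of-KG mentions, made up for illustration;
# in the paper's setting these would come from the mention encoder.
embeddings = np.array([
    [0.10, 0.95], [0.12, 0.93],  # two mentions of one new entity
    [0.88, 0.15], [0.90, 0.12],  # two mentions of another new entity
    [0.50, 0.50],                # an isolated mention
])

# eps controls how close embeddings must be to share a cluster; in a narrow
# domain where all embeddings crowd together, no single eps may separate
# distinct entities -- which is exactly the difficulty noted above.
labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(embeddings)
print(labels)  # [0 0 1 1 -1]; -1 marks noise (here, a singleton mention)
```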

On the experiments:

On entity linking:

The authors argue that they do not compare their entity linking approach to others because they first want to examine different parameter settings. I do not think this is a legitimate argument: why not compare the results for the best parameter setting to competitive approaches? In the end, the performance of the overall approach is the most interesting part, and the lack of SOTA comparisons is clearly one of the main drawbacks of the submission.

On out-of-KG detection:

Here again, the authors only present a survey of how different parameter ratios affect the quality of the approach. How does their contribution perform compared to others? The discussion of the reasons for the different recall and precision results is basic and trivial.

On the out-of-KG entity clustering:

The section would benefit from a brief reminder of which of the approaches shown in Table 6 are the authors' own. Also, the different approaches should be named consistently within the text and in both tables.

In this comparative analysis, the authors show that their approach might be competitive with SOTA approaches, but the overall contribution and results are not convincing enough for publication in a journal.

Overall, I think the authors work on an interesting topic, but the submission leaves too many questions unanswered, and the proposed approach is compared to competitive approaches only in parts.

At this stage of the work, the submission is not ready for publication in SWJ.

Review #3
By Sanju Tiwari submitted on 17/Jan/2024
Suggestion:
Major Revision
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing. Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data, (B) whether the provided resources appear to be complete for replication of experiments, and if not, why, (C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and (D) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.

Review #4
Anonymous submitted on 06/Feb/2024
Suggestion:
Major Revision
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing. Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data, (B) whether the provided resources appear to be complete for replication of experiments, and if not, why, (C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and (4) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.