HiHo: A Hierarchical and Homogenous Subgraph Learning Model for Knowledge Graph Relation Prediction

Tracking #: 3784-4998

Authors: 
Jiangtao Ma
Yuke Ma
Fan Zhang
Yanjun Wang
Xiangyang Luo
Chenliang Li
Yaqiong Qiao

Responsible editor: 
Guest Editors KG Gen from Text 2023

Submission type: 
Full Paper
Abstract: 
Relation prediction in Knowledge Graphs (KGs) aims to anticipate the connections between entities. Although both transductive and inductive models are used for context comprehension, two primary issues remain. First, these models only collate relations at each layer of the subgraph, overlooking the potential sequential relationship between different layers. Second, these methods overlook the homogeneity of subgraphs, impeding their ability to effectively learn the importance of relations within the subgraphs. To address these challenges, we propose a hierarchical and homogenous subgraph learning model for knowledge graph relation prediction (HiHo). Specifically, we adopt a subgraph-to-sequence mechanism (S2S) to learn the latent semantic associations between layers in the subgraph of a single entity, thus modeling the hierarchy of the subgraph. Then, we implement a common preference inference mechanism (CPI) that assigns higher weights to co-occurring relations while learning the importance of each relation in the subgraphs of the two entities, thus modeling the homogeneity of the subgraphs. In our study, we sequentially apply this inference to each layer of the subgraphs of the two entities for relation prediction. To assess the efficacy of our method, we perform experiments on five publicly available datasets. The results demonstrate that our method surpasses current state-of-the-art baselines in both transductive and inductive settings.
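Since the implementation is not publicly released (see Review #1), the following is only a minimal illustrative sketch of the two mechanisms described in the abstract, using hypothetical module names, tensor shapes, and PyTorch as an assumed framework, not the authors' actual code: the S2S idea of encoding per-layer (per-hop) subgraph embeddings as a sequence with a Bi-GRU, and a toy CPI-style reweighting that boosts relations co-occurring in both entities' subgraphs.

import torch
import torch.nn as nn

class S2SEncoder(nn.Module):
    # Encode per-layer (hop) subgraph embeddings as a sequence with a Bi-GRU.
    def __init__(self, dim):
        super().__init__()
        self.bigru = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, layer_embs):              # layer_embs: (batch, n_layers, dim)
        states, _ = self.bigru(layer_embs)      # (batch, n_layers, 2 * dim)
        return self.proj(states).mean(dim=1)    # pooled hierarchical representation

def cpi_weights(head_rels, tail_rels, rel_scores, boost=2.0):
    # Toy CPI-style step: up-weight relations appearing in both entities' subgraphs.
    common = head_rels & tail_rels
    return {r: s * (boost if r in common else 1.0) for r, s in rel_scores.items()}

# Example usage: 4 subgraphs, each with 3 hop layers of 32-dimensional embeddings.
# enc = S2SEncoder(dim=32)
# h = enc(torch.randn(4, 3, 32))

The fixed boost factor and mean pooling are placeholders for illustration only; the paper learns relation importance rather than applying a hand-set multiplier.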
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 21/Mar/2025
Suggestion:
Accept
Review Comment:

Dear authors,
Sorry for the delay in reviewing this nice piece of research work again, due to health reasons.

I really appreciate the time and effort taken to go through my comments and suggestions and to provide clear and precise proposals.
Although it would have been good to also make the code available, I can understand your comments on this specific point.

The paper is now self-contained with clear references.
Reading again the paper, I can admire the work and I recommend it to be accepted for publication.

Review #2
Anonymous submitted on 29/Mar/2025
Suggestion:
Accept
Review Comment:

This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing. Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data, (B) whether the provided resources appear to be complete for replication of experiments, and if not, why, (C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and (D) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.

Review #3
Anonymous submitted on 27/Apr/2025
Suggestion:
Accept
Review Comment:

The paper proposes a novel approach for knowledge graph relation prediction by modeling both hierarchical and homogeneous subgraph structures. The integration of the Subgraph-to-Sequence (S2S) mechanism using Bi-GRU and the Common Preference Inference (CPI) mechanism is original and differentiates this work from existing models. The method extends the modeling capacity of subgraphs beyond local neighbor aggregation and simple intersection-based methods.
- The originality is clearly shown.
The results presented are significant and demonstrate clear improvements over baselines on five public datasets. HiHo achieves higher MRR and Hits@K scores than both transductive and inductive baselines. The method's ability to generalize to unseen entities addresses a key limitation of many traditional KG completion models.
The results appear to be well-supported through experiments.
The revised manuscript is clear and well-organized. Key methodological details, including Bi-GRU processing, state sequence transformation, subgraph preparation, and CPI weighting, are now fully explained. Mathematical formulations are properly contextualized. Algorithm 1 is presented in an understandable format. A few minor typos remain, but they do not impact readability. The quality of writing meets publication standards.
Assessment of Resources:
The data files are well-organized, and a README is provided explaining dataset splits, training procedures, and reproduction instructions.
Scripts for preprocessing, training, and evaluation are included. The material appears to be sufficient to replicate the experiments. Critical components, datasets and source code are included.
The paper is original, presents significant results, is clearly written, and provides complete resources for reproducibility. Minor editorial polishing could further enhance the manuscript, but from a scientific perspective, it is ready for publication.
Remaining minor typos and phrasing issues:
- In multiple places, small typos like "the-th layer" (should be "the i-th layer").
- Misspellings: "commited" → should be "committed" (in the Introduction); "neighbor relations is" → should be "neighbor relations are" (Section 4.1, Subgraph Preparation).
- "choose Bi-GRU to encode the history and future information" → should be "to encode historical and future information".
- In equations and explanations, lowercase "hop" is sometimes mixed with uppercase "Hop" ("Lhop"); it would be better to consistently capitalize or lowercase it.
- "time series prediction method" is slightly awkward; it would be better to say "a sequence modeling method based on GRU".