Network representation learning method embedding linear and nonlinear network structures

Tracking #: 2660-3874

This paper is currently under review
Hu Zhang

Responsible editor: 
Guest Editors DeepL4KGs 2021

Submission type: 
Full Paper
Abstract:
With the rapid development of neural networks, increasing attention has focused on network embedding for complex network data, which aims to learn low-dimensional representations of nodes and to apply those representations effectively to graph-based analytical tasks. Two typical model families are shallow random-walk representation methods and deep learning models such as graph convolutional networks (GCNs). The former capture the linear structure of the network through depth-first search (DFS) and breadth-first search (BFS); the hierarchical GCN (HGCN), an unsupervised graph embedding method, describes the global nonlinear structure of the network by aggregating node information. However, neither kind of model can capture the linear and nonlinear structural information of nodes simultaneously. This study therefore examines the nodal characteristics of linear and nonlinear structures and proposes an unsupervised representation method based on HGCN that jointly learns shallow and deep models. Experiments on node classification and dimension-reduction visualization were carried out on citation, language, and traffic networks. The results show that, compared with existing shallow and deep network representation models, the proposed model achieves better micro-F1, macro-F1, and accuracy scores.
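To make the two ingredients named in the abstract concrete, the sketch below illustrates (a) a node2vec-style biased random walk, where a return parameter p and an in-out parameter q interpolate between BFS-like local exploration and DFS-like outward exploration, and (b) a GCN-style neighborhood mean aggregation. This is a minimal illustration under assumed simplifications (adjacency-list graph, scalar node features), not the paper's actual model or code.

```python
import random

def biased_walk(graph, start, length, p=1.0, q=1.0, rng=random):
    """node2vec-style second-order random walk (illustrative sketch).

    Small p favors returning to the previous node (BFS-like, local);
    small q favors moving away from it (DFS-like, global).
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph.get(cur, [])
        if not nbrs:
            break  # dead end: stop the walk early
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))  # first step is unbiased
            continue
        prev = walk[-2]
        weights = []
        for nxt in nbrs:
            if nxt == prev:                       # step back to previous node
                weights.append(1.0 / p)
            elif nxt in graph.get(prev, []):      # stay near prev (BFS-like)
                weights.append(1.0)
            else:                                 # move outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

def mean_aggregate(graph, features):
    """GCN-style propagation sketch: replace each node's (scalar) feature
    with the mean over itself and its neighbors (self-loop included)."""
    out = {}
    for node, feat in features.items():
        vals = [features[n] for n in graph.get(node, [])] + [feat]
        out[node] = sum(vals) / len(vals)
    return out

# Toy undirected graph: path 0-1-2-3 plus triangle 0-1-4.
graph = {0: [1, 4], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [0, 1]}
rng = random.Random(0)
walk = biased_walk(graph, start=0, length=10, p=0.5, q=2.0, rng=rng)
agg = mean_aggregate(graph, {n: float(n) for n in graph})
```

Stacking `mean_aggregate` several times mimics how deeper GCN layers spread nonlinear structural information, while the walk sequences are what a shallow skip-gram-style model would consume; the proposed method, per the abstract, trains both signals jointly.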
Full PDF Version: 
Under Review