
MVRACE: Multi-view Graph Contrastive Encoding for Graph Neural Network Pre-training

Abstract

Graph neural networks (GNNs) have become the de facto paradigm for graph representation learning. Generally, GNNs are trained end-to-end with supervision, requiring considerable task-specific labeled data. To reduce the labeling burden, recent works leverage self-supervised tasks to pre-train an expressive GNN model on abundant unlabeled data and then fine-tune the trained model on downstream datasets with only a few labels. However, existing GNN pre-training approaches concentrate on a single view for graph self-supervised learning and ignore the rich semantic information in graphs, which leads to low sample efficiency during pre-training. To tackle these challenges, we propose MVRACE, a multi-view graph contrastive encoding framework for GNN pre-training. The key insight is to construct node-level and graph-level views that capture local attribute information and global structure in a graph. Concretely, the node-level view utilizes graph centrality and encodes the $r$-ego network to capture local-to-whole relationships in a graph. The graph-level view encodes graph pairs to explore diverse graph structures and strengthen the discrimination ability of the GNN encoder. In addition, we combine the two views through a joint contrastive loss function that integrates node- and graph-level semantic information simultaneously. Comprehensive experiments on datasets from multiple domains demonstrate that our approach yields competitive performance compared to state-of-the-art methods.
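
To make the joint objective concrete, the sketch below shows one way a node-level and a graph-level contrastive term could be combined, assuming an InfoNCE-style formulation in PyTorch. The function names `info_nce` and `joint_contrastive_loss`, the temperature `tau`, and the weighting `lam` are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of a joint multi-view contrastive objective,
# assuming an InfoNCE formulation; names and hyperparameters are
# hypothetical, not taken from the MVRACE paper.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE loss between two batches of view embeddings.

    z1, z2: (batch, dim) embeddings of the same samples under two views.
    Row i of z1 and row i of z2 form the positive pair; all other rows
    in the batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau              # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)  # positives lie on the diagonal

def joint_contrastive_loss(node_z1, node_z2, graph_z1, graph_z2,
                           lam: float = 1.0) -> torch.Tensor:
    """Combine node-level and graph-level contrastive terms.

    node_z1/node_z2: embeddings of node-level (e.g., r-ego network) view pairs.
    graph_z1/graph_z2: embeddings of graph-level view pairs.
    lam: weighting between the two terms (hypothetical hyperparameter).
    """
    return info_nce(node_z1, node_z2) + lam * info_nce(graph_z1, graph_z2)
```

Under this reading, the two views share a single pre-training signal: the node-level term ties local ego-network structure to the encoder, while the graph-level term pushes embeddings of distinct graphs apart; how the two are weighted is a design choice left open here.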
