DeGNN: Characterizing and Improving Graph Neural Networks with Graph Decomposition
- Xupeng Miao,
- Nezihe Merve Gürel,
- Wentao Zhang,
- Zhichao Han,
- Bo Li,
- Wei Min,
- Xi Rao,
- Hansheng Ren,
- Yinan Shan,
- Yingxia Shao,
- Yujie Wang,
- Fan Wu,
- Hui Xue,
- Yaming Yang,
- Zitao Zhang,
- Yang Zhao,
- Shuai Zhang,
- Yujing Wang,
- Bin Cui,
- Ce Zhang
KDD 2021
Despite the wide application of the Graph Convolutional Network (GCN), one major limitation is that it does not benefit from increasing depth and suffers from the oversmoothing problem. In this work, we first characterize this phenomenon from an information-theoretic perspective and show that, under certain conditions, the mutual information between the output after $l$ layers and the input of GCN converges to 0 exponentially in $l$. We also show that, on the other hand, graph decomposition can potentially weaken the conditions under which such a convergence rate holds, which allows our analysis to cover GraphCNN. Since different graph structures benefit only from decompositions suited to them, we propose an automatic, connectivity-aware graph decomposition algorithm, DeGNN, to improve the performance of general graph neural networks in practice. Extensive experiments on widely adopted benchmark datasets demonstrate that DeGNN not only significantly boosts the performance of the corresponding GNNs but also achieves state-of-the-art performance.
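The abstract's central quantitative claim, that the mutual information between input and output decays exponentially with depth, can be written schematically as follows. This is an illustrative paraphrase under the paper's "certain conditions", not its exact theorem; the symbols $X$, $H^{(l)}$, $C$, and $\rho$ are our notation.

```latex
% Schematic form of the oversmoothing result (our notation, not the
% paper's exact statement): X is the input, H^{(l)} the output after
% l GCN layers, C a constant, and rho < 1 a graph-dependent rate.
\[
  I\bigl(X;\, H^{(l)}\bigr) \;\le\; C\,\rho^{\,l},
  \qquad 0 < \rho < 1,
  \qquad\text{so}\qquad
  I\bigl(X;\, H^{(l)}\bigr) \xrightarrow[l \to \infty]{} 0 .
\]
```

On this reading, graph decomposition helps by weakening the conditions under which the bound applies (or improving the effective rate) for each subgraph.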
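To make the decomposition idea concrete, here is a minimal, self-contained sketch of decomposition-based propagation: the edge set is split into subgraphs, features are propagated over each normalized subgraph separately, and the results are concatenated. The split below is random for illustration; DeGNN's actual connectivity-aware decomposition differs, and all names here (`decompose_edges`, `decomposed_gcn_layer`) are our stand-ins, not the authors' implementation.

```python
# Sketch of decomposition-based GCN propagation (illustrative only).
import numpy as np

def normalize_adj(adj):
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + np.eye(adj.shape[0])          # self-loops keep deg >= 1
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def decompose_edges(adj, k=2, seed=0):
    """Toy decomposition: randomly assign each edge to one of k subgraphs.
    A connectivity-aware scheme would instead choose the split so that
    each subgraph stays well connected; random assignment is a stand-in."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    parts = [np.zeros((n, n)) for _ in range(k)]
    for i, j in zip(*np.triu_indices(n, 1)):
        if adj[i, j]:
            p = rng.integers(k)
            parts[p][i, j] = parts[p][j, i] = 1.0
    return parts

def decomposed_gcn_layer(parts_norm, x, weights):
    """One layer: propagate features over each normalized subgraph with
    per-part weights and a ReLU, then concatenate the part outputs."""
    outs = [np.maximum(a_hat @ x @ w, 0.0)
            for a_hat, w in zip(parts_norm, weights)]
    return np.concatenate(outs, axis=1)

# Tiny usage example on a random undirected graph.
rng = np.random.default_rng(0)
n, d, h, k = 8, 4, 3, 2
adj = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
adj = adj + adj.T                              # symmetric, no self-loops
x = rng.standard_normal((n, d))
parts_norm = [normalize_adj(a) for a in decompose_edges(adj, k)]
weights = [rng.standard_normal((d, h)) * 0.1 for _ in range(k)]
print(decomposed_gcn_layer(parts_norm, x, weights).shape)  # (8, 6)
```

Training the per-part weights jointly would follow the usual GCN recipe; the point of the sketch is only that each part propagates over a sparser, better-conditioned subgraph instead of the full adjacency.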