Graph Neural Networks Inspired by Classical Iterative Algorithms

Published on Mar 10, 2021 in arXiv: Learning
Yongyi Yang (Estimated H-index: 1), Tang Liu (Estimated H-index: 1), + 6 authors, David Wipf (Estimated H-index: 33)
Abstract
Despite the recent success of graph neural networks (GNN), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks. To at least partially address these issues within a simple transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing while nonetheless capturing long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective task, achieving competitive or superior node classification accuracy.
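To make the first ingredient concrete, here is a minimal numpy sketch (ours, not the authors' released code) of propagation obtained by gradient descent on a graph-regularized energy of the form E(Y) = ||Y - f(X)||_F^2 + lam * tr(Y^T L Y); the anchor term ||Y - f(X)||^2 is what keeps arbitrarily many propagation steps from collapsing all node representations:

```python
import numpy as np

def energy_descent_propagation(A, f_X, lam=1.0, alpha=0.1, steps=32):
    """Propagate by gradient descent on
        E(Y) = ||Y - f(X)||_F^2 + lam * tr(Y^T L Y),  with L = D - A.
    The anchor term keeps Y tied to the base features f(X), so even an
    arbitrary number of steps cannot collapse all rows to a constant
    vector (no oversmoothing), while the Laplacian term diffuses
    information along edges (long-range dependencies)."""
    L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian
    Y = f_X.copy()
    for _ in range(steps):
        grad = 2.0 * (Y - f_X) + 2.0 * lam * (L @ Y)
        Y -= alpha * grad                   # one gradient/propagation step
    return Y

# Toy usage: a 4-node path graph with 2-d base features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
f_X = np.random.randn(4, 2)
Y = energy_descent_propagation(A, f_X)
```

The paper's IRLS-derived attention can be viewed as swapping the fixed quadratic edge penalty for a robust one, which introduces per-edge reweighting inside the same loop; the sketch above omits that step.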
References (59)
Meiqi Zhu (H-index: 6), Xiao Wang (H-index: 34), …, Peng Cui (H-index: 48); 5 authors
Graph Neural Networks (GNNs) have received considerable attention on graph-structured data learning for a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs basically follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms with a unified opt...
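As one concrete instance of such a connection (our illustration, not an excerpt from this paper), the minimizer of a Laplacian-regularized objective has a closed form whose Neumann-series expansion is exactly a sequence of sparse propagation steps:

```python
import numpy as np

np.random.seed(0)
n, d, lam = 5, 3, 0.05
A = (np.random.rand(n, n) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                 # random undirected graph
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
X = np.random.randn(n, d)

# Exact minimizer of ||Y - X||_F^2 + lam * tr(Y^T L Y):
Y_closed = np.linalg.solve(np.eye(n) + lam * L, X)

# Neumann series (I + lam*L)^-1 = sum_k (-lam*L)^k, valid for small lam:
Y_series, term = np.zeros_like(X), X.copy()
for _ in range(200):
    Y_series += term
    term = -lam * (L @ term)                   # one sparse propagation step

assert np.allclose(Y_series, Y_closed, atol=1e-8)
```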
Jiong Zhu (H-index: 6), Ryan A. Rossi (H-index: 13), …, Danai Koutra (H-index: 26); 7 authors
Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, most existing GNN models have an implicit assumption of homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modelin...
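A toy sketch of the compatibility-matrix idea (ours; CPGNN itself learns the matrix end-to-end, and the function below is only illustrative): class beliefs are propagated through a c x c matrix H whose (i, j) entry encodes how likely a class-i node is to link to a class-j node, so heterophilous link patterns can be represented directly:

```python
import numpy as np

def propagate_beliefs(A, B0, H, steps=2):
    """Toy belief propagation with a compatibility matrix H (c x c).
    B0: prior class beliefs per node (n x c). Under heterophily, H has
    small diagonal entries, unlike the implicit H = I that homophily-
    assuming GNNs bake in."""
    B = B0.copy()
    for _ in range(steps):
        B = B0 + A @ B @ H                         # neighbors vote through H
        B /= np.maximum(np.abs(B).sum(axis=1, keepdims=True), 1e-12)  # crude renorm
    return B
```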
Graph convolutional networks (GCNs) have achieved promising performance on various graph-based tasks. However, they suffer from over-smoothing when stacking more layers. In this paper, we present a quantitative study on this observation and develop novel insights towards deeper GCNs. First, we interpret the current graph convolutional operations from an optimization perspective and argue that over-smoothing is mainly caused by the naive first-order approximation of the solution to the optimiza...
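The over-smoothing phenomenon itself is easy to reproduce numerically (our demonstration, independent of this paper's proposed remedy): powers of the normalized adjacency converge to a rank-one matrix, so node features collapse to near-identical rows:

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                               # 6-node ring graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A += np.eye(n)                                   # self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt              # symmetric normalization

np.random.seed(0)
X = np.random.randn(n, 4)
for k in (1, 10, 100):
    Xk = np.linalg.matrix_power(A_hat, k) @ X
    print(k, np.ptp(Xk, axis=0).round(6))        # per-feature spread -> 0
```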
Sep 14, 2020 in NeurIPS (Neural Information Processing Systems)
Fangda Gu (University of California, Berkeley; H-index: 5), Heng Chang (THU: Tsinghua University; H-index: 5), …, Laurent El Ghaoui (University of California, Berkeley; H-index: 46); 5 authors
Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicit...
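Schematically (our sketch with assumed shapes, not the authors' implementation), the prediction is based on iterating an equilibrium equation of the form Z = phi(W Z A_hat + B U) to a fixed point, which gives an effectively unbounded receptive field:

```python
import numpy as np

def implicit_gnn_state(A_hat, U, W, B, phi=np.tanh, tol=1e-8, max_iter=500):
    """Solve the equilibrium Z = phi(W @ Z @ A_hat + B @ U) by fixed-point
    iteration. Assumed shapes: U is (p, n) input features, B is (d, p),
    W is (d, d), A_hat is (n, n), so the state Z is (d, n). Convergence
    requires a contraction, e.g. a sufficiently small norm of W."""
    Z = np.zeros_like(B @ U)
    for _ in range(max_iter):
        Z_new = phi(W @ Z @ A_hat + B @ U)
        if np.linalg.norm(Z_new - Z) < tol:      # reached the fixed point
            break
        Z = Z_new
    return Z
```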
Aug 23, 2020 in KDD (Knowledge Discovery and Data Mining)
Meng Liu (A&M: Texas A&M University; H-index: 6), Hongyang Gao (A&M: Texas A&M University; H-index: 13), …, Shuiwang Ji (A&M: Texas A&M University; H-index: 46); 3 authors
Graph neural networks have shown significant success in the field of graph representation learning. Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. Nevertheless, one layer of these neighborhood aggregation methods only considers immediate neighbors, and the performance decreases when going deeper to enable larger receptive fields. Several recent studies attribute this performance deterioration to the over-smoothing issue, which states ...
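One widely used way around this limitation, sketched generically below (this is an APPNP-style decoupling, not necessarily the exact method of this paper): transform features once, then propagate many steps with a restart term, so the receptive field grows without stacking transformation layers:

```python
import numpy as np

def decoupled_propagation(A_hat, X, mlp, K=10, alpha=0.1):
    """Decouple transformation from propagation: apply a feature
    transformation (mlp) once, then take K propagation steps with
    teleport probability alpha back to the initial prediction. The
    receptive field covers K hops without K transformation layers."""
    H0 = mlp(X)
    H = H0.copy()
    for _ in range(K):
        H = (1 - alpha) * (A_hat @ H) + alpha * H0   # propagate + restart
    return H
```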
Aug 23, 2020 in KDD (Knowledge Discovery and Data Mining)
Junteng Jia (Cornell University; H-index: 10), Austin R. Benson (Cornell University; H-index: 23)
A graph neural network transforms features in each vertex's neighborhood into a vector representation of the vertex. Afterward, each vertex's representation is used independently for predicting its label. This standard pipeline implicitly assumes that vertex labels are conditionally independent given their neighborhood features. However, this is a strong assumption, and we show that it is far from true on many real-world graph datasets. Focusing on regression tasks, we find that this conditional...
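A simplified way to exploit such residual correlation (our sketch; the paper itself models residuals with a multivariate Gaussian) is to propagate the errors observed at labeled vertices to their neighbors and correct the base predictions:

```python
import numpy as np

def residual_smoothing_correction(A_hat, y_pred, y_true, labeled_mask, K=20):
    """Toy residual propagation for graph regression: diffuse the errors
    observed on labeled nodes over the graph, assuming residuals of
    connected vertices are positively correlated, then correct y_pred."""
    r = np.zeros_like(y_pred)
    r[labeled_mask] = y_true[labeled_mask] - y_pred[labeled_mask]
    for _ in range(K):
        r = A_hat @ r                                 # spread the errors
        r[labeled_mask] = y_true[labeled_mask] - y_pred[labeled_mask]  # clamp
    return y_pred + r
```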
Jul 12, 2020 in ICML (International Conference on Machine Learning)
Ming Chen (RUC: Renmin University of China; H-index: 4), Zhewei Wei (RUC: Renmin University of China; H-index: 15), …, Yaliang Li (Alibaba Group; H-index: 24); 5 authors
Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the over-smoothing problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose the GCNII, an extension of the vanilla GCN model with two ...
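The two techniques are an initial residual connection and an identity mapping; one layer then looks roughly as follows (our sketch following the published formulation):

```python
import numpy as np

def gcnii_layer(A_hat, H, H0, W, alpha=0.1, beta=0.5):
    """One GCNII-style layer:
        H' = ReLU( ((1-alpha) * A_hat @ H + alpha * H0)
                   @ ((1-beta) * I + beta * W) )
    The initial residual (alpha * H0) keeps every layer connected to the
    input representation; the identity mapping ((1-beta) * I) keeps deep
    layers close to the identity. Both counteract over-smoothing; in the
    paper beta decays with depth (roughly lam / layer_index)."""
    d = H.shape[1]
    P = (1 - alpha) * (A_hat @ H) + alpha * H0
    return np.maximum(P @ ((1 - beta) * np.eye(d) + beta * W), 0.0)
```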
Jan 1, 2020 in NeurIPS (Neural Information Processing Systems)
Jiong Zhu (UM: University of Michigan; H-index: 6), Yujun Yan (UM: University of Michigan; H-index: 7), …, Danai Koutra (UM: University of Michigan; H-index: 26); 6 authors
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs -- ego- and neighbor-embedding separa...
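The first of those designs can be sketched as keeping the ego embedding separate from the aggregated neighbor embedding rather than averaging them together (our simplified illustration):

```python
import numpy as np

def separated_aggregation(A, H):
    """Ego- and neighbor-embedding separation: concatenate each node's
    own representation with the degree-normalized mean of its neighbors
    instead of mixing them into one average. Under heterophily, the ego
    features would otherwise be washed out by dissimilar neighbors.
    A is the adjacency WITHOUT self-loops; H is (n, d)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H_neigh = (A @ H) / deg                  # neighbors only
    return np.concatenate([H, H_neigh], axis=1)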
Jun 15, 2020 in NeurIPS (Neural Information Processing Systems)
Xiang Zhang (Harvard University; H-index: 177), Marinka Zitnik (Stanford University; H-index: 27)
Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite the proliferation of such methods and their success, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs). Here, we develop GNNGuard, a general defense approach against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be str...
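The flavor of the defense can be sketched as follows (a simplification of ours; GNNGuard additionally learns its pruning end-to-end and stabilizes edge weights across layers): down-weight or drop edges whose endpoint features are dissimilar before aggregation:

```python
import numpy as np

def similarity_pruned_adjacency(A, H, threshold=0.1):
    """Reweight edges by cosine similarity of endpoint features and prune
    edges below a threshold, so adversarially inserted edges between
    dissimilar nodes carry little or no weight during aggregation."""
    Hn = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    S = Hn @ Hn.T                          # pairwise cosine similarity
    W = A * S                              # keep only existing edges
    W[W < threshold] = 0.0                 # drop dissimilar edges
    row = W.sum(axis=1, keepdims=True)
    return W / np.maximum(row, 1e-12)      # row-normalize for aggregation
```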
Guohao Li (H-index: 9), Chenxin Xiong (H-index: 1), …, Bernard Ghanem (H-index: 49); 4 authors
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs. Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper. These challenges limit the representation power of GCNs on large-scale graphs. This paper proposes DeeperGCN that is capable of successfully and reliably training very ...
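One ingredient of DeeperGCN, a generalized softmax-with-temperature neighbor aggregation, can be sketched as below (our simplification; the full model also relies on pre-activation residual connections and message normalization):

```python
import numpy as np

def softmax_aggregate(msgs, beta=1.0):
    """Generalized neighbor aggregation: a per-channel softmax over the
    neighbor messages with inverse temperature beta. beta -> 0 recovers
    mean aggregation and beta -> inf recovers max aggregation, so one
    knob interpolates between the classic aggregators.
    msgs: (num_neighbors, d) messages arriving at a single node."""
    z = beta * msgs
    w = np.exp(z - z.max(axis=0))          # numerically stable softmax
    w = w / w.sum(axis=0)
    return (w * msgs).sum(axis=0)
```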
Cited By (2)
Jiuhai Chen (H-index: 1), Jonas Mueller (H-index: 15), …, David Wipf (H-index: 33); 7 authors
For supervised learning with tabular data, decision tree ensembles produced via boosting techniques generally dominate real-world applications involving iid training/test sets. However for graph data where the iid assumption is violated due to structured relations between samples, it remains unclear how to best incorporate this structure within existing boosting pipelines. To this end, we propose a generalized framework for iterating boosting with graph propagation steps that share node/sample i...
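In spirit (a toy sketch of ours using scikit-learn trees; the proposed framework is more general), each boosting round fits the residuals of a graph-propagated prediction, so successive trees correct structure-aware errors rather than iid ones:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_graph_regression(A_hat, X, y, rounds=20, lr=0.1, prop_steps=2):
    """Toy interleaving of boosting with graph propagation: each tree is
    fit to the residuals of the graph-propagated prediction, so later
    rounds see errors that already account for the graph structure."""
    F = np.zeros(len(y), dtype=float)
    trees = []
    P = np.linalg.matrix_power(A_hat, prop_steps)   # fixed propagation operator
    for _ in range(rounds):
        tree = DecisionTreeRegressor(max_depth=3).fit(X, y - P @ F)
        trees.append(tree)
        F += lr * tree.predict(X)
    return P @ F, trees
```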
Much of the recent progress made in node classification on graphs can be credited to the careful design of graph neural networks (GNNs) and label propagation algorithms. However, in the literature, in addition to improvements to the model architecture, there are a number of improvements that are either briefly mentioned as implementation details or visible only in source code, and these overlooked techniques may play a pivotal role in their practical use. In this paper, we first summarize a collection of ...