Graph Structure of Neural Networks


Presented By

Xiaolan Xu, Robin Wen, Yue Weng, Beizhen Chang


We develop a new way of representing a neural network as a graph, which we call a relational graph. Our key insight is to focus on message exchange rather than just on directed data flow. As a simple example, for a fixed-width fully-connected layer, we can represent one input channel and one output channel together as a single node, and an edge in the relational graph represents the message exchange between the two nodes (Figure 1(a) in the paper).
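To make the message-exchange view concrete, the sketch below performs one round of message exchange on a small relational graph: each node aggregates weighted messages from its neighbours. This is a minimal illustration, not the paper's implementation; the adjacency matrix, feature values, and the `message_round` helper are hypothetical.

<pre>
import numpy as np

# Hypothetical 4-node relational graph given as an adjacency matrix.
# An edge (u, v) means nodes u and v exchange messages.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])

# One scalar feature per node (the fixed-width case: one input/output
# channel pair is collapsed into a single node).
x = np.array([0.5, -1.0, 2.0, 0.3])

# Per-edge message weights (in a fully-connected layer these would be
# the layer's weights); random values here, purely for illustration.
W = np.random.default_rng(0).normal(size=A.shape)

def message_round(A, W, x):
    """One round of message exchange: every node sums the weighted
    messages sent by its neighbours (and itself, via a self-loop)."""
    A_self = A + np.eye(len(x))   # include a self-loop for each node
    return (A_self * W) @ x       # mask the weights by the graph edges

x_next = message_round(A, W, x)
print(x_next)
</pre>

For a complete relational graph this reduces to an ordinary fully-connected layer; sparser relational graphs correspond to sparser weight patterns.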

Relational Graph

Parameter Definitions

(1) Clustering Coefficient

The clustering coefficient of a node is the fraction of possible edges among its neighbours that actually exist in the graph; it measures how tightly the node's neighbourhood is interconnected. The clustering coefficient C of a graph is this quantity averaged over all nodes.

(2) Average Path Length

The average path length L of a graph is the shortest-path distance between a pair of nodes, averaged over all pairs of nodes.
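Both measures are standard graph statistics and can be computed with off-the-shelf tools. The sketch below, assuming the `networkx` library and a hypothetical example graph, computes C and L for a small-world graph:

<pre>
import networkx as nx

# Hypothetical example: a Watts-Strogatz small-world graph with
# 64 nodes, each initially wired to 8 neighbours, rewired with p=0.1.
G = nx.watts_strogatz_graph(n=64, k=8, p=0.1, seed=0)

C = nx.average_clustering(G)             # clustering coefficient
L = nx.average_shortest_path_length(G)   # average path length

print(f"clustering coefficient C = {C:.3f}")
print(f"average path length   L = {L:.3f}")
</pre>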

Experimental Setup (Section 4 in the paper)

Major Conclusions (Section 5 in the paper)

1. The graph structure of neural networks matters.

2. There is a “sweet spot” of relational graphs that leads to neural networks with significantly improved predictive performance.

3. A neural network’s performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph.

4. These findings are consistent across many different tasks and datasets.

5. Top architectures can be identified efficiently.

6. Well-performing neural networks have graph structures surprisingly similar to those of real biological neural networks.
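Conclusions 2 and 3 suggest a simple workflow: generate a family of candidate relational graphs, place each at its (C, L) coordinate, and compare measured performance across the plane. The sketch below is a hedged stand-in for the paper's graph generator, using `networkx` Watts-Strogatz graphs with varying rewiring probability to span a range of (C, L) values; in the paper, each candidate graph is turned into a neural network and actually trained.

<pre>
import networkx as nx

def graph_measures(G):
    """Return the (clustering coefficient, average path length) pair."""
    return nx.average_clustering(G), nx.average_shortest_path_length(G)

# Sweep rewiring probabilities to span a range of (C, L) values.
# connected_watts_strogatz_graph retries until the graph is connected,
# so the average path length is always well defined.
for p in [0.0, 0.01, 0.05, 0.1, 0.3, 0.6, 1.0]:
    G = nx.connected_watts_strogatz_graph(n=64, k=8, p=p, seed=0)
    C, L = graph_measures(G)
    print(f"p={p:<5} C={C:.3f} L={L:.3f}")

# Training a network built from each candidate graph and plotting its
# accuracy over the (C, L) plane is what reveals the "sweet spot".
</pre>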