Graph Structure of Neural Networks
= Major Conclusions (Section 5 in the paper) =
== 1. The graph structure of neural networks matters ==
== 2. A “sweet spot” of relational graphs leads to neural networks with significantly improved predictive performance ==
== 3. A neural network’s performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph ==
== 4. The findings are consistent across many different tasks and datasets ==
== 5. Top architectures can be identified efficiently ==
== 6. Well-performing neural networks have graph structures surprisingly similar to those of real biological neural networks ==
= Critique =
= Presented By =
Xiaolan Xu, Robin Wen, Yue Weng, Beizhen Chang
= Introduction =
The paper develops a new way of representing a neural network as a graph, called a relational graph. The key insight is to focus on message exchange rather than just on directed data flow. As a simple example, for a fixed-width fully-connected layer, one input channel and one output channel can be represented together as a single node, and an edge in the relational graph represents the message exchange between the two nodes (Figure 1(a)).
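To make the mapping concrete, below is a minimal sketch of how a fixed-width fully-connected layer could be expressed as a relational graph. The use of networkx and the node count of 8 are illustrative assumptions rather than the paper's implementation: since every (input channel, output channel) node exchanges messages with every other node, the relational graph of a fully-connected layer is simply a complete graph.

<syntaxhighlight lang="python">
import networkx as nx

def mlp_layer_as_relational_graph(n_nodes=8):
    """Illustrative sketch (not the paper's code): a fixed-width
    fully-connected layer as a relational graph.

    Each node bundles one input channel and one output channel. Because
    the layer is fully connected, every pair of nodes exchanges messages,
    so the relational graph is the complete graph on n_nodes nodes.
    """
    return nx.complete_graph(n_nodes)

G = mlp_layer_as_relational_graph()
print(G.number_of_nodes(), G.number_of_edges())  # 8 nodes, 28 edges
</syntaxhighlight>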
= Relational Graph =
== Parameter Definition ==
(1) Clustering Coefficient
(2) Average Path Length
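The clustering coefficient of a node is the fraction of pairs of its neighbours that are themselves connected (averaged over all nodes), and the average path length is the mean shortest-path distance over all pairs of nodes. The snippet below is a hedged sketch of how these two statistics could be computed with networkx (an assumption; it is not the paper's released code). The connected Watts–Strogatz generator used here is also an assumption, chosen only because it is a standard small-world model related to the relaxed "WS-flex" generator the paper uses to sweep the (C, L) space.

<syntaxhighlight lang="python">
import networkx as nx

# Illustrative relational graph: a connected Watts-Strogatz small-world
# graph. (The paper sweeps a relaxed "WS-flex" variant; this standard
# generator is used here only to demonstrate the two statistics.)
G = nx.connected_watts_strogatz_graph(n=64, k=4, p=0.1, seed=0)

C = nx.average_clustering(G)             # clustering coefficient, averaged over nodes
L = nx.average_shortest_path_length(G)   # average path length over all node pairs
print(f"C = {C:.3f}, L = {L:.3f}")
</syntaxhighlight>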