DETECTING STATISTICAL INTERACTIONS FROM NEURAL NETWORK WEIGHTS
Introduction
Regression analysis is essential in many fields. For a long time, however, model complexity left practitioners with only simple tools based on linear regression. With the growth in available computational power, far more complicated models can now be fit, and the main obstacle is no longer computation but interpretability. Neural networks usually exhibit superior predictive power compared to traditional statistical regression methods, but their highly complicated structure prevents users from understanding the fitted results. In this paper, we present one way of bringing interpretability to neural networks: detecting statistical interactions from the network weights.
Note that in this paper we consider only one specific type of neural network, the feed-forward neural network. The methodology discussed here can serve as a basis for building interpretation methods for other types of networks as well.
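For concreteness, the following is a minimal sketch of the kind of feed-forward network we have in mind, with one weight matrix and bias vector per layer; the layer sizes, activation choice, and variable names are illustrative assumptions, not part of the method itself.

import numpy as np

rng = np.random.default_rng(0)

p = 4             # number of input features (illustrative)
sizes = [p, 8, 8, 1]  # input, two hidden layers, scalar output

# One weight matrix W and bias vector b per layer.
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    # Propagate an input vector x through the network,
    # applying ReLU on hidden layers only.
    h = x
    for l, (Wl, bl) in enumerate(zip(W, b)):
        h = Wl @ h + bl
        if l < len(W) - 1:
            h = np.maximum(h, 0.0)
    return h

y = forward(rng.standard_normal(p))  # y has shape (1,)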
Notations
Before diving into the methodology, we define some notation. Most of it is standard.
1. Vectors: vectors are denoted by bold lowercase letters, e.g., v, w.
2. Matrices: matrices are denoted by bold uppercase letters, e.g., V, W.
3. Integer sets: for an integer p \in Z, we define [p] := {1, 2, 3, ..., p}.
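As a small illustration of these conventions (the particular dimensions below are arbitrary examples, not definitions used later):

\mathbf{v} \in \mathbb{R}^{p}, \qquad \mathbf{W} \in \mathbb{R}^{m \times n}, \qquad [4] = \{1, 2, 3, 4\}.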