STAT946F17/ Learning Important Features Through Propagating Activation Differences
This is a summary of the ICML 2017 paper [1].
Introduction
Deep neural networks are often criticized for their "black box" nature, which is a barrier to adoption in applications where interpretability is essential. The "black box" nature also makes it difficult to analyze and improve a model's structure. The topic paper presents DeepLIFT, a method that decomposes the output of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. This is a form of sensitivity analysis and helps us understand the model better.
Sensitivity Analysis
Sensitivity analysis asks how much each input feature influences a model's output. For neural networks, the most common approach is gradient-based: the partial derivative of the output with respect to each input feature (a saliency map) measures how the output responds to an infinitesimal perturbation of that feature. A related variant multiplies the gradient elementwise by the input, so that a feature's magnitude is taken into account as well as the model's local sensitivity to it.
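As a concrete illustration (not code from the paper), the sketch below computes a gradient-based saliency map for a hypothetical one-hidden-layer ReLU network; the weights are invented for the example:

```python
import numpy as np

# Hypothetical toy network: one ReLU hidden layer, linear scalar output.
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
b1 = np.array([-1.0, 0.0])
w2 = np.array([2.0, -1.0])

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return w2 @ h                     # linear output

def saliency(x):
    """Gradient of the output w.r.t. the input (gradient-based sensitivity)."""
    pre = W1 @ x + b1
    relu_grad = (pre > 0).astype(float)  # derivative of ReLU at the pre-activations
    return W1.T @ (w2 * relu_grad)       # chain rule back to the input

print(saliency(np.array([1.0, 1.0])))    # per-feature sensitivities
```

The same quantity could be obtained with automatic differentiation in any deep-learning framework; the manual chain rule here just makes the computation explicit.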
Failure of traditional methods
Gradient-based attribution fails when a neuron saturates: once a unit operates in a flat region of its nonlinearity (e.g., a ReLU that is switched off, or a sigmoid near 0 or 1), the gradient is zero even though the input may have been essential in driving the output to its current value. Gradients are also discontinuous, so attributions can change abruptly under tiny input perturbations. Perturbation-based methods, which re-evaluate the network with individual features masked out, avoid some of these issues but are computationally expensive and can still miss saturation effects when the perturbations are too small to escape the flat region.
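A minimal sketch of the saturation problem, using a made-up saturating unit f(x) = x - relu(x - 1), which grows linearly up to 1 and is flat afterwards:

```python
import numpy as np

def f(x):
    # Saturating unit: f(x) = x for x < 1, then flat at 1.
    return x - np.maximum(0.0, x - 1.0)

x, ref = 2.0, 0.0
eps = 1e-6
grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # local gradient at x = 2

print(grad)           # 0.0: the gradient claims the input is irrelevant
print(f(x) - f(ref))  # 1.0: relative to the reference, the input clearly mattered
```

The difference-from-reference on the last line is exactly the quantity DeepLIFT propagates, which is why it remains informative in saturated regimes where the gradient vanishes.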
DeepLIFT scheme
DeepLIFT explains the difference of the output from a "reference" output in terms of the differences of the inputs from their reference values. For a target neuron t with reference activation t0, define delta_t = t - t0. DeepLIFT assigns each input x_i a contribution C(delta_x_i, delta_t) satisfying the summation-to-delta property: the contributions sum to delta_t. Dividing a contribution by delta_x gives a multiplier m = C / delta_x, and multipliers obey a chain rule analogous to that for gradients, so all contributions can be computed in a single backward pass. For a nonlinearity, the Rescale rule sets the multiplier to delta_y / delta_x, the slope of the secant line between the reference and actual activations, which stays informative even where the local gradient is zero. The paper also proposes a RevealCancel rule that propagates positive and negative contributions separately to better handle effects such as those of min and max operations.
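The sketch below applies the Rescale rule to a hypothetical one-hidden-layer ReLU network (weights invented for illustration) and checks the summation-to-delta property. It is a simplified reading of the scheme, not the authors' implementation:

```python
import numpy as np

# Hypothetical toy network: ReLU hidden layer, linear scalar output.
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
b1 = np.array([-1.0, 0.0])
w2 = np.array([2.0, -1.0])

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)
    return w2 @ h

def deeplift_rescale(x, x_ref):
    """Input contributions via the Rescale rule, relative to a reference input."""
    pre, pre_ref = W1 @ x + b1, W1 @ x_ref + b1
    h, h_ref = np.maximum(0.0, pre), np.maximum(0.0, pre_ref)
    d_pre = pre - pre_ref
    # Rescale rule: multiplier = delta(nonlinearity output) / delta(nonlinearity input);
    # fall back to the gradient when delta is ~0 to avoid dividing by zero.
    m_relu = np.where(np.abs(d_pre) > 1e-10,
                      (h - h_ref) / np.where(np.abs(d_pre) > 1e-10, d_pre, 1.0),
                      (pre > 0).astype(float))
    m_input = W1.T @ (w2 * m_relu)  # chain rule for multipliers
    return m_input * (x - x_ref)    # contributions C = m * delta(x)

x, x_ref = np.array([1.0, 1.0]), np.array([0.0, 0.0])
contribs = deeplift_rescale(x, x_ref)
# Summation-to-delta: contributions add up to forward(x) - forward(x_ref).
print(contribs.sum(), forward(x) - forward(x_ref))
```

For this linear-plus-ReLU architecture the bias terms cancel in d_pre, so the summation-to-delta check holds exactly; with gradient-based saliency at the same point, the first hidden unit's influence would be invisible because it is switched off.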
Numerical results
The paper evaluates DeepLIFT on a convolutional network trained on MNIST and on simulated genomic sequence data. On MNIST, pixels ranked important by each attribution method are erased in order to convert images of one digit class into another; erasing the DeepLIFT-identified pixels changes the class score more effectively than erasing pixels chosen by gradient-based methods. On the genomic data, where ground-truth motifs are embedded in the sequences, DeepLIFT (particularly with the RevealCancel rule) recovers the relevant positions more reliably than gradients, gradient-times-input, and guided backpropagation.
References
[1] Shrikumar, A., Greenside, P., and Kundaje, A. Learning Important Features Through Propagating Activation Differences. Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.