STAT946F17/ Learning Important Features Through Propagating Activation Differences

From statwiki, revision as of 23:56, 26 October 2017 by H4lyu.

This is a summary of the ICML 2017 paper [1].

Introduction

Deep neural networks are often criticized for their "black box" nature, which is a barrier to adoption in applications where interpretability is essential. This opacity also makes it difficult to analyze and improve the structure of a model. The paper under review presents the DeepLIFT method, which decomposes the output of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. This is a form of sensitivity analysis and helps in understanding the model better.
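To make the idea of per-feature contributions concrete, here is a minimal sketch of DeepLIFT's "summation-to-delta" property for the simplest possible case, a single linear unit. The function name, weights, and reference input below are illustrative assumptions, not taken from the paper's code; for a linear unit the contribution of feature i relative to a reference input is w_i * (x_i - x_ref_i), and these contributions sum exactly to the change in the output.

```python
import numpy as np

def deeplift_linear_contributions(w, x, x_ref):
    """Per-feature contributions to the output difference t - t_ref
    for a linear unit t = w . x, relative to a reference input x_ref.

    For a linear unit, feature i contributes w_i * (x_i - x_ref_i),
    and the contributions sum to the output difference exactly
    (the "summation-to-delta" property)."""
    return w * (x - x_ref)

# Illustrative (made-up) weights, input, and reference input.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])
x_ref = np.array([0.0, 0.0, 0.0])

contribs = deeplift_linear_contributions(w, x, x_ref)
delta_t = w @ x - w @ x_ref

print(contribs)                              # per-feature contributions
print(np.isclose(contribs.sum(), delta_t))   # summation-to-delta holds
```

In a real network, DeepLIFT propagates such contributions backwards through the nonlinear layers as well; the linear case above only illustrates the bookkeeping that the method's backpropagation rules are designed to preserve.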

Sensitivity Analysis

to be done

Failure of traditional methods

to be done

DeepLIFT scheme

to be done

Numerical results

to be done

References

[1] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. "Learning Important Features Through Propagating Activation Differences." ICML 2017.