User contributions for Sverneka
12 November 2017
- 15:10, 12 November 2017 diff hist +506 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:21, 12 November 2017 diff hist +88 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:20, 12 November 2017 diff hist +96 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:19, 12 November 2017 diff hist +131 f17Stat946PaperSignUp →Paper presentation
- 06:06, 12 November 2017 diff hist +50 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:01, 12 November 2017 diff hist +51 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:50, 12 November 2017 diff hist 0 N File:algorithm2.png No edit summary current
- 05:49, 12 November 2017 diff hist 0 N File:algorithm1.png No edit summary
- 05:49, 12 November 2017 diff hist +3,811 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:37, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Sparse Linear Explanations
- 05:35, 12 November 2017 diff hist +604 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:23, 12 November 2017 diff hist +2,146 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:17, 12 November 2017 diff hist 0 N File:LIME.jpg No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:inception example.png No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:decision boundary.png No edit summary current
- 04:59, 12 November 2017 diff hist +7,256 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 04:22, 12 November 2017 diff hist +2,549 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:24, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png current
- 02:22, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png
- 02:21, 12 November 2017 diff hist +423 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:19, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png current
- 02:17, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:16, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:12, 12 November 2017 diff hist +356 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:00, 12 November 2017 diff hist −17 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:59, 12 November 2017 diff hist +44 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:47, 12 November 2017 diff hist +34 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:46, 12 November 2017 diff hist +363 N File:explanation example.png Figure 2: Explaining individual predictions of competing classifiers trying to determine if a document is about “Christianity” or “Atheism”. The bar chart represents the importance given to the most relevant words, also highlighted in the text....
- 01:45, 12 November 2017 diff hist +1,644 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:40, 12 November 2017 diff hist +2,274 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:30, 12 November 2017 diff hist +20 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:29, 12 November 2017 diff hist 0 N File:LIME.png No edit summary
- 01:26, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
11 November 2017
- 20:05, 11 November 2017 diff hist +1,199 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 12:05, 11 November 2017 diff hist +1,724 N "Why Should I Trust You?": Explaining the Predictions of Any Classifier Created page with "==Introduction== Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engin..."
- 11:51, 11 November 2017 diff hist −70 f17Stat946PaperSignUp →Paper presentation
- 11:36, 11 November 2017 diff hist −64 f17Stat946PaperSignUp →Paper presentation
9 November 2017
- 13:17, 9 November 2017 diff hist +2 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +6 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +106 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →References
- 13:14, 9 November 2017 diff hist +254 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 12:46, 9 November 2017 diff hist +1 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Introduction
- 00:59, 9 November 2017 diff hist +229 FeUdal Networks for Hierarchical Reinforcement Learning →Conclusion
- 00:43, 9 November 2017 diff hist +54 FeUdal Networks for Hierarchical Reinforcement Learning →Introduction
- 00:30, 9 November 2017 diff hist +251 Learning the Number of Neurons in Deep Networks →Critique
- 00:21, 9 November 2017 diff hist +292 Learning the Number of Neurons in Deep Networks →Related Work
- 00:11, 9 November 2017 diff hist −1 m Learning the Number of Neurons in Deep Networks →Related Work
8 November 2017
- 22:04, 8 November 2017 diff hist +2 m Learning the Number of Neurons in Deep Networks →Introduction
7 November 2017
- 01:56, 7 November 2017 diff hist +489 Convolutional Sequence to Sequence Learning Added future developments
- 01:26, 7 November 2017 diff hist +231 Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Critique