User contributions for Sverneka
13 November 2017
- 02:22, 13 November 2017 diff hist +8 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Local Interpretable Model-Agnostic Explanations (LIME)
- 02:08, 13 November 2017 diff hist +4 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:04, 13 November 2017 diff hist +118 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 00:52, 13 November 2017 diff hist +348 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
12 November 2017
- 23:26, 12 November 2017 diff hist +106 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →References
- 22:48, 12 November 2017 diff hist +62 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 22:32, 12 November 2017 diff hist −340 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Desired Characteristics for Explainers
- 22:25, 12 November 2017 diff hist −752 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →The Case for Explanations
- 22:14, 12 November 2017 diff hist −107 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
- 22:04, 12 November 2017 diff hist +342 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 19:01, 12 November 2017 diff hist +1,579 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:15, 12 November 2017 diff hist +308 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:05, 12 November 2017 diff hist 0 N File:choose classifier.png No edit summary current
- 18:05, 12 November 2017 diff hist 0 N File:table1.png No edit summary
- 18:05, 12 November 2017 diff hist +2,426 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:59, 12 November 2017 diff hist 0 N File:recall2.png No edit summary current
- 15:59, 12 November 2017 diff hist 0 N File:recall1.png No edit summary current
- 15:56, 12 November 2017 diff hist +217 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:53, 12 November 2017 diff hist 0 N File:toy example.png No edit summary current
- 15:52, 12 November 2017 diff hist +7,711 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:10, 12 November 2017 diff hist +506 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:21, 12 November 2017 diff hist +88 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:20, 12 November 2017 diff hist +96 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:19, 12 November 2017 diff hist +131 f17Stat946PaperSignUp →Paper presentation
- 06:06, 12 November 2017 diff hist +50 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:01, 12 November 2017 diff hist +51 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:50, 12 November 2017 diff hist 0 N File:algorithm2.png No edit summary current
- 05:49, 12 November 2017 diff hist 0 N File:algorithm1.png No edit summary
- 05:49, 12 November 2017 diff hist +3,811 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:37, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Sparse Linear Explanations
- 05:35, 12 November 2017 diff hist +604 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:23, 12 November 2017 diff hist +2,146 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:17, 12 November 2017 diff hist 0 N File:LIME.jpg No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:inception example.png No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:decision boundary.png No edit summary current
- 04:59, 12 November 2017 diff hist +7,256 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 04:22, 12 November 2017 diff hist +2,549 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:24, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png current
- 02:22, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png
- 02:21, 12 November 2017 diff hist +423 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:19, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png current
- 02:17, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:16, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:12, 12 November 2017 diff hist +356 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:00, 12 November 2017 diff hist −17 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:59, 12 November 2017 diff hist +44 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:47, 12 November 2017 diff hist +34 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:46, 12 November 2017 diff hist +363 N File:explanation example.png Figure 2: Explaining individual predictions of competing classifiers trying to determine if a document is about "Christianity" or "Atheism". The bar chart represents the importance given to the most relevant words, also highlighted in the text....
- 01:45, 12 November 2017 diff hist +1,644 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:40, 12 November 2017 diff hist +2,274 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:30, 12 November 2017 diff hist +20 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:29, 12 November 2017 diff hist 0 N File:LIME.png No edit summary
- 01:26, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
11 November 2017
- 20:05, 11 November 2017 diff hist +1,199 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 12:05, 11 November 2017 diff hist +1,724 N "Why Should I Trust You?": Explaining the Predictions of Any Classifier Created page with "==Introduction== Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engin..."
- 11:51, 11 November 2017 diff hist −70 f17Stat946PaperSignUp →Paper presentation
- 11:36, 11 November 2017 diff hist −64 f17Stat946PaperSignUp →Paper presentation
9 November 2017
- 13:17, 9 November 2017 diff hist +2 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +6 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +106 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →References
- 13:14, 9 November 2017 diff hist +254 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 12:46, 9 November 2017 diff hist +1 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Introduction
- 00:59, 9 November 2017 diff hist +229 FeUdal Networks for Hierarchical Reinforcement Learning →Conclusion
- 00:43, 9 November 2017 diff hist +54 FeUdal Networks for Hierarchical Reinforcement Learning →Introduction
- 00:30, 9 November 2017 diff hist +251 Learning the Number of Neurons in Deep Networks →Critique
- 00:21, 9 November 2017 diff hist +292 Learning the Number of Neurons in Deep Networks →Related Work
- 00:11, 9 November 2017 diff hist −1 m Learning the Number of Neurons in Deep Networks →Related Work
8 November 2017
- 22:04, 8 November 2017 diff hist +2 m Learning the Number of Neurons in Deep Networks →Introduction
7 November 2017
- 01:56, 7 November 2017 diff hist +489 Convolutional Sequence to Sequence Learning Added future developments
- 01:26, 7 November 2017 diff hist +231 Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Critique
- 01:20, 7 November 2017 diff hist +175 Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Dropout, Subsampling, Dilated Convolution and Skip-Connections
- 00:31, 7 November 2017 diff hist +1 Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Why is RF important?
- 00:31, 7 November 2017 diff hist +120 Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Why is RF important?
- 00:22, 7 November 2017 diff hist +7 m Understanding the Effective Receptive Field in Deep Convolutional Neural Networks →Why is RF important?
6 November 2017
- 23:10, 6 November 2017 diff hist +282 meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting →Results: Added future consideration.
- 22:45, 6 November 2017 diff hist 0 m meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting →Related Work
- 22:43, 6 November 2017 diff hist −6 meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting →Related Work
- 22:41, 6 November 2017 diff hist 0 m meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting →Related Work
- 22:26, 6 November 2017 diff hist −7 m meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting →Introduction
31 October 2017
- 03:50, 31 October 2017 diff hist +180 Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition →Boosting Method
- 03:41, 31 October 2017 diff hist +115 Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition →Conclusion
- 03:15, 31 October 2017 diff hist +121 Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition →References
30 October 2017
- 23:35, 30 October 2017 diff hist +46 Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition →Introduction
- 23:27, 30 October 2017 diff hist +152 m Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition Added a basic reference to boosting
2 October 2017
- 14:11, 2 October 2017 diff hist −4 f17Stat946PaperSignUp No edit summary
- 14:10, 2 October 2017 diff hist +2 f17Stat946PaperSignUp No edit summary
- 14:10, 2 October 2017 diff hist +94 f17Stat946PaperSignUp No edit summary
- 14:05, 2 October 2017 diff hist +102 f17Stat946PaperSignUp No edit summary
- 14:04, 2 October 2017 diff hist +1 f17Stat946PaperSignUp No edit summary
- 14:04, 2 October 2017 diff hist +69 f17Stat946PaperSignUp No edit summary
- 14:03, 2 October 2017 diff hist +15 f17Stat946PaperSignUp Summary