User contributions for Sverneka
12 November 2017
- 19:01, 12 November 2017 diff hist +1,579 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:15, 12 November 2017 diff hist +308 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:05, 12 November 2017 diff hist 0 N File:choose classifier.png No edit summary current
- 18:05, 12 November 2017 diff hist 0 N File:table1.png No edit summary
- 18:05, 12 November 2017 diff hist +2,426 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:59, 12 November 2017 diff hist 0 N File:recall2.png No edit summary current
- 15:59, 12 November 2017 diff hist 0 N File:recall1.png No edit summary current
- 15:56, 12 November 2017 diff hist +217 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:53, 12 November 2017 diff hist 0 N File:toy example.png No edit summary current
- 15:52, 12 November 2017 diff hist +7,711 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:10, 12 November 2017 diff hist +506 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:21, 12 November 2017 diff hist +88 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:20, 12 November 2017 diff hist +96 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:19, 12 November 2017 diff hist +131 f17Stat946PaperSignUp →Paper presentation
- 06:06, 12 November 2017 diff hist +50 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:01, 12 November 2017 diff hist +51 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:50, 12 November 2017 diff hist 0 N File:algorithm2.png No edit summary current
- 05:49, 12 November 2017 diff hist 0 N File:algorithm1.png No edit summary
- 05:49, 12 November 2017 diff hist +3,811 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:37, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Sparse Linear Explanations
- 05:35, 12 November 2017 diff hist +604 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:23, 12 November 2017 diff hist +2,146 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:17, 12 November 2017 diff hist 0 N File:LIME.jpg No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:inception example.png No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:decision boundary.png No edit summary current
- 04:59, 12 November 2017 diff hist +7,256 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 04:22, 12 November 2017 diff hist +2,549 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:24, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png current
- 02:22, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png
- 02:21, 12 November 2017 diff hist +423 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:19, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png current
- 02:17, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:16, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:12, 12 November 2017 diff hist +356 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:00, 12 November 2017 diff hist −17 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:59, 12 November 2017 diff hist +44 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:47, 12 November 2017 diff hist +34 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:46, 12 November 2017 diff hist +363 N File:explanation example.png Figure 2: Explaining individual predictions of competing classifiers trying to determine if a document is about “Christianity” or “Atheism”. The bar chart represents the importance given to the most relevant words, also highlighted in the text....
- 01:45, 12 November 2017 diff hist +1,644 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:40, 12 November 2017 diff hist +2,274 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:30, 12 November 2017 diff hist +20 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:29, 12 November 2017 diff hist 0 N File:LIME.png No edit summary
- 01:26, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
11 November 2017
- 20:05, 11 November 2017 diff hist +1,199 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 12:05, 11 November 2017 diff hist +1,724 N "Why Should I Trust You?": Explaining the Predictions of Any Classifier Created page with "==Introduction== Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engin..."
- 11:51, 11 November 2017 diff hist −70 f17Stat946PaperSignUp →Paper presentation
- 11:36, 11 November 2017 diff hist −64 f17Stat946PaperSignUp →Paper presentation
9 November 2017
- 13:17, 9 November 2017 diff hist +2 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +6 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →Conclusion, Future Work and Open questions
- 13:16, 9 November 2017 diff hist +106 STAT946F17/Cognitive Psychology For Deep Neural Networks: A Shape Bias Case Study →References