User contributions for Sverneka
30 November 2017
- 12:10, 30 November 2017 diff hist +124 STAT946F17/ Automated Curriculum Learning for Neural Networks →Source
- 12:08, 30 November 2017 diff hist +183 STAT946F17/ Automated Curriculum Learning for Neural Networks →Critique
- 11:49, 30 November 2017 diff hist +103 STAT946F17/Decoding with Value Networks for Neural Machine Translation →References
- 11:46, 30 November 2017 diff hist +332 STAT946F17/Decoding with Value Networks for Neural Machine Translation →Conclusions and Future Work
- 11:45, 30 November 2017 diff hist −334 STAT946F17/ Automated Curriculum Learning for Neural Networks →Conclusion
- 11:45, 30 November 2017 diff hist +334 STAT946F17/ Automated Curriculum Learning for Neural Networks →Conclusion
28 November 2017
- 02:29, 28 November 2017 diff hist −1 STAT946F17/Conditional Image Generation with PixelCNN Decoders →Critique
- 02:27, 28 November 2017 diff hist +450 STAT946F17/Conditional Image Generation with PixelCNN Decoders →Critique
- 02:13, 28 November 2017 diff hist +173 Hierarchical Question-Image Co-Attention for Visual Question Answering →Critique
- 02:11, 28 November 2017 diff hist +159 Hierarchical Question-Image Co-Attention for Visual Question Answering →Reference
- 02:06, 28 November 2017 diff hist +231 Hierarchical Question-Image Co-Attention for Visual Question Answering →Critique
23 November 2017
- 14:46, 23 November 2017 diff hist +102 Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks →Critique
- 14:24, 23 November 2017 diff hist +187 Unsupervised Domain Adaptation with Residual Transfer Networks →Critique
- 03:10, 23 November 2017 diff hist +66 Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks →References
- 03:09, 23 November 2017 diff hist +328 Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks →Conclusion
21 November 2017
- 03:38, 21 November 2017 diff hist +476 Modular Multitask Reinforcement Learning with Policy Sketches →Conclusion & Critique
- 03:05, 21 November 2017 diff hist +111 Universal Style Transfer via Feature Transforms →Evaluation
- 02:57, 21 November 2017 diff hist +353 Deep Alternative Neural Network: Exploring Contexts As Early As Possible For Action Recognition →Conclusion
16 November 2017
- 13:41, 16 November 2017 diff hist +104 LightRNN: Memory and Computation-Efficient Recurrent Neural Networks →Reference
- 13:39, 16 November 2017 diff hist +337 LightRNN: Memory and Computation-Efficient Recurrent Neural Networks →Remarks
15 November 2017
- 04:14, 15 November 2017 diff hist −10 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
14 November 2017
- 18:35, 14 November 2017 diff hist −1,269 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 07:04, 14 November 2017 diff hist −1 Imagination-Augmented Agents for Deep Reinforcement Learning →Motivation
- 07:01, 14 November 2017 diff hist +78 Imagination-Augmented Agents for Deep Reinforcement Learning →Introduction
- 06:49, 14 November 2017 diff hist +4 Imagination-Augmented Agents for Deep Reinforcement Learning →Introduction
- 04:43, 14 November 2017 diff hist +763 Dialog-based Language Learning →Future work
- 04:04, 14 November 2017 diff hist 0 STAT946F17/ Coupled GAN →References and Supplementary Resources
- 04:03, 14 November 2017 diff hist +121 STAT946F17/ Coupled GAN →References and Supplementary Resources
- 04:02, 14 November 2017 diff hist +455 STAT946F17/ Coupled GAN →Discussion and Summary
13 November 2017
- 20:56, 13 November 2017 diff hist −3,563 Dialog-based Language Learning No edit summary
- 16:32, 13 November 2017 diff hist +6 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 16:28, 13 November 2017 diff hist +1 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Should I trust this prediction?
- 16:17, 13 November 2017 diff hist −1,114 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Local Interpretable Model-Agnostic Explanations (LIME)
- 14:51, 13 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
- 14:49, 13 November 2017 diff hist +76 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Are explanations faithful to the model?
- 14:41, 13 November 2017 diff hist −4 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →The Case for Explanations
- 14:39, 13 November 2017 diff hist +61 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Experiment Setup
- 14:23, 13 November 2017 diff hist +204 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Simulated User Experiments
- 14:14, 13 November 2017 diff hist −575 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Submodular Pick for Explaining Models
- 13:47, 13 November 2017 diff hist −434 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Examples
- 07:08, 13 November 2017 diff hist −672 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Local Interpretable Model-Agnostic Explanations (LIME)
- 06:54, 13 November 2017 diff hist −75 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Can I trust this model?
- 06:50, 13 November 2017 diff hist +1 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Conclusion and Future Work
- 06:50, 13 November 2017 diff hist −946 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Conclusion and Future Work
- 06:31, 13 November 2017 diff hist −1,919 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →The Case for Explanations
- 05:44, 13 November 2017 diff hist −1,055 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
- 02:38, 13 November 2017 diff hist −3 f17Stat946PaperSignUp →Paper presentation
- 02:30, 13 November 2017 diff hist +9 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Remarks and Critique
- 02:28, 13 November 2017 diff hist −184 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:26, 13 November 2017 diff hist +1 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Submodular Pick for Explaining Models
- 02:22, 13 November 2017 diff hist +8 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Local Interpretable Model-Agnostic Explanations (LIME)
- 02:08, 13 November 2017 diff hist +4 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:04, 13 November 2017 diff hist +118 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 00:52, 13 November 2017 diff hist +348 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
12 November 2017
- 23:26, 12 November 2017 diff hist +106 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →References
- 22:48, 12 November 2017 diff hist +62 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 22:32, 12 November 2017 diff hist −340 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Desired Characteristics for Explainers
- 22:25, 12 November 2017 diff hist −752 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →The Case for Explanations
- 22:14, 12 November 2017 diff hist −107 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Introduction
- 22:04, 12 November 2017 diff hist +342 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 19:01, 12 November 2017 diff hist +1,579 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:15, 12 November 2017 diff hist +308 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 18:05, 12 November 2017 diff hist 0 N File:choose classifier.png No edit summary current
- 18:05, 12 November 2017 diff hist 0 N File:table1.png No edit summary
- 18:05, 12 November 2017 diff hist +2,426 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:59, 12 November 2017 diff hist 0 N File:recall2.png No edit summary current
- 15:59, 12 November 2017 diff hist 0 N File:recall1.png No edit summary current
- 15:56, 12 November 2017 diff hist +217 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:53, 12 November 2017 diff hist 0 N File:toy example.png No edit summary current
- 15:52, 12 November 2017 diff hist +7,711 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 15:10, 12 November 2017 diff hist +506 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:21, 12 November 2017 diff hist +88 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:20, 12 November 2017 diff hist +96 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:19, 12 November 2017 diff hist +131 f17Stat946PaperSignUp →Paper presentation
- 06:06, 12 November 2017 diff hist +50 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 06:01, 12 November 2017 diff hist +51 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:50, 12 November 2017 diff hist 0 N File:algorithm2.png No edit summary current
- 05:49, 12 November 2017 diff hist 0 N File:algorithm1.png No edit summary
- 05:49, 12 November 2017 diff hist +3,811 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:37, 12 November 2017 diff hist 0 "Why Should I Trust You?": Explaining the Predictions of Any Classifier →Sparse Linear Explanations
- 05:35, 12 November 2017 diff hist +604 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:23, 12 November 2017 diff hist +2,146 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 05:17, 12 November 2017 diff hist 0 N File:LIME.jpg No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:inception example.png No edit summary current
- 05:11, 12 November 2017 diff hist 0 N File:decision boundary.png No edit summary current
- 04:59, 12 November 2017 diff hist +7,256 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 04:22, 12 November 2017 diff hist +2,549 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:24, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png current
- 02:22, 12 November 2017 diff hist 0 File:explanation example.png Sverneka uploaded a new version of File:explanation example.png
- 02:21, 12 November 2017 diff hist +423 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:19, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png current
- 02:17, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:16, 12 November 2017 diff hist 0 File:LIME.png Sverneka uploaded a new version of File:LIME.png
- 02:12, 12 November 2017 diff hist +356 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 02:00, 12 November 2017 diff hist −17 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:59, 12 November 2017 diff hist +44 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:47, 12 November 2017 diff hist +34 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:46, 12 November 2017 diff hist +363 N File:explanation example.png Figure 2: Explaining individual predictions of competing classifiers trying to determine if a document is about “Christianity” or “Atheism”. The bar chart represents the importance given to the most relevant words, also highlighted in the text....
- 01:45, 12 November 2017 diff hist +1,644 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary
- 01:40, 12 November 2017 diff hist +2,274 "Why Should I Trust You?": Explaining the Predictions of Any Classifier No edit summary