f17Stat946PaperSignUp
List of Papers
Record your contributions here:
Use the following notations:
P: You have written a summary/critique on the paper.
T: You had a technical contribution on a paper (excluding the paper that you present).
E: You had an editorial contribution on a paper (excluding the paper that you present).
Your feedback on presentations
Paper presentation
Date | Name | Paper number | Title | Link to the paper | Link to the summary |
Oct 12 (example) | Ri Wang | | Sequence to Sequence Learning with Neural Networks | Paper | Summary |
Oct 26 | Sakif Khan | 1 | Improved Variational Inference with Inverse Autoregressive Flow | Paper | Summary |
Oct 26 | Amir-Hossein Karimi | 2 | Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling | Paper | Summary |
Oct 26 | Josh Valchar | 3 | Learning What and Where to Draw | [1] | Summary |
Oct 31 | Jimit Majmudar | 4 | Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition | Paper | Summary |
Oct 31 | | 6 | | |
Nov 2 | Prashanth T.K. | 7 | When can Multi-Site Datasets be Pooled for Regression? Hypothesis Tests, l2-consistency and Neuroscience Applications | Paper | Summary |
Nov 2 | | 8 | | |
Nov 2 | Haotian Lyu | 9 | Learning Important Features Through Propagating Activation Differences | Paper | Summary |
Nov 7 | Dishant Mittal | 10 | meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting | Paper | Summary |
Nov 7 | Omid Rezai | 11 | Understanding the Effective Receptive Field in Deep Convolutional Neural Networks | Paper | Summary |
Nov 7 | Rahul Iyer | 12 | Convolutional Sequence to Sequence Learning | Paper | Summary |
Nov 9 | ShuoShuo Liu | 13 | Learning the Number of Neurons in Deep Networks | Paper | Summary |
Nov 9 | Aravind Balakrishnan | 14 | FeUdal Networks for Hierarchical Reinforcement Learning | [2] | |
Nov 9 | Varshanth R Rao | 15 | Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study | Paper | Summary |
Nov 14 | Avinash Prasad | 16 | Coupled GAN | [3] | |
Nov 14 | Nafseer Kadiyaravida | 17 | Dialog-based Language Learning | Paper | Summary |
Nov 14 | Ruifan Yu | 18 | Imagination-Augmented Agents for Deep Reinforcement Learning | Paper | |
Nov 16 | Hamidreza Shahidi | 19 | Teaching Machines to Describe Images via Natural Language Feedback | ||
Nov 16 | Sachin Vernekar | 20 | Natural-Parameter Networks: A Class of Probabilistic Neural Networks | Paper | Summary |
Nov 16 | Yunqing He | 21 | LightRNN: Memory and Computation-Efficient Recurrent Neural Networks | [4] | Summary |
Nov 21 | Aman Jhunjhunwala | 22 | Curiosity-driven Exploration by Self-supervised Prediction | Paper | Summary |
Nov 21 | Michael Honke | 23 | Universal Style Transfer via Feature Transforms | Paper | Summary |
Nov 21 | Ashish Gaurav | 24 | Deep Exploration via Bootstrapped DQN | Paper | Summary |
Nov 23 | Venkateshwaran Balasubramanian | 25 | Large-Scale Evolution of Image Classifiers | Paper | |
Nov 23 | Ershad Banijamali | 26 | Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | Paper | |
Nov 23 | Dylan Spicker | 27 | Unsupervised Domain Adaptation with Residual Transfer Networks | Paper | |
Nov 28 | Mike Rudd | 28 | Deep Transfer Learning with Joint Adaptation Networks | Paper | Summary |
Nov 28 | Shivam Kalra | 29 | Still deciding (reserving this slot) | |
Nov 28 | Aditya Sriram | 30 | Conditional Image Generation with PixelCNN Decoders | Paper | |
Nov 30 | Congcong Zhi | 31 | Dance Dance Convolution | Paper | |
Nov 30 | Jian Deng | 32 | Automated Curriculum Learning for Neural Networks | Paper | |
Nov 30 | Elaheh Jalalpour | 33 | | |