User contributions for Gsikri
29 November 2020
- 21:17, 29 November 2020 diff hist −2 Functional regularisation for continual learning with gaussian processes →Introduction
- 21:16, 29 November 2020 diff hist +1 Functional regularisation for continual learning with gaussian processes →Detecting Task Boundaries
- 21:12, 29 November 2020 diff hist +836 Extreme Multi-label Text Classification →BOW Approaches
22 November 2020
- 19:57, 22 November 2020 diff hist +1 Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations →Navier-Stokes with Pressure
- 19:56, 22 November 2020 diff hist 0 Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations →Discrete-Time Models
- 19:56, 22 November 2020 diff hist +9 Adversarial Fisher Vectors for Unsupervised Representation Learning →Conclusion
- 19:54, 22 November 2020 diff hist 0 Adversarial Fisher Vectors for Unsupervised Representation Learning →Adversarial Fisher Vectors
- 19:52, 22 November 2020 diff hist +4 Adversarial Fisher Vectors for Unsupervised Representation Learning →Introduction
- 19:31, 22 November 2020 diff hist +44 SuperGLUE →SuperGLUE Tasks
- 19:30, 22 November 2020 diff hist +153 SuperGLUE →SuperGLUE Tasks
- 19:24, 22 November 2020 diff hist −2 SuperGLUE →SuperGLUE Tasks
- 19:23, 22 November 2020 diff hist +270 SuperGLUE →SuperGLUE Tasks
- 19:20, 22 November 2020 diff hist 0 N File:supergluetasks.png No edit summary current
- 19:01, 22 November 2020 diff hist 0 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 18:59, 22 November 2020 diff hist +3 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →ACR-GNNs
- 18:56, 22 November 2020 diff hist 0 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 18:56, 22 November 2020 diff hist +12 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 18:55, 22 November 2020 diff hist 0 N File:a227eq6.png No edit summary current
- 18:53, 22 November 2020 diff hist +340 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 18:50, 22 November 2020 diff hist 0 N File:eq 6.PNG No edit summary current
15 November 2020
- 18:52, 15 November 2020 diff hist +153 orthogonal gradient descent for continual learning →Results
- 18:51, 15 November 2020 diff hist 0 N File:ogd.png No edit summary current
- 18:46, 15 November 2020 diff hist −1 orthogonal gradient descent for continual learning →Results
- 18:34, 15 November 2020 diff hist +61 orthogonal gradient descent for continual learning →Results
- 18:29, 15 November 2020 diff hist +1 orthogonal gradient descent for continual learning →Previous Work
- 18:24, 15 November 2020 diff hist −27 The Curious Case of Degeneration →Conclusion
- 18:22, 15 November 2020 diff hist +128 The Curious Case of Degeneration →Conclusion
9 November 2020
- 01:58, 9 November 2020 diff hist +266 stat940F21 →Paper presentation
8 November 2020
- 22:45, 8 November 2020 diff hist +3,367 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 22:37, 8 November 2020 diff hist 0 N File:crown ibp.png No edit summary current
- 22:31, 8 November 2020 diff hist 0 N File:ran smoothing.png No edit summary current
- 21:23, 8 November 2020 diff hist +2,893 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 21:07, 8 November 2020 diff hist 0 N File:shadow attack.png No edit summary current
- 21:02, 8 November 2020 diff hist 0 N File:pgd attack.png No edit summary current
- 20:48, 8 November 2020 diff hist +880 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 20:42, 8 November 2020 diff hist +1,510 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 20:37, 8 November 2020 diff hist 0 N File:certified defense.png No edit summary current
- 20:30, 8 November 2020 diff hist +58 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 20:28, 8 November 2020 diff hist 0 N File:adversarial example.png No edit summary current
- 20:20, 8 November 2020 diff hist +437 N Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates Created page with " == Presented By == Gaurav Sikri == Background == Adversarial examples are inputs to machine learning or deep neural network models that an attacker intentionally designs to..."
- 18:26, 8 November 2020 diff hist 0 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Data Collection
- 18:24, 8 November 2020 diff hist −14 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Introduction
- 18:20, 8 November 2020 diff hist +11 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Effect of Network Depth and Width
- 18:14, 8 November 2020 diff hist +1 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 18:11, 8 November 2020 diff hist −16 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 18:02, 8 November 2020 diff hist +1 Augmix: New Data Augmentation method to increase the robustness of the algorithm →Introduction
4 November 2020
- 23:40, 4 November 2020 diff hist +18 stat940F21 →Paper presentation
1 November 2020
- 17:25, 1 November 2020 diff hist +534 F21-STAT 940-Proposal No edit summary
9 October 2020
- 12:47, 9 October 2020 diff hist +137 stat940F21 No edit summary
- 10:37, 9 October 2020 diff hist +134 F21-STAT 940-Proposal No edit summary