User contributions for Gsikri
29 November 2020
- 20:17, 29 November 2020 diff hist −2 Functional regularisation for continual learning with gaussian processes →Introduction
- 20:16, 29 November 2020 diff hist +1 Functional regularisation for continual learning with gaussian processes →Detecting Task Boundaries
- 20:12, 29 November 2020 diff hist +836 Extreme Multi-label Text Classification →BOW Approaches
22 November 2020
- 18:57, 22 November 2020 diff hist +1 Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations →Navier-Stokes with Pressure
- 18:56, 22 November 2020 diff hist 0 Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations →Discrete-Time Models
- 18:56, 22 November 2020 diff hist +9 Adversarial Fisher Vectors for Unsupervised Representation Learning →Conclusion
- 18:54, 22 November 2020 diff hist 0 Adversarial Fisher Vectors for Unsupervised Representation Learning →Adversarial Fisher Vectors
- 18:52, 22 November 2020 diff hist +4 Adversarial Fisher Vectors for Unsupervised Representation Learning →Introduction
- 18:31, 22 November 2020 diff hist +44 SuperGLUE →SuperGLUE Tasks
- 18:30, 22 November 2020 diff hist +153 SuperGLUE →SuperGLUE Tasks
- 18:24, 22 November 2020 diff hist −2 SuperGLUE →SuperGLUE Tasks
- 18:23, 22 November 2020 diff hist +270 SuperGLUE →SuperGLUE Tasks
- 18:20, 22 November 2020 diff hist 0 N File:supergluetasks.png No edit summary current
- 18:01, 22 November 2020 diff hist 0 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 17:59, 22 November 2020 diff hist +3 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →ACR-GNNs
- 17:56, 22 November 2020 diff hist 0 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 17:56, 22 November 2020 diff hist +12 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 17:55, 22 November 2020 diff hist 0 N File:a227eq6.png No edit summary current
- 17:53, 22 November 2020 diff hist +340 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 17:50, 22 November 2020 diff hist 0 N File:eq 6.PNG No edit summary current
15 November 2020
- 17:52, 15 November 2020 diff hist +153 orthogonal gradient descent for continual learning →Results
- 17:51, 15 November 2020 diff hist 0 N File:ogd.png No edit summary current
- 17:46, 15 November 2020 diff hist −1 orthogonal gradient descent for continual learning →Results
- 17:34, 15 November 2020 diff hist +61 orthogonal gradient descent for continual learning →Results
- 17:29, 15 November 2020 diff hist +1 orthogonal gradient descent for continual learning →Previous Work
- 17:24, 15 November 2020 diff hist −27 The Curious Case of Degeneration →Conclusion
- 17:22, 15 November 2020 diff hist +128 The Curious Case of Degeneration →Conclusion
9 November 2020
- 00:58, 9 November 2020 diff hist +266 stat940F21 →Paper presentation
8 November 2020
- 21:45, 8 November 2020 diff hist +3,367 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 21:37, 8 November 2020 diff hist 0 N File:crown ibp.png No edit summary current
- 21:31, 8 November 2020 diff hist 0 N File:ran smoothing.png No edit summary current
- 20:23, 8 November 2020 diff hist +2,893 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 20:07, 8 November 2020 diff hist 0 N File:shadow attack.png No edit summary current
- 20:02, 8 November 2020 diff hist 0 N File:pgd attack.png No edit summary current
- 19:48, 8 November 2020 diff hist +880 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:42, 8 November 2020 diff hist +1,510 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:37, 8 November 2020 diff hist 0 N File:certified defense.png No edit summary current
- 19:30, 8 November 2020 diff hist +58 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:28, 8 November 2020 diff hist 0 N File:adversarial example.png No edit summary current
- 19:20, 8 November 2020 diff hist +437 N Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates Created page with " == Presented By == Gaurav Sikri == Background == Adversarial examples are inputs to machine learning or deep neural network models that an attacker intentionally designs to..."
- 17:26, 8 November 2020 diff hist 0 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Data Collection
- 17:24, 8 November 2020 diff hist −14 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Introduction
- 17:20, 8 November 2020 diff hist +11 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Effect of Network Depth and Width
- 17:14, 8 November 2020 diff hist +1 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 17:11, 8 November 2020 diff hist −16 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 17:02, 8 November 2020 diff hist +1 Augmix: New Data Augmentation method to increase the robustness of the algorithm →Introduction
4 November 2020
- 22:40, 4 November 2020 diff hist +18 stat940F21 →Paper presentation
1 November 2020
- 16:25, 1 November 2020 diff hist +534 F21-STAT 940-Proposal No edit summary
9 October 2020
- 11:47, 9 October 2020 diff hist +137 stat940F21 No edit summary
- 09:37, 9 October 2020 diff hist +134 F21-STAT 940-Proposal No edit summary