User contributions for Gsikri
22 November 2020
- 17:53, 22 November 2020 diff hist +340 THE LOGICAL EXPRESSIVENESS OF GRAPH NEURAL NETWORKS →Experiments
- 17:50, 22 November 2020 diff hist 0 N File:eq 6.PNG No edit summary current
15 November 2020
- 17:52, 15 November 2020 diff hist +153 orthogonal gradient descent for continual learning →Results
- 17:51, 15 November 2020 diff hist 0 N File:ogd.png No edit summary current
- 17:46, 15 November 2020 diff hist −1 orthogonal gradient descent for continual learning →Results
- 17:34, 15 November 2020 diff hist +61 orthogonal gradient descent for continual learning →Results
- 17:29, 15 November 2020 diff hist +1 orthogonal gradient descent for continual learning →Previous Work
- 17:24, 15 November 2020 diff hist −27 The Curious Case of Degeneration →Conclusion
- 17:22, 15 November 2020 diff hist +128 The Curious Case of Degeneration →Conclusion
9 November 2020
- 00:58, 9 November 2020 diff hist +266 stat940F21 →Paper presentation
8 November 2020
- 21:45, 8 November 2020 diff hist +3,367 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 21:37, 8 November 2020 diff hist 0 N File:crown ibp.png No edit summary current
- 21:31, 8 November 2020 diff hist 0 N File:ran smoothing.png No edit summary current
- 20:23, 8 November 2020 diff hist +2,893 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 20:07, 8 November 2020 diff hist 0 N File:shadow attack.png No edit summary current
- 20:02, 8 November 2020 diff hist 0 N File:pgd attack.png No edit summary current
- 19:48, 8 November 2020 diff hist +880 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:42, 8 November 2020 diff hist +1,510 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:37, 8 November 2020 diff hist 0 N File:certified defense.png No edit summary current
- 19:30, 8 November 2020 diff hist +58 Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates No edit summary
- 19:28, 8 November 2020 diff hist 0 N File:adversarial example.png No edit summary current
- 19:20, 8 November 2020 diff hist +437 N Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates Created page with " == Presented By == Gaurav Sikri == Background == Adversarial examples are inputs to machine learning or deep neural network models that an attacker intentionally designs to..."
- 17:26, 8 November 2020 diff hist 0 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Data Collection
- 17:24, 8 November 2020 diff hist −14 Learning The Difference That Makes A Difference With Counterfactually-Augmented Data →Introduction
- 17:20, 8 November 2020 diff hist +11 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Effect of Network Depth and Width
- 17:14, 8 November 2020 diff hist +1 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 17:11, 8 November 2020 diff hist −16 ALBERT: A Lite BERT for Self-supervised Learning of Language Representations →Removing dropout
- 17:02, 8 November 2020 diff hist +1 Augmix: New Data Augmentation method to increase the robustness of the algorithm →Introduction
4 November 2020
- 22:40, 4 November 2020 diff hist +18 stat940F21 →Paper presentation
1 November 2020
- 16:25, 1 November 2020 diff hist +534 F21-STAT 940-Proposal No edit summary
9 October 2020
- 11:47, 9 October 2020 diff hist +137 stat940F21 No edit summary
- 09:37, 9 October 2020 diff hist +134 F21-STAT 940-Proposal No edit summary