Breaking Certified Defenses: Semantic Adversarial Examples With Spoofed Robustness Certificates


Presented By

Gaurav Sikri

Background

Adversarial examples are inputs to machine learning or deep neural network models that an attacker intentionally designs to deceive the model into making a wrong prediction. They are typically constructed by adding a small, carefully chosen perturbation (often imperceptible noise) to an original image, so that the resulting image looks essentially unchanged to a human but is misclassified by the model.
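
To make the idea of "adding a little noise" concrete, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al., a standard gradient-based attack used here purely as an illustration; it is not the semantic attack studied in this paper, and the PyTorch model, input batch, and epsilon value are hypothetical placeholders.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015) -- a generic gradient-based
# attack shown only to illustrate adversarial perturbations; the model and
# epsilon below are placeholders, not taken from the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial version of image batch x with true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    x_adv = (x_adv + perturbation).clamp(0.0, 1.0)  # keep a valid pixel range
    return x_adv.detach()
```

The perturbed image differs from the original by at most epsilon in each pixel, yet it is often enough to flip the classifier's prediction.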