Countering Adversarial Images Using Input Transformations

== Motivation ==
As the use of machine intelligence has increased, robustness has become a critical requirement for guaranteeing the reliability of deployed machine-learning systems. However, recent research has shown that existing models are not robust to small, adversarially designed perturbations of the input. Adversarial examples are inputs to machine-learning models that an attacker has intentionally crafted to cause the model to make a mistake. Adversarial examples are not specific to images; they also arise in malware detection, text understanding, and speech recognition. In the example below, applying a small perturbation to the original image changes the model's prediction.
Hence there is an urgent need for approaches that increase the robustness of learning systems to such examples.
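One standard way such perturbations are generated is the fast gradient sign method (FGSM), which steps the input in the direction of the sign of the loss gradient. The sketch below is only an illustration, not the paper's method: it applies FGSM to a toy logistic-regression "model" whose weights and input are hypothetical values chosen so the effect is easy to see.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """Fast gradient sign method for a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x) - y) * w; FGSM adds eps times the sign of that gradient.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Toy model and input (hypothetical values, for illustration only).
w = np.ones(10)        # model weights
x = np.full(10, 0.05)  # "clean" input: score w.x = 0.5, so class 1
y = 1                  # true label

x_adv = fgsm_perturb(x, w, y, eps=0.1)

print(sigmoid(w @ x) > 0.5)       # True: clean input classified as 1
print(sigmoid(w @ x_adv) > 0.5)   # False: perturbed input flips to 0
print(np.max(np.abs(x_adv - x)))  # ~0.1, the L-infinity size of the change
```

Even though every pixel changes by at most 0.1, the classifier's decision flips; on real image models the same gradient-sign step produces perturbations that are nearly invisible to humans.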

Revision as of 18:13, 14 November 2018
