statwiki: One pixel attack for fooling deep neural networks (last edited 2018-03-27 by Z3chu)
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural networks first caught wide attention in the ImageNet contest of 2012, when a deep network raised classification accuracy from roughly 75% to 85%. The following year accuracy rose to 89%, and today deep neural networks (DNNs) reach about 97%. In a few years the field went from almost no one using neural networks to nearly everyone using them, so the problem of image recognition by artificial intelligence might appear solved. However, there is one catch (Carlini, N., 2017).<br />
<br />
The catch is that DNNs are surprisingly easy to fool. Here is an example: an image of a dog is classified as a hummingbird. Research by Google Brain, the deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked into giving wrong predictions. Designing an input in a specific way to get a wrong result from the model is called an adversarial attack (Roman Trusov, 2017), and the crafted input is an adversarial image. Such an image is created by adding a tiny amount of perturbation that is nearly imperceptible to human eyes, as shown in Figure 1. Zooming into Figure 1, as shown in Figure 2, reveals the small perturbation that led the model to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is this possible? A DNN is a composition of transformations, and many of those transformations are very sensitive to small changes in their input. Think of the DNN as a set of high-dimensional decision boundaries: when an input lies near a boundary, and the boundary is too simple and close to linear, a small perturbation can push the input across it and cause a misclassification. <br />
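As a toy illustration of this sensitivity (our own example, not from the paper): for a linear decision boundary in high dimension, a large enough change to a single input coordinate, i.e. one channel of one pixel, is enough to flip the predicted class.<br />

```python
import numpy as np

# Toy linear "classifier": class = sign(w . x).
rng = np.random.default_rng(0)
w = rng.normal(size=1024)
x = np.abs(rng.normal(size=1024)) * np.sign(w)   # firmly on the positive side

x_adv = x.copy()
i = int(np.argmax(np.abs(w)))                    # most sensitive coordinate
x_adv[i] += -(w @ x + 1.0) / w[i]                # drive the score down to -1

print(np.sign(w @ x), np.sign(w @ x_adv))        # 1.0 -1.0
```

Only one of the 1024 coordinates differs between x and x_adv, yet the class flips; the one-pixel attack exploits the same kind of geometry in a nonlinear DNN.<br />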
<br />
Harnessing this sensitivity is a way to better understand, and to build more robust, algorithms in AI security. This paper demonstrates the vulnerability of DNNs with an extreme scenario: the one-pixel attack. As shown in Figure 3, with only one pixel perturbed, the classification was wrong in each image. Although there is currently no reliable defense against the attack, investigating the one-pixel attack may shed light on the behavior of DNNs and ultimately inform the discussion of security implications and future solutions.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposes the one-pixel attack in a scenario where the only information available is the probability labels. Compared to previous work, the proposal is effective (a successful attack rate of up to 73%), simple (a semi-black-box setting that requires only the probability labels and no internal information about the network), and flexible (it can attack a wider range of models, especially networks that are not differentiable or for which gradient calculation is difficult).<br />
<br />
With the intention of creating adversarial attacks that improve our understanding of DNN security, the one-pixel attack is worth considering for two main reasons: 1) it offers a new way of exploring the high-dimensional input space of a DNN using fewer, lower-dimensional slices, in contrast to previous work where perturbation was done by adding a small value to every pixel; 2) it provides a measure of perceptibility, demonstrating the severity of an attack restricted to a single pixel as compared to few-pixel examples.<br />
<br />
= Related works =<br />
The sensitivity of DNNs to well-tuned artificial perturbation was investigated in various related works.<br />
<br />
1. The first perturbations were crafted by several gradient-based algorithms using back-propagation to obtain gradient information. (C. Szegedy et al.)<br />
<br />
2. The fast gradient sign method calculates an effective perturbation in a single step, under the hypothesis that the linearity and high dimensionality of the inputs are the main reasons for the vulnerability. (I. J. Goodfellow et al.) <br />
<br />
3. A greedy perturbation-searching method assumes the linearity of the DNN decision boundary (S. M. Moosavi-Dezfooli et al.). Another approach uses the Jacobian matrix to build an "Adversarial Saliency Map", which indicates the effectiveness of a fixed-length perturbation along the direction of each axis. (N. Papernot et al.)<br />
<br />
4. Some images can hardly be recognized by human eyes but are nevertheless classified by the network with high confidence. (A. Nguyen et al.)<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target system, such as gradients, have also been proposed. One of these used a single-pixel perturbation only as a starting point from which to derive a further semi-black-box attack that needs to modify more pixels. (N. Narodytska et al.)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, this suggests that most data points in the input space gather near the decision boundaries. (A. Fawzi, S. M. Moosavi-Dezfooli, and P. Frossard)<br />
<br />
7. A curvature analysis shows that the region around natural images is flat along most directions, with only a few directions in which the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a single perturbation that, when added to any natural image, generates adversarial samples with high effectiveness) were shown to be possible and to achieve high effectiveness compared to random perturbation. This indicates that the diversity of boundaries might be low, while the boundary shapes near different data points are similar. (S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as a constrained optimization problem. We are given a classifier '''F''' and a targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image, let F<sub>adv</sub>('''x''') be the probability assigned by the classifier to the targeted class '''adv''' for input '''x''', and let e('''x''') be an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack is to find the perturbation vector of constrained size (norm) that maximizes the probability the classifier assigns to the targeted class. Formally, <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For the few-pixel attack, the problem statement changes slightly: we constrain the number of non-zero elements of the perturbation vector (its L<sub>0</sub> norm) rather than its length.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength (norm) of the shift is bounded by L. For the one-pixel attack, we use the second formulation with d = 1: '''x''' may only be perturbed along a single axis, so the degrees of freedom of the shift are greatly reduced. However, the shift along that one axis can be of arbitrary strength.<br />
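Written out with the definitions above (our LaTeX reconstruction; the two images render the same statements):<br />

:<math> \underset{e(\mathbf{x})}{\text{maximize}} \;\; F_{adv}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \lVert e(\mathbf{x}) \rVert \leq L </math>

:<math> \underset{e(\mathbf{x})}{\text{maximize}} \;\; F_{adv}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \lVert e(\mathbf{x}) \rVert_0 \leq d </math>

The one-pixel attack is the second problem with d = 1.<br />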
<br />
<br />
== Differential Evolution ==<br />
Differential evolution (DE) is a population-based optimization algorithm belonging to the class of evolutionary algorithms (EAs). In DE's selection step the population is, in a sense, segmented into families (a parent and its offspring), and the better member of each family survives. This segmentation lets DE keep more diversity per iteration than many other EAs and makes it well suited to complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
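A minimal sketch of how DE can drive the one-pixel search (our own illustration, not the authors' implementation; the `predict` interface, the candidate encoding, and all parameter values are assumptions):<br />

```python
import numpy as np

def one_pixel_attack(predict, image, target_class, pop_size=20, iters=10, seed=0):
    """DE search for a one-pixel perturbation that raises the target-class
    probability. predict(img) returns a probability vector; image is an
    HxWx3 float array in [0, 1]. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    # Each candidate encodes (row, col, r, g, b).
    pop = rng.uniform(0, 1, (pop_size, 5))
    pop[:, 0] *= h
    pop[:, 1] *= w

    def apply(cand):
        img = image.copy()
        img[int(cand[0]) % h, int(cand[1]) % w] = np.clip(cand[2:], 0, 1)
        return img

    def fitness(cand):  # probability of the target class (to maximize)
        return predict(apply(cand))[target_class]

    scores = np.array([fitness(c) for c in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = a + 0.5 * (b - c)        # DE/rand/1 mutation, F = 0.5
            trial[0] %= h
            trial[1] %= w
            s = fitness(trial)
            if s > scores[i]:                # parent-vs-offspring selection
                pop[i], scores[i] = trial, s
    best = pop[np.argmax(scores)]
    return apply(best), float(scores.max())
```

Here the fitness is simply the target-class probability, matching the targeted-attack setting described later; for a non-targeted attack one would instead minimize the true-class probability.<br />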
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the fraction of adversarial images that are classified into any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the fraction of perturbed images that are successfully classified into the targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability assigned to the target class over all successful perturbations, divided by the total number of successful attacks; that is, the mean confidence the target classification system assigns to the successful adversarial images.<br />
* '''Number of Target Classes'''<br />Counts how many images can be perturbed into a given number of target classes; in particular, it records the images that cannot be perturbed into any other class.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
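A sketch of how the first two metrics might be computed from attack results (our own illustration; the array names are assumptions):<br />

```python
import numpy as np

def attack_metrics(true_cls, pred_cls, adv_probs):
    """Non-targeted success rate and mean adversarial confidence.
    true_cls / pred_cls: integer labels before / after the attack;
    adv_probs: probability the network assigned to its post-attack prediction."""
    success = pred_cls != true_cls                 # attack succeeded on these images
    rate = float(success.mean())
    conf = float(adv_probs[success].mean()) if success.any() else 0.0
    return rate, conf
```

For example, if two of four attacked images change label, the success rate is 0.5 and the confidence averages only over those two.<br />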
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN) and the VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks, and generated 500 samples each with 3- and 5-pixel modifications to conduct three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three networks; the success-rate comparison across attack sizes was performed as follows: the 1-pixel and 3-pixel attacks on AllConv, and the 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were actually conducted, since the non-targeted results can be obtained from the targeted ones: the fitness function used simply increases the probability of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
The BVLC AlexNet served as the defense system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is far larger than in CIFAR-10, the authors launched only non-targeted attacks and applied a fitness function that decreases the probability of the true class. <br />
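The two fitness choices described above can be sketched as follows (hypothetical helper names, not the authors' code):<br />

```python
def targeted_fitness(probs, target_class):
    """CIFAR-10 setting: drive the probability of the target class up."""
    return probs[target_class]

def nontargeted_fitness(probs, true_class):
    """ImageNet setting: drive the probability of the true class down."""
    return -probs[true_class]
```

The DE search maximizes whichever fitness applies, so the same optimizer covers both evaluation setups.<br />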
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Network Architectures]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both the success rate and the confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image can be perturbed to 2-4 classes on all three defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); VGG16 is slightly more robust than the other two, but generally all three networks are vulnerable to the one-pixel attack. When more pixels were allowed to be modified, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack performed well even on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function's focus on decreasing the probability of the true class rather than raising any particular wrong class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Comparing the non-targeted attack effectiveness of the proposed method against two previous works, a perturbation vector along a single dimension is enough to deliver the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of the one-pixel attack, as the data points of these classes share vulnerable target directions. However, as the dimension of the perturbation increases (e.g. from 1 to 3 and 5), most original classes become roughly equally robust, with similar numbers of adversarial images (Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For a special case (e.g. the ship class, class 8, when attacking the NiN network), it is relatively simple to craft adversarial samples ''from'' it but difficult to craft adversarial samples ''to'' it. The reason is that the boundary of this class is long and thin, with natural images lying close to the border.<br />
<br />
In order to evaluate the time cost of conducting the one-pixel attack on the four types of networks, the authors use two metrics: <br />
<br /> ''1. AvgEvaluation'': the average number of network evaluations needed to produce an adversarial image<br />
<br /> ''2. AvgDistortion'': the average distortion required in one colour channel of a single pixel to produce an adversarial image<br />
<br />
The result is summarized in Figure 6:<br />
<br />
[[File:Capture3.PNG|thumb|center|400px|Figure 6. Evaluation of Time Complexity]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of locating points on the decision boundary where class labels change by moving data points along only a few dimensions, and of quantitatively analyzing how often the labels change; it also shows that a single-pixel change is powerful enough to perturb a considerable portion of images, and that the one-pixel attack carries over to different network structures and image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the size of the initial candidate set, the perturbation success rate could be further improved. <br />
<br />
For future work, it is suggested to use the proposed algorithm and natural image samples for the further development of advanced models and of better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched the 1, 3 and 5-pixel attacks, they did so on different target networks (the 1- and 3-pixel attacks on AllConv, the 5-pixel attack on NiN) and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks can yield very different classification behaviour. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on each defense system, so that the performance of every attack on every target network can be seen. The conclusions drawn from such results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, but not AllConv, NiN or VGG16. The papers in which AllConv and VGG16 were introduced report good experimental results on ImageNet, so attack experiments against these networks should also be considered to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, Jiawei, Danilo Vasconcellos Vargas and Kouichi Sakurai, 2017. "One Pixel Attack for Fooling Deep Neural Networks." https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, Roman, 2017. "How Adversarial Attacks Work." https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in Figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to delivery the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of one-pixel attack, as the data points of these classes share the vulnerable target directions. However, by increasing the dimensions of perturbations (e.g. from 1 to 3 and 5), most original classes are equally robust and have similar number of adversarial images(Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For special case (e.g. the ship when attacking NiN networks, class 8), it is relatively simple to craft adversarial samples from it but difficult to craft adversarial samples to it. Reason behind it is that, the boundary shape of the class is long and thin with natural images close to the border.<br />
<br />
In order to evaluate the time cost of conducting one-pixel attack for the four types of networks, the writer uses 2 metrics: <br />
<br /> ''1. AvgEvaluation'': the average number of evaluations to produce adversarial images<br />
<br /> ''2. AvgDistortion'': the required average distortion in one-colour channel of a single pixel to produce adversarial images<br />
<br />
The result is summarized in Figure 6:<br />
<br />
[[File:Capture3.PNG|thumb|center|400px|Figure 6. Evaluation of Time Complexity]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that a single pixel change is powerful enough to perturb a considerable portion of images and the one-pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 report good experimental results on ImageNet, so attack experiments against these networks should also be conducted to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., and Sakurai, K., 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R., 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in Figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to delivery the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of one-pixel attack, as the data points of these classes share the vulnerable target directions. However, by increasing the dimensions of perturbations (e.g. from 1 to 3 and 5), most original classes are equally robust and have similar number of adversarial images(Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For special case (e.g. the ship (class 8) when attacking NiN networks), it is relatively simple to craft adversarial samples from it but difficult to craft adversarial samples to it. Reason behind it is that, the boundary shape of the class is long and thin with natural images close to the border.<br />
<br />
In order to evaluate the time cost of conducting one-pixel attack for the four types of networks, the writer uses 2 metrics: <br />
<br /> ''1. AvgEvaluation'': the average number of evaluations to produce adversarial images<br />
<br /> ''2. AvgDistortion'': the required average distortion in one-colour channel of a single pixel to produce adversarial images<br />
<br />
The result is summarized in Figure 6:<br />
<br />
[[File:Capture3.PNG|thumb|center|400px|Figure 6. Evaluation of Time Complexity]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that a single pixel change is powerful enough to perturb a considerable portion of images and the one-pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35737One pixel attack for fooling deep neural networks2018-03-27T18:27:52Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in Figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to delivery the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of one-pixel attack, as the data points of these classes share the vulnerable target directions. However, by increasing the dimensions of perturbations (e.g. from 1 to 3 and 5), most original classes are equally robust and have similar number of adversarial images(Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For special case (e.g. the ship (class 8) when attacking NiN networks), it is relatively simple to craft adversarial samples from it but difficult to craft adversarial samples to it. Reason behind it is that, the boundary shape of the class is long and thin with natural images close to the border.<br />
<br />
In order to evaluate the time cost of conducting one-pixel attack for the four types of networks, the writer uses 2 metrics: <br />
1. the average number of evaluations to produce adversarial images - AvgEvaluation<br />
2. the required average distortion in one-colour channel of a single pixel to produce adversarial images - AvgDistortion<br />
The result is summarized in Figure 6:<br />
<br />
[[File:Capture3.PNG|thumb|center|400px|Figure 6. Evaluation of Time Complexity]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that a single pixel change is powerful enough to perturb a considerable portion of images and the one-pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 report good experimental results on ImageNet, so attack experiments against these networks should also be conducted to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov, 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div></div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35735One pixel attack for fooling deep neural networks2018-03-27T18:26:38Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in Figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to delivery the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of one-pixel attack, as the data points of these classes share the vulnerable target directions. However, by increasing the dimensions of perturbations (e.g. from 1 to 3 and 5), most original classes are equally robust and have similar number of adversarial images(Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For special case (e.g. the ship (class 8) when attacking NiN networks), it is relatively simple to craft adversarial samples from it but difficult to craft adversarial samples to it. Reason behind it is that, the boundary shape of the class is long and thin with natural images close to the border.<br />
<br />
In order to evaluate the time cost of conducting one-pixel attack for the four types of networks, the writer uses 2 metrics: <br />
1. the average number of evaluations to produce adversarial images<br />
2. the required average distortion in one-colour channel of a single pixel to produce adversarial images<br />
The result is summarized in Figure 6:<br />
<br />
[[File:Capture3.PNG|thumb|center|400px|Figure 6. Evaluation of Time Complexity]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that a single pixel change is powerful enough to perturb a considerable portion of images and the one-pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35734One pixel attack for fooling deep neural networks2018-03-27T18:18:28Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in Figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to delivery the corresponding adversarial images for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
<br />
Also, it is easy to craft adversarial images for certain original-target class pairs, especially in the case of one-pixel attack, as the data points of these classes share the vulnerable target directions. However, by increasing the dimensions of perturbations (e.g. from 1 to 3 and 5), most original classes are equally robust and have similar number of adversarial images(Figure 5).<br />
<br />
[[File:Capture2.PNG|thumb|center|400px|Figure 5. Number of successful attacks for a specific class; original in blue, target in red]]<br />
<br />
For special case (e.g. the ship (class 8) when attacking NiN networks), it is relatively simple to craft adversarial samples from it but difficult to craft adversarial samples to it. Reason behind it is that, the boundary shape of the class is long and thin with natural images close to the border.<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that a single pixel change is powerful enough to perturb a considerable portion of images and the one-pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* For ImageNet, only AlexNet was attacked, not AllConv, NiN, or VGG16. The papers that introduced AllConv and VGG16 report strong experimental results on ImageNet, so attack experiments against those networks should also be conducted to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., and Sakurai, K., 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov, 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35731One pixel attack for fooling deep neural networks2018-03-27T18:07:18Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural networks first caught broad attention in the 2012 ImageNet contest, where they raised classification accuracy from 75% to 85%; the following year it increased to 89%. Neural networks went from rarely used to used by everyone, and today deep neural networks (DNNs) achieve about 97% accuracy. Image recognition by artificial intelligence might therefore appear to be a solved problem. However, there is one catch (Carlini, 2017).<br />
<br />
The catch is that DNNs are remarkably easy to fool. Here is an example: an image of a dog is classified as a hummingbird. Research by Google Brain, a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked into giving wrong predictions. The act of designing an input in a specific way to get a wrong result from the model is called an adversarial attack (Trusov, 2017), and the crafted input is an adversarial image. The image is created by adding a tiny amount of perturbation that is nearly imperceptible to human eyes, as shown in Figure 1. Zooming into Figure 1, as shown in Figure 2, reveals the small perturbation that led a dog to be misclassified as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.jpg|thumb|center|400px|Figure 2 Adversarial Example (Zoomed)]]<br />
<br />
How is this possible? DNN models are compositions of transformations, and many of those transformations are very sensitive to small changes in their input. Think of a DNN as defining a set of high-dimensional decision boundaries: if a decision boundary is too simple and close to linear, an imperfect input can easily land on the wrong side of it and be misclassified. <br />
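This sensitivity can be illustrated with a toy sketch (not from the paper, and all values below are hypothetical): for a simple linear decision boundary in a high-dimensional space, changing a single input coordinate can be enough to push a point across the boundary and flip its label.

```python
import numpy as np

# Toy sketch (not from the paper): a linear decision boundary w.x = 0 in a
# high-dimensional input space. All values here are hypothetical.
rng = np.random.default_rng(0)
d = 3072                        # e.g. a flattened 32x32x3 image
w = rng.normal(size=d)          # normal vector of the decision boundary
x = rng.normal(size=d)          # an input point

def predict(v):
    # class 1 if v lies on the positive side of the hyperplane
    return int(w @ v > 0)

before = predict(x)
# Change a single coordinate just enough to step across the boundary,
# mimicking a one-pixel perturbation of arbitrary strength.
i = int(np.argmax(np.abs(w)))   # the most sensitive input dimension
x_adv = x.copy()
x_adv[i] -= 1.01 * (w @ x) / w[i]
after = predict(x_adv)
assert before != after          # a one-coordinate change flips the label
```

Real images constrain each pixel to a valid intensity range and real networks are nonlinear, but the same geometric intuition motivates the one-pixel attack.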
<br />
Harnessing this sensitivity is a way to better understand, and ultimately produce, robust algorithms in AI security. This paper demonstrates the vulnerability of DNNs by presenting an extreme scenario: the one-pixel attack. As shown in Figure 3, only one pixel was perturbed, yet the classification was wrong in each image. Although there is currently no effective defense against the attack, investigating the one-pixel attack may shed light on the behavior of DNNs and ultimately lead to a discussion of the security implications and future solutions.<br />
<br />
[[File:one.jpg|thumb|center|400px|Figure 3 One Pixel Attack Examples]]<br />
<br />
This paper proposes the one-pixel attack in a scenario where the only information available is the probability labels. Compared with previous work, the proposal demonstrates effectiveness (a successful attack rate of up to 73%), simplicity (a semi-black-box setting that requires only probability labels and no internal information about the network), and flexibility (the ability to attack more models, especially networks that are not differentiable or for which gradient calculation is difficult).<br />
<br />
With the intention of creating an adversarial attack to better understand the security of DNNs, the one-pixel attack should be considered for two main reasons: 1) it is a new way of exploring the high-dimensional input space of DNNs using fewer, lower-dimensional slices, unlike previous work, where perturbation was done by adding a small value to every pixel; 2) it provides a measure of perceptibility, demonstrating the severity of a one-pixel attack compared to few-pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-tuned artificial perturbation was investigated in various related works.<br />
<br />
1. The first adversarial perturbations were crafted by several gradient-based algorithms using backpropagation to obtain gradient information (C. Szegedy et al.).<br />
<br />
2. The fast gradient sign algorithm calculates effective perturbations based on the hypothesis that the linearity and high dimensionality of inputs are the main reasons for vulnerability (I. J. Goodfellow et al.).<br />
<br />
3. A greedy perturbation-searching method assumes the linearity of the DNN decision boundary (S. M. Moosavi-Dezfooli et al.); another approach uses the Jacobian matrix to build an “Adversarial Saliency Map”, which indicates the effectiveness of conducting a fixed-length perturbation along the direction of each axis (N. Papernot et al.).<br />
<br />
4. Some images can hardly be recognized by human eyes but are nevertheless classified by the network with high confidence (A. Nguyen et al.).<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems, such as gradients, have also been proposed; one used single-pixel perturbation only as a starting point to derive a further semi-black-box attack that needs to modify more pixels (N. Narodytska et al.).<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, this suggests that most data points in the input space lie close to the decision boundaries (A. Fawzi, S. M. Moosavi-Dezfooli, and P. Frossard).<br />
<br />
7. A curvature analysis showed that the regions along most directions around natural images are flat, with only a few directions in which the space is curved and the images are sensitive to perturbation (A. Fawzi et al.).<br />
<br />
8. Universal perturbations (i.e., a single perturbation that, when added to any natural image, can generate adversarial samples) were shown to be possible and to achieve high effectiveness compared to random perturbation. This indicates that the diversity of boundaries might be low, while the boundaries’ shapes near different data points are similar (S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard).<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as a constrained optimization problem. We are given a classifier '''F''' and a targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image, let F_adv('''x''') be the probability assigned to the targeted class '''adv''' for vector '''x''' by the classifier, and let e('''x''') be an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attacker is to find the perturbation vector of constrained size (norm) that maximizes the probability the classifier assigns to the targeted class. Formally:<br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For the few-pixel attack, the problem statement changes slightly: we constrain the number of non-zero elements of the perturbation vector instead of its norm.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
In the usual adversarial setting, we can shift '''x''' in all dimensions, but the strength (norm) of the shift is bounded by L. For the one-pixel attack, we use the second formulation with d = 1: '''x''' may only be perturbed at a single pixel, so the degrees of freedom of the shift are greatly reduced; however, the shift at that pixel can be of arbitrary strength.<br />
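As a concrete illustration of the formulation above, a candidate solution for the one-pixel case can encode a pixel's coordinates and RGB values, with the targeted-attack fitness being the classifier's probability for the target class. This is only a sketch: the `classifier` callable, the (x, y, r, g, b) encoding, and the image shape are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_one_pixel(image, candidate):
    """Return a copy of `image` with exactly one pixel overwritten.

    `candidate` encodes the perturbation as (x, y, r, g, b)
    (an assumed encoding for illustration).
    """
    x, y, r, g, b = candidate
    perturbed = image.copy()
    perturbed[int(x), int(y)] = (r, g, b)
    return perturbed

def target_class_probability(image, candidate, classifier, target_class):
    """Fitness of a candidate for the targeted attack: the probability
    the classifier assigns to the target class after perturbation."""
    probs = classifier(apply_one_pixel(image, candidate))
    return probs[target_class]
```

An optimizer would then search over candidates to maximize this fitness, without ever needing gradients of the classifier.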
<br />
<br />
== Differential Evolution ==<br />
Differential evolution (DE) is a population-based optimization algorithm belonging to the class of evolutionary algorithms (EAs). In DE's selection process, the population is in effect segmented into families (parent and offspring), and the better member of each family is kept. This segmentation allows DE to keep more diversity in each iteration than other EAs, making it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
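A minimal DE loop makes the family-wise selection concrete: each child competes only against its own parent. This sketch uses the standard DE/rand/1/bin scheme with assumed hyperparameters (F, CR, population size), not the authors' exact settings.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, CR=0.7,
                           iterations=50, rng=None):
    """Minimize `fitness` over box `bounds` with the DE/rand/1/bin scheme."""
    rng = np.random.default_rng(rng)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([fitness(p) for p in pop])
    for _ in range(iterations):
        for i in range(pop_size):
            # Mutation: combine three distinct random members (DE/rand/1).
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover with the current parent.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True  # ensure at least one gene mutates
            child = np.where(mask, mutant, pop[i])
            # Selection: the child replaces only its own parent, and only
            # if it is better -- this is the family-wise selection above.
            child_score = fitness(child)
            if child_score < scores[i]:
                pop[i], scores[i] = child, child_score
    best = np.argmin(scores)
    return pop[best], scores[best]
```

For the one-pixel attack, `fitness` would be the negated target-class probability and `bounds` would cover pixel coordinates and RGB values; here any objective works, e.g. a simple quadratic.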
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used four measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images that were successfully classified as any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the percentage of perturbed images successfully classified as the targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability of the target class over all successful perturbations, divided by the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of target classes to which each image can be perturbed; in particular, this counts the images that cannot be perturbed to any other class.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
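The first two metrics can be computed from raw attack outcomes roughly as follows; the variable names are illustrative, not from the paper.

```python
def success_rate(predictions, true_labels):
    """Non-targeted success rate: fraction of perturbed images whose
    predicted class differs from the true class."""
    hits = sum(p != t for p, t in zip(predictions, true_labels))
    return hits / len(true_labels)

def mean_confidence(target_probs, successes):
    """Adversarial probability label: mean target-class probability,
    averaged over the successful attacks only."""
    succ = [p for p, ok in zip(target_probs, successes) if ok]
    return sum(succ) / len(succ) if succ else 0.0
```

For the targeted success rate, one would instead compare predictions against the intended target labels rather than checking for any misclassification.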
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN), and the VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks, and generated 500 samples with 3-pixel and 5-pixel modifications respectively to conduct three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three networks, and the success-rate comparison was performed as follows: the 1-pixel and 3-pixel attacks on AllConv and the 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were conducted, since non-targeted attack results can be obtained from targeted-attack results by applying a fitness function that increases the probability of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet served as the defense system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is much larger than that of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function that decreases the probability of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and the success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image can be perturbed to 2–4 classes in all three defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Overall, all three networks are vulnerable to the one-pixel attack. When more pixel modifications were allowed, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the attack's focus on decreasing the probability of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Comparing the non-targeted attack effectiveness of the proposed method with two previous works shows that a single-pixel perturbation vector is enough to deliver a corresponding adversarial image for most of the samples (Figure 4).<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
This paper illustrates the possibility of finding points on decision boundaries where class labels change by moving data points along a few dimensions and quantitatively analyzing the frequency of class-label changes; it also shows that a single pixel change is powerful enough to perturb a considerable portion of images and that the one-pixel attack works across different network structures and image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or initial candidate solutions, the perturbation success rate could be improved further. <br />
<br />
For future work, it is suggested to use the proposed algorithm and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched the 1-, 3- and 5-pixel attacks, they did so on different target networks, i.e. the 1- and 3-pixel attacks on AllConv and the 5-pixel attack on NiN, and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks can lead to different classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on each defense system, so that the performance of every attack on every target network can be seen. The conclusions from those results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers in which AllConv and VGG16 were introduced report good experimental results on ImageNet, so attack experiments against these networks should be considered to show the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, Jiawei, Vargas, Danilo Vasconcellos, and Sakurai, Kouichi, 2017. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, Roman, 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes as shown in Figure 1. After zooming into Figure 1, as shown in Figure 2, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1 Adversarial Example]]<br />
<br />
[[File:zoom.png|thumb|center|400px|Image title or explanation]]<br />
<br />
How is it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as a constrained optimization problem. We are given a classifier '''F''' and a targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image, let F_adv('''x''') be the probability the classifier assigns to the targeted class '''adv''' for the vector '''x''', and let e('''x''') be an additive adversarial perturbation vector for the image '''x'''. The goal of the targeted attack is to find the perturbation vector of constrained size (norm) that maximizes the probability the classifier assigns to the targeted class. Formally,<br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
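In symbols (restating the formulation shown in the image above), the targeted attack solves<br />
:<math> \underset{e(\mathbf{x})}{\text{maximize}} \; F_{adv}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \|e(\mathbf{x})\| \leq L </math><br />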
<br />
<br />
For the few-pixel attack, the problem statement changes slightly: we constrain the number of non-zero elements of the perturbation vector instead of its norm.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
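The few-pixel constraint in the image above can be written with the <math>L_0</math> "norm" (the number of non-zero entries):<br />
:<math> \underset{e(\mathbf{x})}{\text{maximize}} \; F_{adv}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \|e(\mathbf{x})\|_0 \leq d </math><br />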
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength (norm) of the shift is bounded by L. For the one-pixel attack, we use the second equation with d=1: '''x''' may only be perturbed along a single axis, so the degrees of freedom of the shift are greatly reduced. However, the shift along that one axis can be of arbitrary strength.<br />
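Concretely, a candidate one-pixel perturbation can be encoded as a 5-tuple of pixel coordinates and RGB values. The sketch below (function and variable names are ours, a minimal illustration rather than the paper's implementation) applies such a candidate to an image:<br />

```python
import numpy as np

# A one-pixel perturbation is encoded as a 5-tuple: (x, y, R, G, B).
# Applying it overwrites a single pixel; the colour change can be of
# arbitrary strength even though only one pixel location is touched.
def apply_one_pixel(image, candidate):
    x, y, r, g, b = candidate
    perturbed = image.copy()
    perturbed[int(y), int(x)] = (r, g, b)
    return perturbed

img = np.zeros((32, 32, 3), dtype=np.uint8)   # toy CIFAR-10-sized image
adv = apply_one_pixel(img, (5, 7, 255, 0, 0))
```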
<br />
<br />
== Differential Evolution ==<br />
Differential evolution (DE) is a population-based optimization algorithm belonging to the class of evolutionary algorithms (EAs). In DE's selection step, the population is in effect "segmented" into families (a parent and its offspring), and the fitter member of each family survives. This keeps more diversity in each iteration than many other EAs and makes DE well suited to complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
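To make the mutation, crossover, and family-wise selection steps concrete, here is a minimal, self-contained DE sketch (parameter values and names are ours, not the paper's), applied to a toy objective standing in for the attack's fitness function:<br />

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=40, iters=100,
                           F=0.5, CR=0.9, seed=0):
    """Minimal DE: mutation, binomial crossover, and one-to-one
    (parent vs. offspring) selection, which preserves diversity."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # mutation: combine three distinct members other than i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # binomial crossover between parent i and the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            child = np.where(mask, mutant, pop[i])
            # family selection: the offspring replaces its parent
            # only if it has better (lower) fitness
            child_fit = fitness(child)
            if child_fit < fit[i]:
                pop[i], fit[i] = child, child_fit
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# Toy usage: minimize a quadratic, standing in for the attack's
# fitness (e.g. the true-class probability under a pixel change).
x, f = differential_evolution(lambda v: float(np.sum(v ** 2)),
                              bounds=[(-5.0, 5.0)] * 3)
```

In the paper's setting, each population member would instead be a 5-tuple (coordinates plus RGB values) and the fitness would be the probability label returned by the target network.<br />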
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images that were successfully classified into any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image as the targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability labels (confidence values) of the target class over all successful perturbations, divided by the number of successful attacks. This gives the mean confidence of the successful attacks reported by the target classification system.<br />
* '''Number of Target Classes'''<br />The number of natural images that can be perturbed to a given number of target classes; in particular, it counts the images that cannot be perturbed to any other class at all.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
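As a small worked example of the first two metrics (the labels and confidence values below are made up purely for illustration):<br />

```python
import numpy as np

# Toy predictions: true labels vs. labels after a (hypothetical) attack,
# plus the probability the network assigns to its predicted class.
true_labels   = np.array([0, 1, 2, 3, 4])
attack_labels = np.array([0, 3, 5, 3, 4])   # 2 of 5 labels flipped
attack_conf   = np.array([0.9, 0.7, 0.6, 0.8, 0.5])

# Non-targeted success rate: fraction of images whose label changed.
success = attack_labels != true_labels
success_rate = success.mean()

# Adversarial probability label: mean confidence over successful attacks.
mean_conf = attack_conf[success].mean()
```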
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN) and the VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks, and generated 500 samples each with 3- and 5-pixel modifications to conduct three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three networks, and the success-rate comparison was performed as follows: 1-pixel and 3-pixel attacks on AllConv, and the 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were conducted, since non-targeted results can be obtained from the targeted results by applying a fitness function that increases the probability label of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet served as the defense system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is much larger than that of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability label of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both the success rate and the confidence increased significantly when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image could be perturbed to 2~4 classes on all three defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Overall, all three networks are vulnerable to the one-pixel attack. When more pixels were allowed to be modified, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function focusing on decreasing the probability label of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Comparing the non-targeted attack effectiveness of the proposed method against two previous works shows that a single-dimensional perturbation vector is enough to find the corresponding adversarial images for most of the samples. (Figure 4)<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates that point boundaries where class labels change can be found by moving data points along only a few dimensions, and it quantitatively analyzes how frequently the class labels change; it also shows that a single-pixel change can perturb a considerable portion of images. According to the experimental results, the one-pixel attack may extend to other network structures or image sizes. Given the time constraints, the experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing either the number of DE iterations or the number of initial candidates, the perturbation success rate could be further improved. <br />
<br />
As future work, the authors suggest using the proposed algorithm together with natural image samples to develop more advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched the 1, 3 and 5-pixel attacks, they did so on different target networks, i.e. the 1- and 3-pixel attacks on AllConv and the 5-pixel attack on NiN, and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks might yield different classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on each of the defense systems, so that the performance of each attack on each target network can be seen. The conclusions drawn from these results would then be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers in which AllConv and VGG16 were introduced report good experimental results on ImageNet, so attack experiments against these networks should also be considered to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V. and Sakurai, K., 2017. One pixel attack for fooling deep neural networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R., 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works, single dimensional perturbation vector is enough to find the corresponding adversarial images for most natural images. (Figure 4)<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that single pixel change could perturb a considerable portion of images. According to the experimental results, one pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35709One pixel attack for fooling deep neural networks2018-03-27T16:54:14Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability assigned to the target class over all successful perturbations, divided by the number of successful attacks; that is, the mean confidence the target classification system gives to the successful adversarial images.<br />
* '''Number of Target Classes'''<br />The number of natural images that can be perturbed to a given number of target classes (including zero, i.e., images that cannot be perturbed to any other class).<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
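A small sketch of how the first two metrics could be computed from attack outcomes (the variable names here are illustrative, not from the paper):<br />

```python
def attack_metrics(pred_classes, true_classes, pred_confidences):
    """Success rate and mean adversarial confidence, non-targeted case.

    pred_classes: class predicted for each perturbed image.
    true_classes: ground-truth class of each original image.
    pred_confidences: probability the network assigns to the predicted
        class for each perturbed image.
    """
    n = len(true_classes)
    # A non-targeted attack succeeds whenever the predicted class differs
    # from the true class.
    successes = [p != t for p, t in zip(pred_classes, true_classes)]
    success_rate = sum(successes) / n
    # Confidence is averaged over *successful* attacks only.
    conf = [c for c, s in zip(pred_confidences, successes) if s]
    mean_confidence = sum(conf) / len(conf) if conf else 0.0
    return success_rate, mean_confidence
```

For example, with predictions `[1, 2, 0, 0]` against true labels `[0, 2, 1, 0]`, two of four images are misclassified, giving a success rate of 0.5, and the confidence is averaged over those two images only.<br />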
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN) and the VGG16 network (Figure 1).<br />
<br />
The authors randomly sampled 500 images (resolution 32x32x3) from the dataset for the one-pixel attack, and generated 500 samples each with 3-pixel and 5-pixel modifications for the three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three introduced networks, and success rates were compared as follows: the 1-pixel and 3-pixel attacks on AllConv, and the 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were conducted, since non-targeted results can be derived from targeted attack results obtained with a fitness function that increases the probability of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet served as the defense system (Figure 1).<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is much larger than that of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function that decreases the probability of the true class. <br />
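The two fitness functions mentioned above can be sketched as follows (the paper gives no code; `model_probs` is a hypothetical stand-in for querying the target network's probability labels, and images are assumed to be HxWx3 arrays):<br />

```python
import numpy as np

def apply_perturbation(image, perturbation):
    """Overwrite one pixel; `perturbation` encodes (x, y, r, g, b)."""
    out = np.array(image, copy=True)
    x, y, r, g, b = perturbation
    out[int(y), int(x)] = (r, g, b)
    return out

def targeted_fitness(model_probs, image, perturbation, target_class):
    # DE here minimizes, so return the negated target-class probability:
    # maximizing that probability == minimizing its negation.
    probs = model_probs(apply_perturbation(image, perturbation))
    return -probs[target_class]

def nontargeted_fitness(model_probs, image, perturbation, true_class):
    # Drive the true class's probability down until the label flips.
    probs = model_probs(apply_perturbation(image, perturbation))
    return probs[true_class]
```

Only the output probabilities are used, which is why the attack qualifies as semi-black-box: no gradients or internal network information are required.<br />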
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both success rate and confidence increased significantly when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image could be perturbed to 2~4 other classes on all three defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); VGG16 is slightly more robust than the other two, but all three networks are vulnerable to the one-pixel attack. When more pixels were allowed to be modified, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function only decreasing the probability of the true class rather than raising that of a specific target class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Comparing the non-targeted attack effectiveness of the proposed method with two previous works (LSA and FGSM) shows that a perturbation along a single dimension is enough to find the corresponding adversarial images for most natural images. (Figure 4)<br />
<br />
[[File:Capture.PNG|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding points on the decision boundary where class labels change by moving data points along a few dimensions and quantitatively analyzing the frequency of label changes; it also shows that a single-pixel change can perturb a considerable portion of images. According to the experimental results, the one-pixel attack can generalize to different network structures and image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing either, the perturbation success rate could be further improved. <br />
<br />
For future work, the authors suggest using the proposed algorithm and natural image samples for further development of more robust models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, but not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 report good experimental results on ImageNet, so attack experiments against these networks should also be conducted to show the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V. and Sakurai, K., 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R., 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How is this possible? DNN models consist of transformations, most of which can be very sensitive to small changes in their input. Think of the DNN as a set of high-dimensional decision boundaries. When an input is not perfect and the decision boundary is too simple and linear, it often leads to misclassification. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images that were successfully classified as any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the percentage of perturbed images that were successfully classified as the targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on these relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function, which concentrates on decreasing the probability of the true class rather than raising any particular target class. <br />
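As a concrete illustration, the two headline numbers (success rate and confidence) can be computed from per-image attack outcomes roughly as follows; the `(predicted, true, prob)` record layout is an assumption for the sketch, not the paper's code:

```python
# Non-targeted success rate and mean confidence over a batch of attacks.
# Each outcome is (predicted_class, true_class, prob_of_predicted_class).

def success_rate(outcomes):
    # An attack succeeds when the predicted class differs from the truth.
    successes = sum(1 for pred, true, _ in outcomes if pred != true)
    return successes / len(outcomes)

def mean_confidence(outcomes):
    # Average probability assigned to the wrong class,
    # taken over successful attacks only.
    probs = [p for pred, true, p in outcomes if pred != true]
    return sum(probs) / len(probs)
```

A 5.53% mean confidence thus means that even when the ImageNet attacks flipped the label, the winning wrong class typically received only a small probability, consistent with the 1000-class output being spread thin.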
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Comparing the non-targeted attack effectiveness of the proposed method against two previous works (LSA and FGSM), the single-dimensional perturbation vector finds corresponding adversarial images for most natural images (Figure 4).<br />
<br />
[[File:Capture.png|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding points on the decision boundary where class labels change by moving data points along a few dimensions, and of quantitatively analyzing the frequency of such label changes; it also shows that a single-pixel change can perturb a considerable portion of images. According to the experimental results, the one-pixel attack can be extended to different network structures and image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing either, the perturbation success rate could be further improved. <br />
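To make the role of DE iterations and candidate populations concrete, here is a minimal, self-contained DE loop attacking a toy classifier. The classifier, parameter values and encoding are illustrative stand-ins under stated assumptions, not the paper's actual setup.

```python
import random

random.seed(0)

def toy_classifier(image):
    # Hypothetical stand-in: the "true" class probability drops as the
    # total red intensity in the image rises.
    red = sum(px[0] for row in image for px in row)
    p_true = max(0.0, 1.0 - red / 255.0)
    return [p_true, 1.0 - p_true]

def perturb(image, cand):
    # Apply a (x, y, r, g, b) one-pixel candidate without mutating `image`.
    x, y, r, g, b = cand
    out = [[px[:] for px in row] for row in image]
    out[y][x] = [r, g, b]
    return out

def de_one_pixel_attack(image, size, true_class=0, iters=20, pop=10, f=0.5):
    def fitness(cand):  # lower is better: probability of the true class
        return toy_classifier(perturb(image, cand))[true_class]

    rand = lambda k: random.randrange(size if k < 2 else 256)
    population = [tuple(rand(k) for k in range(5)) for _ in range(pop)]
    for _ in range(iters):
        for i in range(pop):
            a, b, c = random.sample(population, 3)
            # DE mutation: child = a + f * (b - c), wrapped back into range.
            child = tuple(int(a[k] + f * (b[k] - c[k])) % (size if k < 2 else 256)
                          for k in range(5))
            # DE selection: the child competes only with its own parent,
            # which preserves population diversity across iterations.
            if fitness(child) < fitness(population[i]):
                population[i] = child
    return min(population, key=fitness)
```

Raising `iters` or `pop` gives the search more chances to find a class-flipping pixel, which is exactly the improvement direction the authors point to.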
<br />
For future work, the authors suggest using the proposed algorithm and natural image samples to develop more advanced attack models and to generate better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched the 1-, 3- and 5-pixel attacks, they did so on different target networks, i.e. the 1- and 3-pixel attacks on AllConv and the 5-pixel attack on NiN, and then compared the corresponding success rates. Such a comparison is not rigorous, since different networks might yield different classification results. Although we understand that the authors wanted to show the best attack results among those networks, all three attacks should be conducted on each defense system, so that the performance of every attack on every target network can be seen. The conclusions drawn from those results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, but not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 reported good experimental results on ImageNet, so attack experiments against these networks should also be conducted to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N. (2017). https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., & Sakurai, K. (2017). One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R. (2017). https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works (LAS and FGSM), single dimensional perturbation vector finds the corresponding adversarial images for most natural images. (Figure 4)<br />
<br />
<gallery mode="slideshow"><br />
File:Capture.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that single pixel change could perturb a considerable portion of images. According to the experimental results, one pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35705One pixel attack for fooling deep neural networks2018-03-27T16:49:02Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
By comparing the non-targeted attack effectiveness between the proposed method and two previous works (LAS and FGSM), single dimensional perturbation vector finds the corresponding adversarial images for most natural images. (Figure 4)<br />
<gallery mode="slideshow"><br />
File:Capture.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that single pixel change could perturb a considerable portion of images. According to the experimental results, one pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35703One pixel attack for fooling deep neural networks2018-03-27T16:44:24Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
Using <br />
<gallery mode="slideshow"><br />
File:Capture.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that single pixel change could perturb a considerable portion of images. According to the experimental results, one pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35702One pixel attack for fooling deep neural networks2018-03-27T16:41:47Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted attacks and non-targeted attacks were considered but only targeted attacks were conducted since the performance of non-targeted attack results could be obtained from the result of targeted attacks by applying a fitness function to increase the probability level of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet played as defense system. (Figure 1)<br />
<br />
600 sample images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only one-pixel attacked was carried out aiming to verify if extremely small pixel modification in a relatively large image can alternate the classification result.<br />
<br />
Since the number of classes in this dataset is way larger than the one of CIFAR-10, the authors only launched non-targeted attacks and applied a fitness function to decrease the probability level of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack succeeded (Table 1) in general and success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2~4 classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Generally, all these three networks are vulnerable to one-pixel attack. When more pixels were allowed modifications, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, one-pixel attack performed well regarding to relatively large images (Table 1). The low confidence 5.53% is due to the large number of classes and the main efforts in decreasing the probability level of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
<br />
<gallery mode="slideshow"><br />
File:Capture.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding point boundaries where class labels changes by moving data points along few dimensions and analyzing the frequency of changes in class labels quantitatively; it also proves that single pixel change could perturb a considerable portion of images. According to the experimental results, one pixel attack can further work on different network structures or image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate can be further improved. <br />
<br />
For the future work, it is suggested to use the proposed algorithms and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1, 3 and 5-pixel attacks, they did it on different target networks, i.e. 1 and 3-pixel attacks on AllConv and 5-pixel attack on NiN, then compared the corresponding success rates. Such a comparison is not rigorous as different networks might lead to varied classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on the same defense systems respectively and consequently we can see the performance of each attack on each target network. The conclusion from these results would be more convincing.<br />
<br />
* Regarding to ImageNet, only AlexNet was used to attack but AllConv, NiN and VGG16. The papers in which AllConv and VGG16 were introduced have good experiment results on ImageNet. Attack experiments towards these networks should be considered to show the effectiveness of one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini.N,2017 https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, 2017 https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Roman Trusov,2017 https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>Z3chuhttp://wiki.math.uwaterloo.ca/statwiki/index.php?title=One_pixel_attack_for_fooling_deep_neural_networks&diff=35701One pixel attack for fooling deep neural networks2018-03-27T16:41:16Z<p>Z3chu: /* Results */</p>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intention of creating adversarial attacks to better understand the security of DNNs, the one-pixel attack should be considered for two main reasons: 1) it offers a new way of exploring the high-dimensional DNN input space using fewer, lower-dimensional slices, in contrast to previous work where perturbation was done by adding a small value to every pixel; 2) it provides a measure of perceptiveness, demonstrating the severity of a one-pixel attack compared with few-pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Figure 1. Adversarial Example]]<br />
<br />
= Related works =<br />
The sensitivity of DNNs to well-tuned artificial perturbations has been investigated in various related works.<br />
<br />
1. Perturbations were first crafted by several gradient-based algorithms that use back-propagation to obtain gradient information (C. Szegedy et al.).<br />
<br />
2. A fast gradient sign algorithm for calculating effective perturbations, under the hypothesis that the linearity and high dimensionality of the inputs are the main reasons for the vulnerability (I. J. Goodfellow et al.).<br />
<br />
3. A greedy perturbation search method that assumes the linearity of DNN decision boundaries (S. M. Moosavi-Dezfooli et al.), and a Jacobian matrix used to build an “Adversarial Saliency Map”, which indicates the effectiveness of conducting a fixed-length perturbation along the direction of each axis (N. Papernot et al.).<br />
<br />
4. Images that can hardly be recognized by human eyes are nevertheless classified by the network with high confidence (A. Nguyen et al.).<br />
<br />
5. Several black-box attacks that require no internal knowledge of the target system, such as gradients, have also been proposed; one of them utilized single-pixel modification only as a starting point to derive a further semi-black-box attack that needs to modify more pixels (N. Narodytska et al.).<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, this suggests that most data points in the input space are gathered near the decision boundaries (A. Fawzi, S. M. Moosavi-Dezfooli, and P. Frossard).<br />
<br />
7. A curvature analysis found that the region along most directions around natural images is flat, with only a few directions in which the space is curved and the images are sensitive to perturbation (A. Fawzi et al.).<br />
<br />
8. Universal perturbations (i.e., a perturbation that, when added to any natural image, generates adversarial samples with high effectiveness) were shown to be possible and to achieve high effectiveness compared to random perturbation. This indicates that the diversity of boundaries might be low, while the boundaries’ shapes near different data points are similar (S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard).<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as a constrained optimization problem. We are given a classifier '''F''' and a targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted-attack adversary is to find the perturbation vector of constrained size (norm) that maximizes the probability the classifier assigns to the targeted class. Formally:<br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For the few-pixel attack, the problem statement changes slightly: we constrain the number of non-zero elements of the perturbation vector instead of its norm.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
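Written out explicitly (a reconstruction following the paper's formulation; the equation images above should show the same), the two problems are:<br />
<br />
:<math> \max_{e(x)} \; f_{adv}(x+e(x)) \quad \textrm{subject to} \quad \|e(x)\| \leq L </math><br />
:<math> \max_{e(x)} \; f_{adv}(x+e(x)) \quad \textrm{subject to} \quad \|e(x)\|_0 \leq d </math><br />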
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength (norm) of the shift is bounded by L. For our one-pixel attack, we use the second formulation with d = 1: '''x''' may only be perturbed at a single pixel, so the degrees of freedom of the shift are greatly reduced. However, the shift at that one pixel can be of arbitrary strength.<br />
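To make the one-pixel case concrete: each candidate perturbation can be encoded as a tuple of coordinates plus RGB values. Below is a minimal NumPy sketch of applying such a candidate; the function name and toy image are illustrative, not the authors' code.<br />
<br />
```python
import numpy as np

def apply_perturbation(image, candidate):
    """Apply a one-pixel perturbation encoded as (x, y, r, g, b).

    Every pixel except the one at (x, y) is left untouched; the shift
    at that single pixel may be of arbitrary strength in RGB range.
    """
    x, y, r, g, b = (int(v) for v in candidate)
    perturbed = image.copy()
    perturbed[y, x] = (r, g, b)
    return perturbed

# Toy 32x32x3 image (CIFAR-10 resolution), all zeros for clarity.
img = np.zeros((32, 32, 3), dtype=np.uint8)
adv = apply_perturbation(img, (5, 7, 255, 0, 0))
```

For a d-pixel attack the candidate is simply d concatenated 5-tuples, one per modified pixel.<br />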
<br />
<br />
== Differential Evolution ==<br />
Differential evolution (DE) is a population-based optimization algorithm belonging to the class of evolutionary algorithms (EA). In DE's selection process, the population is in a sense "segmented" into families (offspring and parents), and the fittest member of each family is kept. This segmentation lets DE keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
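As an illustrative sketch of how DE could drive the search (a toy, not the authors' implementation: `toy_target_confidence` stands in for querying the network with a candidate (x, y, r, g, b) perturbation, and the population size and iteration count are kept small for speed):<br />
<br />
```python
import numpy as np

rng = np.random.default_rng(0)

def toy_target_confidence(candidate):
    # Stand-in for the real fitness: apply the (x, y, r, g, b) candidate
    # to the image, query the network, and return the probability of the
    # target class. Here a smooth fake with its optimum at x=10, r=200.
    x, y, r, g, b = candidate
    return 1.0 / (1.0 + abs(x - 10) + abs(r - 200))

def differential_evolution(fitness, bounds, pop_size=40, iters=60, F=0.5):
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, len(bounds))) * (hi - lo)
    scores = np.array([fitness(ind) for ind in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: child = parent1 + F * (parent2 - parent3).
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            child = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Selection within the family: keep whichever is fitter.
            s = fitness(child)
            if s > scores[i]:
                pop[i], scores[i] = child, s
    best = int(np.argmax(scores))
    return pop[best], scores[best]

# Search space for one pixel of a 32x32 RGB image: x, y, r, g, b.
bounds = [(0, 31), (0, 31), (0, 255), (0, 255), (0, 255)]
best, score = differential_evolution(toy_target_confidence, bounds)
```

The paper reports using F = 0.5 with a much larger population; the smaller numbers here just keep the toy fast.<br />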
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images that were successfully classified into any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the percentage of perturbed images that were successfully classified as their targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability assigned to the target class over all successful perturbations, divided by the number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of target classes each natural image can be perturbed to; in particular, it counts the images that cannot be perturbed to any other class.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
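A short sketch of how the first two metrics could be computed from attack outcomes (the variable names are hypothetical; `confs` holds the probability the network assigns to each image's target class after the attack):<br />
<br />
```python
def attack_metrics(preds, true_classes, target_classes, confs):
    """Compute success rates and the mean adversarial probability label.

    preds[k]          : class predicted for the k-th perturbed image
    true_classes[k]   : ground-truth class of image k
    target_classes[k] : class the targeted attack aimed for
    confs[k]          : probability assigned to target_classes[k]
    """
    n = len(preds)
    # Non-targeted success: prediction differs from the true class.
    nontargeted = sum(p != t for p, t in zip(preds, true_classes)) / n
    # Targeted success: prediction equals the intended target class.
    hits = [c for p, tgt, c in zip(preds, target_classes, confs) if p == tgt]
    targeted = len(hits) / n
    # Adversarial probability label: mean confidence over successes.
    mean_conf = sum(hits) / len(hits) if hits else 0.0
    return nontargeted, targeted, mean_conf

# Toy run with four attacked images.
nt, t, mc = attack_metrics(
    preds=[3, 1, 7, 2], true_classes=[3, 0, 0, 2],
    target_classes=[5, 1, 7, 4], confs=[0.1, 0.6, 0.8, 0.2])
# nt = 0.5, t = 0.5, mc = 0.7
```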
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN) and the VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks, and generated 500 samples with 3- and 5-pixel modifications respectively to conduct three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three networks, and the success-rate comparison was performed as follows: 1-pixel and 3-pixel attacks on AllConv and a 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were conducted, since non-targeted results can be obtained from targeted attacks by applying a fitness function that increases the probability of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
BVLC AlexNet served as the defense system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is much larger than that of CIFAR-10, the authors only launched non-targeted attacks, applying a fitness function that decreases the probability of the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both success rate and confidence increased by a significant amount when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image can be perturbed to 2–4 other classes in all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. Overall, all three networks are vulnerable to the one-pixel attack. When modification of more pixels was allowed, a significant number of images were successfully perturbed to up to 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack performed well even on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function's focus on decreasing the probability of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
File:Capture.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of locating points on the decision boundary where class labels change by moving data points along a few dimensions, and of quantitatively analyzing the frequency of changes in class labels; it also shows that a single-pixel change can perturb a considerable portion of images. According to the experimental results, the one-pixel attack may extend to other network structures or image sizes. Given time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the initial candidate solutions, the perturbation success rate could be improved further. <br />
<br />
For future work, it is suggested to use the proposed algorithm and natural image samples for the further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1-, 3- and 5-pixel attacks, they did so on different target networks, i.e. 1- and 3-pixel attacks on AllConv and the 5-pixel attack on NiN, and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks may lead to different classification results. Although we understand that the authors wanted to show the best attack results among those networks, we think all three attacks should be conducted on each of the defense systems so that the performance of each attack on each target network can be seen. The conclusions drawn from these results would then be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers in which AllConv and VGG16 were introduced report good experimental results on ImageNet, so attack experiments on these networks should be considered to show the effectiveness of the one-pixel attack.<br />
<br />
= References =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., and Sakurai, K., 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R., 2017. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For the few-pixel attack, the problem statement changes slightly: we constrain the number of non-zero elements of the perturbation vector instead of its norm.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength (norm) of the shift is bounded by L. For the one-pixel attack, we use the second equation with d=1: '''x''' may be perturbed only at a single pixel, so the degrees of freedom of the shift are greatly reduced. However, the shift at that pixel can be of arbitrary strength.<br />
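To make the d=1 constraint concrete, here is a minimal sketch of how a one-pixel candidate solution can be encoded and applied to an image. The `(x, y, r, g, b)` encoding and the function name are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def apply_perturbation(image, candidate):
    """Apply a one-pixel perturbation encoded as (x, y, r, g, b).

    `image` is an H x W x 3 array; the first two candidate values select
    the pixel and the last three overwrite its RGB channels. (Encoding
    and names are illustrative, not from the paper's implementation.)
    """
    perturbed = image.copy()
    x, y = int(candidate[0]), int(candidate[1])
    perturbed[x, y, :] = candidate[2:5]
    return perturbed

# Example: set one pixel of a 32x32x3 (CIFAR-10-sized) image to pure red.
img = np.zeros((32, 32, 3), dtype=np.uint8)
adv = apply_perturbation(img, [5, 7, 255, 0, 0])
```

Note that exactly one pixel differs between `img` and `adv`, but that pixel's change is unbounded in magnitude, matching the d=1 formulation above.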
<br />
<br />
== Differential Evolution ==<br />
Differential evolution (DE) is a population-based optimization algorithm belonging to the class of evolutionary algorithms (EA). In DE's selection process, the population is in effect "segmented" into families (each offspring and its parent), and the fitter member of each family is kept. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for this problem.<br />
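The scheme above can be sketched as follows. This is a generic DE/rand/1 variant minimizing a toy function, not the authors' exact configuration (population size, mutation factor F, and iteration count here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(fitness, bounds, pop_size=20, F=0.5, iters=100):
    """Minimal DE/rand/1 sketch: each child is built from three random
    population members and competes only with its own parent, which is
    the family-wise selection that preserves diversity."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random init
    scores = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            child = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            s = fitness(child)
            if s < scores[i]:              # parent vs. offspring selection
                pop[i], scores[i] = child, s
    best = int(np.argmin(scores))
    return pop[best], scores[best]

# Toy usage: minimize the sphere function in 3 dimensions.
best_x, best_f = differential_evolution(lambda v: float((v ** 2).sum()),
                                        bounds=[(-5, 5)] * 3)
```

For the attack itself, the fitness function would score a candidate pixel perturbation by the target network's probability output (no gradients needed), and each candidate would be a `(x, y, r, g, b)` vector as in the problem description.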
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images that were successfully classified as any class other than the true one. <br /><br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the percentage of perturbed images successfully classified as the targeted class.<br />
:<math> \textrm{success rate }=\frac{1}{N}\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TargetClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The sum of the probability assigned to the target class over all successful perturbations, divided by the number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of target classes to which each image can be perturbed; in particular, it also counts the images that cannot be perturbed to any other class.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times each original-target class pair was successfully attacked.<br />
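The first two metrics can be sketched in a few lines. The function names and toy inputs are illustrative assumptions, not from the paper:

```python
import numpy as np

def nontargeted_success_rate(pred_classes, true_classes):
    """Fraction of attacked images classified as anything but the true class."""
    pred = np.asarray(pred_classes)
    true = np.asarray(true_classes)
    return float(np.mean(pred != true))

def mean_confidence(target_probs, successes):
    """Adversarial probability label: mean target-class probability
    taken over the successful attacks only."""
    probs = np.asarray(target_probs, dtype=float)
    mask = np.asarray(successes, dtype=bool)
    return float(probs[mask].sum() / mask.sum())

# Toy example: 4 attacked images, 2 fool the network.
rate = nontargeted_success_rate([1, 2, 3, 3], [1, 1, 1, 3])
conf = mean_confidence([0.9, 0.2, 0.8, 0.1], [True, False, True, False])
```

Here `rate` is 0.5 (two of four images misclassified) and `conf` averages only the probabilities of the two successful attacks.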
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks served as defense systems: the All Convolutional Network (AllConv), Network in Network (NiN) and the VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks, and generated 500 samples each with 3- and 5-pixel modifications to conduct three-pixel and five-pixel attacks. The effectiveness of the one-pixel attack was evaluated on all three introduced networks, and the success-rate comparison was performed as follows: 1-pixel and 3-pixel attacks on AllConv and the 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were conducted, since non-targeted results can be derived from targeted ones by applying a fitness function that increases the probability assigned to the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
The BVLC AlexNet served as the defense system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is much larger than that of CIFAR-10, the authors launched only non-targeted attacks and applied a fitness function that decreases the probability assigned to the true class. <br />
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Networks Architecture]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both success rate and confidence increased significantly when more pixels were modified (Table 2). Moreover, with one-pixel modification, each image can be perturbed to 2–4 other classes on all 3 defense systems, i.e. AllConv, NiN and VGG16 (Figure 2); in particular, VGG16 is slightly more robust than the other two. In general, all three networks are vulnerable to the one-pixel attack. When more pixels were allowed to be modified, a significant number of images could be perturbed to up to 8 other classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function's focus on decreasing the probability of the true class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
<br />
[[File:Comparison of non-targeted attack effectiveness under 3 methods.png|thumb|center|400px|Figure 4. Comparison of non-targeted attack effectiveness between proposed method, LSA and FGSM]]<br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding points on the decision boundary where class labels change by moving data points along few dimensions, and of analyzing the frequency of such label changes quantitatively; it also shows that a single-pixel change can successfully perturb a considerable portion of images. According to the experimental results, the one-pixel attack may extend to other network structures and image sizes. Given the time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or the number of initial candidate solutions, the perturbation success rate could be further improved. <br />
<br />
For future work, the authors suggest using the proposed algorithm and natural image samples for further development of advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched 1-, 3- and 5-pixel attacks, they did it on different target networks (1- and 3-pixel attacks on AllConv, the 5-pixel attack on NiN) and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks can yield very different classification behaviour. Although we understand that the authors wanted to show the best attack results among those networks, all three attacks should be conducted on each defense system so that the performance of each attack on each target network can be seen. The conclusions drawn from such results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 report good experimental results on ImageNet, so attack experiments on these networks should also be considered to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= References =<br />
<br />
Carlini, N. (2017). https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., & Sakurai, K. (2017). One pixel attack for fooling deep neural networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R. (2017). https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>
<hr />
<div>= Presented by = <br />
1. Ziheng Chu<br />
<br />
2. Minghao Lu<br />
<br />
3. Qi Mai<br />
<br />
4. Qici Tan<br />
<br />
= Introduction =<br />
Neural network first caught many people’s attention in imageNet contest in 2012. Neural network increased accuracy to 85% from 75%. The following year, it is increased to 89%. From no one used neural network to everyone uses the neural network. Today we have 97% accuracy in using deep neural network (DNN). So the problem of image recognition by are artificial intelligence is solved. However, there is one catch(Carlini.N,2017).<br />
<br />
The catch is that the DNN is really easy to be fooled. Here is an example. An image of a dog is classified as a hummingbird. Research studies by Google Brian, which is a deep learning artificial intelligence research team at Google, showed that any machine learning classifier can be tricked to give wrong predictions. The action of designing an input in a specific way to get the wrong result from the model is called an adversarial attack (Roman Trusov,2017). The input image is the adversarial image. This image is created by adding a tiny amount of perturbation, which is not so imperceptible to human eyes. After zooming into figure 1, a small amount of perturbation led to misclassify a dog as a hummingbird.<br />
<br />
How does it possible? DNN models consist of transformation. Most of those transformations are sometimes very sensitive to a small change. Think of the DNN as a set of high-dimensional decision boundary. When an input is not perfect, if the decision boundary is too simple and linear, mostly it leads to misclassify. <br />
<br />
Harnessing this sensitivity is a way to better understand and product robust algorithm in AI security. This paper aims to demonstrate the vulnerability of DNN by presenting some extreme scenarios - one pixel attack. As shown in figure3, only one pixel was perturbed, the classification was wrong in each image. Although, there is no profound defense to the attack as of current state, the investigation of one-pixel attack may shield lights on the behavior of DNN. Ultimately, it leads to the discussion of the security implications to future solution.<br />
<br />
This paper proposed one pixel attack in a scenario where the only information available is the probability labels. Comparing to previous work, this proposal showed its effectiveness of successful attack rate up to 73%, its simplicity of semi-black-box which only required probability label no need inner information, and its flexibility in attacking more models, especially the networks that are not differentiable and the gradient calculation is difficult.<br />
<br />
With the intension of creating an adversarial attack for better understanding the security of DNN, one pixel attack should be considered. Two main reasons: 1) a new way of exploring the high dimensional DNN by using fewer and lower dimensional slices. It is different from previous work, where perturbation was done by adding small value to each pixel. 2) a measure of perceptiveness to demonstrate the severity of one-pixel attack as comparing to a few pixel examples.<br />
<br />
[[File:abcd.png|thumb|center|400px|Image title or explanation]]<br />
<br />
= Related works =<br />
The sensitivity to well-turned artificial perturbation were investigated in various related work.<br />
<br />
1. First perturbation was crafted by several gradient-based algorithms using back- propagation for obtaining gradient information. (C. Szegedy et al. )<br />
<br />
2. Fast gradient sign algorithm for calculating effective perturbation<br />
It was with the hypotheses of the linearity and high-dimensions of inputs were the main reason of vulnerability. (I.J.Goodfellow et al.) <br />
<br />
3. Greedy perturbation searching method by assuming the linearity of DNN decision boundary (S.M Moosavi-Dezfooli et al.) Jacobian matrix to build “Adversarial Saliency Map” which indicates the effectiveness of conducting a fixed length perturbation through the direction of each axis (N. Papernot et al. )<br />
<br />
4. The images can hardly be recognized by human eyes but nevertheless classified by the network with high confi- dence. (A. Nguyen et al. )<br />
<br />
5. Several black-box attacks that require no internal knowledge about the target systems such as gradients, have also been proposed. only utilized it as a starting point to derive a further semi black-box attack which needs to modify more pixels (N. Narodytska et al)<br />
<br />
6. Both natural and random images are found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, it suggests that most data points in the input space are gathered near to the boundaries. ( A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard.)<br />
<br />
7. A curvature analysis region along most directions around natural images are flat with only few directions where the space is curved and the images are sensitive to perturbation. (A. Fawzi et al.)<br />
<br />
8. Universal perturbations (i.e. a perturbation that when added to any natural image can generate adversarial samples with high effectiveness) were shown possible and to achieve a high effectiveness when compared to random perturbation. This indicates that the diversity of boundaries might be low while the boundaries’ shapes near different data points are similar (S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard.)<br />
<br />
= Methodology =<br />
<br />
== Problem Description ==<br />
<br />
We can formalize the generation of adversarial images as an constrained optimization problem. We are given a classifier '''F''' and targeted adversarial class '''adv'''. Let '''x''' be the vectorized form of an image. Let F_adv('''x''') represent the probability assigned to the targeted class '''adv''' for vector '''x''' by the given classifier. Let e('''x''') represent an additive adversarial perturbation vector for image '''x'''. The goal of the targeted attack adversary is to find the perturbation vector with constrained size(norm) that maximizes the probability assigned to the targeted class by the classifier. Formally <br />
<br />
[[File:ONE_PIXEL_CSP1.png | center ]]<br />
<br />
<br />
For few-pixel attack, the problem statement is changed slightly. We constrain on the number of non-zero elements of the perturbation vector instead of the norm of the vector.<br />
<br />
[[File:ONE_PIXEL_CSP2.png | center ]]<br />
<br />
<br />
For the usual adversarial case, we can shift '''x''' in all dimensions, but the strength(norm) of the shift is bounded by L. For our one-pixel attack case, we are using the second equation with d=1. '''x''' is only allowed to be perturbed along a single axis, so the degree of freedom of the shift is greatly reduces. However, the shift in the one axis can be of arbitrary strength.<br />
<br />
<br />
== Differential Evolution ==<br />
Differential evolution(DE) is a population based optimization algorithm belonging to the class of evolutionary algorithms(EA). For DE, in the selection process, the population is in a way "segmented" into families( offspring and parents ), and the most optimal from each family is chosen. This segmentation allows DE to keep more diversity in each iteration than other EAs and makes it more suitable for complex multi-modal optimization problems. Additionally, DE does not use gradient information, which makes it suitable for our problem.<br />
<br />
= Evaluation and Results =<br />
<br />
=== Measurement Metrics and Datasets ===<br />
<br />
The authors used 4 measurement metrics to evaluate the effectiveness of the proposed attacks. <br />
* '''Success Rate''' <br /> ''Non-targeted attack'': the percentage of adversarial images were successfully classified to any possible classes other than the true one. <br /><br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{Attack}(\textrm{Image}_k))\neq \textrm{TrueClass}_k) </math> <br />''Targeted attack'': the probability of successfully classifying a perturbed image to a targeted class.<br />
:<math> \textrm{success rate }=\displaystyle\sum_{k=1}^N I(\textrm{Network}(\textrm{TargetedAttack}(\textrm{Image}_k))= \textrm{TrueClass}_k) </math><br />
* '''Adversarial Probability Labels (Confidence)'''<br />The ratio of sum of probability level of the target class for each successful perturbation and the total number of successful attacks. This gives the mean confidence of the successful attacks on the target classification system.<br />
* '''Number of Target Classes'''<br />The number of images after perturbations cannot be classified to any other classes.<br />
* '''Number of Original-Target Class Pairs'''<br />The number of times of each pair being attacked.<br />
<br />
<br />
''Evaluation setups on CIFAR-10 test dataset'':<br />
<br />
Three types of networks played as defense systems: All Convolutional Networks (AllConv), Network in Network (NiN) and VGG16 network. (Figure 1)<br />
<br />
The authors randomly sampled 500 images, with resolution 32x32x3, from the dataset to perform one-pixel attacks and generated 500 samples with 3 and 5 pixel-modification respectively to conduct three-pixel and five-pixel attacks. The effectiveness of one-pixel attack was evaluated on all three introduced networks and the performance comparison, using success rate, was performed as the following: 1-pixel attack and 3-pixel attack on AllConv and 5-pixel attack on NiN. <br />
<br />
Both targeted and non-targeted attacks were considered, but only targeted attacks were actually conducted, since non-targeted results can be obtained from the targeted ones; the fitness function used increases the probability of the target class.<br />
<br />
<br />
<br />
''Evaluation setups on ImageNet validation dataset (ILSVRC 2012)'':<br />
<br />
The BVLC AlexNet served as the target system. (Figure 1)<br />
<br />
600 images were randomly sampled from the dataset. Due to the relatively high resolution of the images (224x224x3), only the one-pixel attack was carried out, aiming to verify whether an extremely small pixel modification in a relatively large image can alter the classification result.<br />
<br />
Since the number of classes in this dataset is far larger than that of CIFAR-10, the authors launched only non-targeted attacks and applied a fitness function that decreases the probability of the true class. <br />
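The two fitness functions follow directly from these descriptions; DE minimizes them, so the targeted variant returns the negated target-class probability (`probs` stands for the probability vector the attacked network outputs; the names are ours):<br />

```python
def targeted_fitness(probs, target_class):
    # CIFAR-10 targeted attacks: raise the target-class probability,
    # i.e. minimize its negation.
    return -probs[target_class]

def non_targeted_fitness(probs, true_class):
    # ImageNet non-targeted attacks: directly lower the true-class probability.
    return probs[true_class]
```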
<br />
<br />
<br />
[[File:1px_attack.jpg|thumb|center|400px|Figure 1. Network Architectures]]<br />
<br />
=== Results ===<br />
<br />
On CIFAR-10, the proposed one-pixel attack generally succeeded (Table 1), and both success rate and confidence increased significantly when more pixels were modified (Table 2). Moreover, with a one-pixel modification, each image could be perturbed to roughly 2 to 4 other classes on all three target networks, i.e. AllConv, NiN and VGG16 (Figure 2); VGG16 was slightly more robust than the other two, but all three networks are vulnerable to the one-pixel attack. When more pixels were allowed to be modified, a significant number of images were successfully perturbed to as many as 8 classes (Figure 3).<br />
<br />
On ImageNet, the one-pixel attack also performed well on relatively large images (Table 1). The low confidence of 5.53% is due to the large number of classes and to the fitness function, which only decreases the probability of the true class rather than raising a specific target class. <br />
<br />
<gallery mode="slideshow"><br />
File:tables.jpg<br />
File:1px_3nets.jpg|frameless|Figure 2. One-pixel attack on three networks (Su, Vargas and Sakurai, 2018)<br />
File:3attacks.png|frameless|Figure 3. 1, 3 and 5-pixel attacks (Su, Vargas and Sakurai, 2018)<br />
</gallery><br />
<br />
<br />
<gallery mode="slideshow"><br />
File:Comparison of non-targeted attack effectiveness under 3 methods.png|frameless|Figure 4. Comparison of non-targeted attack effectiveness between the proposed method, LSA and FGSM<br />
</gallery><br />
<br />
= Discussion and Future Work =<br />
<br />
In general, this paper illustrates the possibility of finding points on the decision boundary where class labels change by moving data points along a few dimensions, and of quantitatively analyzing the frequency of such label changes; it also shows that a single-pixel change can successfully perturb a considerable portion of images. According to the experimental results, the one-pixel attack may further generalize to other network structures and image sizes. Given time constraints, the conducted experiments used a low number of DE iterations with a relatively small set of initial candidate solutions; by increasing the number of DE iterations or initial candidate solutions, the perturbation success rate could be further improved. <br />
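The dependence on DE iterations and population size can be seen in a minimal, hand-rolled DE/rand/1 loop. This is a simplified sketch: the paper's DE also uses crossover and different hyperparameters, and `toy_confidence` below is a stand-in for querying the real network's true-class probability:<br />

```python
import random

def differential_evolution(fitness, bounds, pop_size=40, iters=50, F=0.5, seed=0):
    """Minimize `fitness` with DE/rand/1: each child is a + F * (b - c),
    clipped to bounds, and replaces its parent only if it scores no worse."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample(range(pop_size), 3)
            child = [min(max(pop[a][d] + F * (pop[b][d] - pop[c][d]),
                             bounds[d][0]), bounds[d][1])
                     for d in range(len(bounds))]
            s = fitness(child)
            if s <= scores[i]:
                pop[i], scores[i] = child, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

def toy_confidence(candidate):
    """Stand-in for the true-class probability after a one-pixel change:
    here, confidence simply drops as the injected red value grows."""
    x, y, r, g, b = candidate
    return 1.0 / (1.0 + r / 255.0)

# Candidate = (x, y, r, g, b) for a 32x32 image; DE drives confidence down.
bounds = [(0, 31), (0, 31), (0, 255), (0, 255), (0, 255)]
best, score = differential_evolution(toy_confidence, bounds)
```

More iterations or a larger population give DE more chances to lower the fitness, which is exactly the improvement the discussion anticipates.<br />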
<br />
For future work, the authors suggest using the proposed algorithm and natural image samples to develop more advanced models and better artificial adversarial samples.<br />
<br />
= Critique =<br />
<br />
* When the authors launched the 1-, 3- and 5-pixel attacks, they did so on different target networks, i.e. 1- and 3-pixel attacks on AllConv and the 5-pixel attack on NiN, and then compared the corresponding success rates. Such a comparison is not rigorous, as different networks may yield different classification results. Although we understand that the authors wanted to show the best attack results among those networks, all three attacks should be conducted on each defense system so that the performance of each attack on each target network can be seen; the conclusions drawn from such results would be more convincing.<br />
<br />
* Regarding ImageNet, only AlexNet was attacked, not AllConv, NiN or VGG16. The papers that introduced AllConv and VGG16 report strong results on ImageNet, so attack experiments against these networks should also be considered to demonstrate the effectiveness of the one-pixel attack.<br />
<br />
= Reference =<br />
<br />
Carlini, N., 2017. https://www.youtube.com/watch?v=yIXNL88JBWQ<br />
<br />
Su, J., Vargas, D. V., Sakurai, K., 2017. One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf<br />
<br />
Trusov, R., 2017. How Adversarial Attacks Work. https://blog.xix.ai/how-adversarial-attacks-work-87495b81da2d</div>