# One pixel attack for fooling deep neural networks

From statwiki


## Revision as of 20:52, 25 March 2018


# Presented by

1. Ziheng Chu

2. Minghao Lu

3. Qi Mai

4. Qici Tan

# Introduction

# Methodology

We first formalize the generation of adversarial images as a constrained optimization problem. Let <math>\bold{x}</math> be the vectorized form of an image, and let <math>f_t(\bold{x})</math> be the probability that the classifier assigns to class <math>t</math> given input <math>\bold{x}</math>.
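In the paper's formulation, the attacker seeks an additive perturbation <math>e(\bold{x})</math> that maximizes the target-class probability <math>f_t(\bold{x}+e(\bold{x}))</math> subject to an <math>L_0</math> constraint limiting how many pixels change (a single pixel for the one-pixel attack), and the search is carried out with differential evolution. The sketch below illustrates this setup under stated assumptions: a toy linear-softmax "classifier" stands in for the CNNs attacked in the paper, the image size and the helper names (`f`, `perturb`, `objective`) are hypothetical, and SciPy's `differential_evolution` plays the role of the paper's evolutionary search. A candidate solution is encoded, as in the paper, as a 5-tuple (row, column, R, G, B).

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Toy stand-in classifier: a fixed linear map followed by softmax over
# 3 classes. (Hypothetical; the paper attacks real CNNs on CIFAR-10.)
H, W, C = 8, 8, 3
N_CLASSES = 3
weights = rng.normal(size=(N_CLASSES, H * W * C))

def f(x):
    """Return class probabilities f(x) for a flattened image x."""
    logits = weights @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def perturb(x, p):
    """Apply a one-pixel perturbation encoded as p = (row, col, r, g, b)."""
    xp = x.copy().reshape(H, W, C)
    row, col = int(p[0]), int(p[1])
    xp[row, col, :] = p[2:5]          # overwrite a single pixel's channels
    return xp.reshape(-1)

x0 = rng.uniform(size=H * W * C)      # a random "image" with values in [0, 1]
target = 1                            # hypothetical target class t

def objective(p):
    # differential_evolution minimizes, so negate the target probability.
    return -f(perturb(x0, p))[target]

# Search space: pixel coordinates plus the three channel values.
bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(objective, bounds, maxiter=20, seed=0)

p_before = f(x0)[target]
p_after = f(perturb(x0, result.x))[target]
```

Encoding the perturbation as one 5-tuple keeps the search space 5-dimensional regardless of image size, which is what makes the evolutionary search tractable; the <math>L_0 \le 1</math> constraint is enforced by construction rather than penalized in the objective.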