AugMix: A New Data Augmentation Method to Increase the Robustness of the Algorithm


Presented by

Abhinav Chanana

Introduction

Machine learning algorithms often assume that the training data is a faithful representation of the data encountered during deployment. In practice, models frequently face small corruptions at test time, and ignoring this mismatch reduces robustness and accuracy because the models end up fitting the noise. Even a small amount of corruption can substantially degrade performance: Hendrycks & Dietterich (2019) show that classification error on the ImageNet test set rises from 25% to 62% when corruptions are introduced. Simply training on a fixed set of corruptions does not solve the problem, because the network tends to memorize those specific corruptions rather than generalize to unseen ones. The paper also provides evidence that networks trained with translation augmentations remain highly sensitive to small pixel shifts. To address this, the paper proposes AugMix, a data augmentation method that achieves new state-of-the-art results for robustness and uncertainty estimation while maintaining accuracy on standard benchmarks. The experiments use the CIFAR-10, CIFAR-100, and ImageNet datasets to confirm the results. AugMix combines stochastic, diverse augmentations, a Jensen-Shannon divergence consistency loss, and a formulation that mixes multiple augmented images to achieve state-of-the-art performance.

Approach

At a high level, AugMix applies several basic augmentation operations. These operations are composed into random chains, producing a diverse set of augmented images that are then mixed with the original image. Training uses a consistency loss based on the Jensen-Shannon divergence between the model's predictions on the clean and augmented versions of each image, as sketched below.
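The following is a minimal sketch of the augment-and-mix step, assuming PIL images and NumPy. The small operation list and the hyperparameters (width 3, chain depth 1-3, Dirichlet/Beta parameter 1.0) follow the paper's described defaults, but this is an illustrative reimplementation, not the authors' code.

```python
import random

import numpy as np
from PIL import Image, ImageOps

# Example augmentation operations; the paper draws from a larger set of
# AutoAugment operations that do not overlap with the test corruptions.
AUGMENTATIONS = [
    ImageOps.autocontrast,
    ImageOps.equalize,
    lambda img: ImageOps.posterize(img, 4),
    lambda img: img.rotate(random.uniform(-30, 30)),
]

def augmix(image, width=3, depth=-1, alpha=1.0):
    """Return an augmented-and-mixed copy of `image` as a float array in [0, 1]."""
    ws = np.random.dirichlet([alpha] * width)   # convex weights for the chains
    m = np.random.beta(alpha, alpha)            # weight for the skip connection

    image_arr = np.asarray(image, dtype=np.float32)
    mix = np.zeros_like(image_arr)
    for i in range(width):
        aug = image.copy()
        chain_depth = depth if depth > 0 else random.randint(1, 3)
        for _ in range(chain_depth):            # compose 1-3 random operations
            op = random.choice(AUGMENTATIONS)
            aug = op(aug)
        mix += ws[i] * np.asarray(aug, dtype=np.float32)

    # Interpolate between the original image and the mixture of chains.
    mixed = m * image_arr + (1.0 - m) * mix
    return mixed / 255.0
```

The consistency loss compares the model's predictions on one clean and two independently augmented copies of each image. Below is a hedged PyTorch sketch of this Jensen-Shannon term; the function name and the weight of 12 are taken as assumptions based on the paper's description.

```python
import torch
import torch.nn.functional as F

def jsd_consistency_loss(logits_clean, logits_aug1, logits_aug2, lam=12.0):
    """Jensen-Shannon consistency term across one clean and two AugMix views."""
    p_clean = F.softmax(logits_clean, dim=1)
    p_aug1 = F.softmax(logits_aug1, dim=1)
    p_aug2 = F.softmax(logits_aug2, dim=1)

    # Mixture distribution M, clamped for numerical stability before the log.
    log_p_mix = torch.clamp((p_clean + p_aug1 + p_aug2) / 3.0, 1e-7, 1.0).log()
    js = (F.kl_div(log_p_mix, p_clean, reduction='batchmean') +
          F.kl_div(log_p_mix, p_aug1, reduction='batchmean') +
          F.kl_div(log_p_mix, p_aug2, reduction='batchmean')) / 3.0
    return lam * js
```

This term is added to the standard cross-entropy loss on the clean image, encouraging the model to make consistent predictions across augmented views rather than memorizing any particular corruption.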



Data Set Used

The authors use the following datasets for conducting the experiment.

1. CIFAR 10 - https://www.cs.toronto.edu/~kriz/cifar.html
2. CIFAR 100 - https://www.cs.toronto.edu/~kriz/cifar.html
3. ImageNet - http://image-net.org/download
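As a convenience, CIFAR-10 and CIFAR-100 can be obtained through torchvision, as in the hedged sketch below; the `root` path is illustrative, and the paper's actual data pipeline and preprocessing may differ. ImageNet must be downloaded manually from the link above.

```python
from torchvision import datasets

# Downloads the datasets to ./data if they are not already present.
cifar10_train = datasets.CIFAR10(root='./data', train=True, download=True)
cifar100_train = datasets.CIFAR100(root='./data', train=True, download=True)
```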

Experiments

Conclusion