CRITICAL ANALYSIS OF SELF-SUPERVISION

From statwiki
Revision as of 10:55, 25 November 2020 by Mrasooli (talk | contribs) (→‎Conclusion)

Presented by

Maral Rasoolijaberi

Introduction

This paper evaluated the performance of state-of-the-art self-supervision techniques on learning different parts of convolutional neural networks (CNNs). The main idea of self-supervised learning is to learn from unlabelled data by training CNNs without manual annotations, e.g., on a picture of a dog without the label “dog”. In self-supervised learning, the data generates its own ground-truth labels through pretext tasks such as rotation estimation.
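The rotation pretext task mentioned above can be sketched as follows: each image is rotated by 0, 90, 180, and 270 degrees, and the rotation index itself serves as the label the network must predict, so no human annotation is needed. The function name and array shapes below are illustrative, not taken from the paper.

```python
import numpy as np

def make_rotation_batch(image):
    """Create a self-labelled batch from one image: each of the four
    rotations (0, 90, 180, 270 degrees) becomes one training sample,
    and the rotation index is its ground-truth label."""
    rotations = [np.ascontiguousarray(np.rot90(image, k)) for k in range(4)]
    labels = np.arange(4)  # 0 -> 0 deg, 1 -> 90 deg, 2 -> 180 deg, 3 -> 270 deg
    return np.stack(rotations), labels

# Toy "image": an 8x8 single-channel array.
img = np.arange(64, dtype=np.float32).reshape(8, 8)
batch, labels = make_rotation_batch(img)
print(batch.shape)       # (4, 8, 8)
print(labels.tolist())   # [0, 1, 2, 3]
```

A CNN trained to classify these four rotations is forced to learn visual features (edges, textures, object parts) that transfer to downstream tasks.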

In this paper, different experiments were designed to learn deep features without human-provided labels, using either a single image or the whole dataset.

Previous Work

Motivation

Results

Conclusion

This paper revealed that if strong data augmentation is employed, as little as a single image is sufficient for self-supervision techniques to learn the first few layers of standard CNNs. The results confirmed that the weights of the first layers of deep networks contain limited information about natural images.
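The "strong augmentation" in this conclusion can be illustrated by a minimal sketch: random cropping plus random horizontal flipping, applied repeatedly to a single image, yields an effectively large and varied training set. The function, crop size, and shapes below are assumptions for illustration, not the paper's exact augmentation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_augment(image, crop=16):
    """One augmentation draw: random crop plus random horizontal flip.
    Applied many times, a single image produces many distinct patches."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    patch = image[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # horizontal flip
    return np.ascontiguousarray(patch)

# A single 32x32 "image" turned into a batch of 8 augmented patches.
img = rng.random((32, 32))
crops = np.stack([random_augment(img) for _ in range(8)])
print(crops.shape)  # (8, 16, 16)
```

Because early CNN layers capture low-level statistics, this synthetic diversity from one image is enough for them to converge to useful filters.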

Critiques

References