CRITICAL ANALYSIS OF SELF-SUPERVISION

== Presented by ==

Maral Rasoolijaberi

== Introduction ==

This paper evaluated the performance of state-of-the-art self-supervision techniques in learning different parts of convolutional neural networks (CNNs). The main idea of self-supervised learning is to learn from unlabelled data by training CNNs without manually annotated labels, e.g., a picture of a dog without the label “dog”. In self-supervised learning, the data generate their own ground-truth labels through pretext tasks such as rotation estimation.
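
As an illustration of such a pretext task, the following is a minimal sketch, assuming PyTorch, of RotNet-style rotation estimation: each unlabelled image yields four training examples labelled by the quarter-turn applied to it, so the labels come for free from the data. The helper name rotation_pretext_batch is hypothetical, not from the paper.

<pre>
import torch

def rotation_pretext_batch(images: torch.Tensor):
    """images: (N, C, H, W) batch of unlabelled images.
    Returns (4N, C, H, W) rotated images and (4N,) rotation labels 0-3."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns: 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# Usage: train a CNN classifier to predict the rotation label with
# cross-entropy; no manual annotation is involved at any point.
</pre>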

In this paper, different experiments were designed to learn deep features without human-provided labels, employing either a single image or the whole dataset.

== Previous Work ==

== Method ==

== Results ==

== Conclusion ==

This paper revealed that when strong data augmentation is employed, as little as a single image is sufficient for self-supervision techniques to learn the first few layers of standard CNNs. The results confirm that the weights of the first layers of deep networks contain only limited information about natural images.
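
The single-image result rests on generating many varied views of one picture. Below is a minimal sketch, assuming PyTorch and torchvision, of how aggressive augmentation (random crops and scales, flips, colour jitter) can turn one source image into a large surrogate training set; the transform magnitudes and the helper name single_image_dataset are illustrative, not the authors' exact recipe.

<pre>
import torch
from PIL import Image
from torchvision import transforms

# Strong, illustrative augmentation pipeline: crop/scale, flip, colour jitter.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.05, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.ToTensor(),
])

def single_image_dataset(path: str, n_samples: int) -> torch.Tensor:
    """Produce n_samples augmented views of one image as an (N, 3, 224, 224) tensor."""
    img = Image.open(path).convert("RGB")
    return torch.stack([augment(img) for _ in range(n_samples)])
</pre>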

== Critiques ==

== References ==