CRITICAL ANALYSIS OF SELF-SUPERVISION




Revision as of 00:06, 26 November 2020

Presented by Maral Rasoolijaberi

== Introduction ==

This paper evaluates the performance of state-of-the-art self-supervised (unsupervised) learning methods at learning the weights of convolutional neural networks (CNNs), in order to determine whether current self-supervision techniques can learn deep features from only one image. The main goal of self-supervised learning is to take advantage of the vast amount of unlabeled data, e.g., a picture of a dog without the label "dog", for training CNNs and finding a generalized image representation. In self-supervised learning, the data generate their own ground-truth labels via pretext tasks such as rotation estimation.
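To make the pretext-task idea concrete, here is a minimal sketch of how rotation estimation turns a single unlabeled image into labeled training data. This is an illustration of the general technique, not the paper's implementation; the function name `rotation_pretext_batch` is hypothetical.

```python
import numpy as np

def rotation_pretext_batch(image):
    """Given one unlabeled image (H x W x C array), generate four
    training examples whose ground-truth labels come from the data
    itself: each copy is rotated by k * 90 degrees and labeled with k.
    A CNN trained to predict k must learn useful visual features."""
    examples = []
    for k in range(4):
        rotated = np.rot90(image, k=k)   # rotate by k * 90 degrees
        examples.append((rotated, k))    # the rotation index is the label
    return examples

# Usage: a single unlabeled 32x32 RGB image yields four labeled examples.
image = np.zeros((32, 32, 3))
batch = rotation_pretext_batch(image)
```

The labels cost nothing to produce, which is why such pretext tasks scale to arbitrarily large unlabeled datasets.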

== Previous Work ==

== Method ==

== Results ==

== Conclusion ==

== Critiques ==

== References ==