CRITICAL ANALYSIS OF SELF-SUPERVISION
Revision as of 00:06, 26 November 2020
Presented by
Maral Rasoolijaberi
== Introduction ==
This paper evaluated the performance of state-of-the-art self-supervised (unsupervised) methods for learning the weights of convolutional neural networks (CNNs), in order to determine whether current self-supervision techniques can learn deep features from only one image. The main goal of self-supervised learning is to take advantage of the vast amounts of unlabeled data available for training CNNs and to find a generalized image representation. In self-supervised learning, the data generate their own ground-truth labels via pretext tasks such as rotation estimation.
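To make the idea of a pretext task concrete, the following is a minimal sketch (not from the paper) of how rotation estimation turns a single unlabeled image into labeled training examples: each 90-degree rotation of the image becomes an input, and the rotation index serves as its ground-truth label. The function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def make_rotation_batch(image):
    """Self-label one unlabeled image for the rotation-estimation
    pretext task: each of the four 90-degree rotations becomes a
    training example, and the rotation index (0..3) is its label."""
    examples = [np.rot90(image, k) for k in range(4)]  # 0, 90, 180, 270 degrees
    labels = list(range(4))                            # labels come for free
    return examples, labels

# One unlabeled 4x4 "image" yields four labeled (input, label) pairs.
image = np.arange(16).reshape(4, 4)
batch, labels = make_rotation_batch(image)
```

A CNN would then be trained to predict the label from the rotated input; the features it learns in doing so are what self-supervision hopes will transfer to downstream tasks.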