Meta-Learning For Domain Generalization
Revision as of 14:55, 9 November 2020
Presented by
Parsa Ashrafi Fashi
Introduction
The domain shift problem arises when a model trained on one data distribution performs poorly when tested on another domain with a different distribution. Domain generalization tries to tackle this problem by producing models that perform well on unseen target domains. Several approaches have been adopted for the problem, such as training a separate model for each source domain, extracting a domain-agnostic component shared across the source domains, and semantic feature learning. Meta-learning, and specifically Model-Agnostic Meta-Learning (MAML), which has been widely adopted recently, produces models capable of adapting or generalizing to new tasks and new environments that were never encountered during training. By defining tasks as domains, the paper overcomes the problem in a model-agnostic way.
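The domains-as-tasks idea above can be sketched in code. The following is a minimal, first-order approximation of the meta-learning update (in the spirit of MAML [2] applied to domains): source domains are split into meta-train and meta-test sets, a virtual gradient step is taken on the meta-train loss, and the meta-test loss is evaluated at the adapted parameters. The toy linear-regression domains, the hyperparameter values, and the first-order simplification (no second-order gradient through the virtual step) are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift):
    # Hypothetical toy domain: same underlying slope (y = 2x),
    # domain-specific additive shift plays the role of domain bias.
    X = rng.normal(size=(32, 1))
    y = 2.0 * X[:, 0] + shift
    return X, y

def mse_loss_grad(w, X, y):
    # Mean-squared-error loss and its gradient for a linear model.
    err = X @ w - y
    loss = np.mean(err ** 2)
    grad = 2 * X.T @ err / len(y)
    return loss, grad

domains = [make_domain(s) for s in (0.0, 0.5, 1.0)]
w = np.zeros(1)                 # model parameters theta
alpha, beta, lr = 0.1, 1.0, 0.05  # assumed hyperparameters

for step in range(200):
    # Split source domains into meta-train and meta-test sets.
    meta_train, meta_test = domains[:2], domains[2:]

    # Gradient of the meta-train loss F at theta.
    g_train = np.zeros_like(w)
    for X, y in meta_train:
        _, g = mse_loss_grad(w, X, y)
        g_train += g / len(meta_train)

    # Virtual inner step: theta' = theta - alpha * grad F(theta).
    w_virtual = w - alpha * g_train

    # Gradient of the meta-test loss G evaluated at theta'.
    g_test = np.zeros_like(w)
    for X, y in meta_test:
        _, g = mse_loss_grad(w_virtual, X, y)
        g_test += g / len(meta_test)

    # First-order meta-update: theta -= lr * (grad F + beta * grad G).
    w = w - lr * (g_train + beta * g_test)
```

Because all toy domains share the slope 2, the meta-update drives the weight toward that shared, domain-general solution rather than overfitting any single domain's shift.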
Previous Work
There have been three common approaches to domain generalization. The simplest is to train a model for each source domain and estimate which model performs best on a new, unseen target domain [1]. The second approach extracts a domain-agnostic component shared across the source domains, and the third learns domain-invariant semantic features.
Method
Experiments
Results
Conclusion
Critiques
References
[1]: Xu, Z., Li, W., Niu, L., Xu, D.: Exploiting low-rank structure from latent domains for domain generalization. In: ECCV (2014)
[2]: Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML (2017)
[3]: Kokkinos, I.: Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In: CVPR (2017)
[4]: Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: ECCV (2016)