Multi-Task Feature Learning

Introduction

It has been shown, both empirically and theoretically, that learning multiple related tasks simultaneously often significantly improves performance compared to learning each task independently. Learning multiple related tasks simultaneously is especially beneficial when only a few data points are available per task; the benefit comes from pooling data across many related tasks. One way in which tasks can be related is that they share a common underlying representation; for example, people make product choices (e.g., of books or music CDs) using a common set of features describing these products. In this paper, the authors explore a way to learn a low-dimensional representation that is shared across multiple related tasks.
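
To make the idea concrete, below is a minimal, self-contained Python sketch. It is not the authors' algorithm; the synthetic data, the simple two-stage subspace estimate, and all parameter values are assumptions made purely for illustration. It shows that when the weight vectors of many tasks lie in a common low-dimensional subspace, re-fitting each task inside a subspace estimated from the pooled per-task solutions can give lower test error than fitting each task independently on its few training points.

# Illustrative sketch only: synthetic tasks whose weight vectors share a
# k-dimensional subspace, comparing independent ridge regression per task
# against a simple two-stage shared-representation approach.
import numpy as np

rng = np.random.default_rng(0)

d, k, T, n_train, n_test = 30, 3, 20, 15, 200      # few training points per task
U_true = np.linalg.qr(rng.standard_normal((d, k)))[0]   # shared low-dim basis
W_true = U_true @ rng.standard_normal((k, T))            # task weights w_t = U a_t

def make_data(n):
    # One design matrix and noisy response vector per task.
    X = [rng.standard_normal((n, d)) for _ in range(T)]
    Y = [X[t] @ W_true[:, t] + 0.1 * rng.standard_normal(n) for t in range(T)]
    return X, Y

Xtr, Ytr = make_data(n_train)
Xte, Yte = make_data(n_test)

def ridge(X, y, lam=1.0):
    # Closed-form ridge regression solution.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# 1) Independent learning: one ridge regression per task.
W_indep = np.column_stack([ridge(Xtr[t], Ytr[t]) for t in range(T)])

# 2) Shared representation: estimate a common k-dimensional subspace from the
#    pooled per-task solutions, then re-fit each task inside that subspace.
U_est = np.linalg.svd(W_indep, full_matrices=False)[0][:, :k]
W_shared = np.column_stack(
    [U_est @ ridge(Xtr[t] @ U_est, Ytr[t]) for t in range(T)]
)

def test_mse(W):
    # Average squared prediction error over all tasks on held-out data.
    return np.mean([np.mean((Xte[t] @ W[:, t] - Yte[t]) ** 2) for t in range(T)])

print("independent ridge, test MSE:   ", test_mse(W_indep))
print("shared representation, test MSE:", test_mse(W_shared))

Running the sketch typically shows the shared-representation fit doing better, because the subspace estimate borrows strength from all tasks while each independent fit sees only its own small sample.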

Learning sparse multi-task representations