Conditional Neural Process


Introduction

To train a model effectively, deep neural networks require large datasets. One approach to mitigating this data-inefficiency problem is to learn in two phases: the first phase learns the statistics of a generic domain without committing to a specific learning task; the second phase learns a function for a specific task, but does so using only a small number of data points by exploiting the domain-wide statistics already learned.

For example, consider a data set <math display="inline">\{x_i, y_i\}</math> for <math display="inline">i = 0, \ldots, n-1</math>.
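To make the two-phase idea concrete, the sketch below splits such a data set into a small set of observed context pairs and a set of target inputs, encodes each context pair, aggregates the encodings with a mean, and decodes predictions for the targets. This is a minimal, simplified sketch: the class name, layer sizes, and the deterministic output are illustrative assumptions, not the exact architecture of the paper.

<pre>
import torch
import torch.nn as nn

class ConditionalModel(nn.Module):
    # Minimal sketch: condition predictions for target inputs on a small
    # set of observed (x_i, y_i) context pairs. Sizes are illustrative.
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        # Phase-1 component: encodes each context pair into a representation.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim),
        )
        # Phase-2 component: predicts y at a target x, given the aggregated
        # representation of the whole context set.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, y_dim),
        )

    def forward(self, x_context, y_context, x_target):
        # Encode every context pair, then aggregate with a mean so the
        # result is invariant to the ordering of the context points.
        r_i = self.encoder(torch.cat([x_context, y_context], dim=-1))  # (n_ctx, r_dim)
        r = r_i.mean(dim=0)                                            # (r_dim,)
        r = r.unsqueeze(0).expand(x_target.size(0), -1)                # (n_tgt, r_dim)
        return self.decoder(torch.cat([r, x_target], dim=-1))          # (n_tgt, y_dim)

# Usage: split a data set {x_i, y_i}, i = 0, ..., n-1, into context and targets.
n, m = 100, 5                        # n points total, m observed context points
x = torch.rand(n, 1)
y = torch.sin(6 * x)                 # toy task; any underlying function works
model = ConditionalModel()
y_pred = model(x[:m], y[:m], x[m:])  # predict the remaining n - m targets
</pre>

The mean aggregation is the key design choice in this sketch: it produces a fixed-size summary of the context set regardless of how many points are observed, which is what lets the second phase work from only a handful of data points.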