conditional neural process


Introduction

Deep neural networks require large datasets to train effectively. One approach to mitigating this data-inefficiency problem is to learn in two phases: the first phase learns the statistics of a generic domain without committing to a specific learning task; the second phase learns a function for a specific task, but does so from only a small number of data points by exploiting the domain-wide statistics already learned.

For example, consider a data set $\{x_i, y_i\}$ of observed input-output pairs, from which we want to predict the output at new, unobserved inputs.
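
To make the two phases concrete, below is a minimal sketch of how a conditional neural process conditions on such a data set. It is written in PyTorch as an assumption about framework; the class names (Encoder, Decoder, CNP), layer sizes, and the toy data are illustrative, not the authors' implementation. An encoder maps each observed pair $(x_i, y_i)$ to a representation and aggregates them into a single vector $r$ summarising the observations; a decoder conditioned on $r$ then predicts a Gaussian over the output at each new target input.

```python
# Minimal, illustrative CNP sketch (assumed architecture and sizes; not the paper's code).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps each context pair (x_i, y_i) to a representation, then averages them."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, 64), nn.ReLU(),
            nn.Linear(64, r_dim),
        )

    def forward(self, x_ctx, y_ctx):
        # x_ctx: (batch, n_ctx, x_dim), y_ctx: (batch, n_ctx, y_dim)
        r_i = self.net(torch.cat([x_ctx, y_ctx], dim=-1))
        return r_i.mean(dim=1)  # aggregate observations into one vector r per batch element


class Decoder(nn.Module):
    """Predicts a Gaussian over y at each target x, conditioned on r."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + r_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * y_dim),  # outputs mean and raw scale
        )

    def forward(self, x_tgt, r):
        # x_tgt: (batch, n_tgt, x_dim), r: (batch, r_dim)
        r_rep = r.unsqueeze(1).expand(-1, x_tgt.size(1), -1)
        out = self.net(torch.cat([x_tgt, r_rep], dim=-1))
        mu, raw_sigma = out.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * torch.nn.functional.softplus(raw_sigma)  # keep scale positive
        return mu, sigma


class CNP(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.encoder(x_ctx, y_ctx)   # summary of the observed data
        return self.decoder(x_tgt, r)    # prediction at new inputs, conditioned on r


if __name__ == "__main__":
    # Toy usage: 10 observed points and 20 target points per task, 1-D inputs and outputs.
    model = CNP()
    x_ctx, y_ctx = torch.randn(8, 10, 1), torch.randn(8, 10, 1)
    x_tgt, y_tgt = torch.randn(8, 20, 1), torch.randn(8, 20, 1)
    mu, sigma = model(x_ctx, y_ctx, x_tgt)
    # Training maximises the predictive log-likelihood of the targets across many tasks.
    loss = -torch.distributions.Normal(mu, sigma).log_prob(y_tgt).mean()
    loss.backward()
    print(float(loss))
```

In this sketch the first phase corresponds to training the encoder and decoder weights across many tasks, while the second phase is simply a forward pass that conditions on a handful of observed points; averaging the per-point representations makes the model invariant to the order of the observations.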