Conditional Neural Process
Introduction
To train a model effectively, deep neural networks require large datasets. One approach to mitigating this data-inefficiency problem is to learn in two phases: the first phase learns the statistics of a generic domain without committing to a specific learning task; the second phase learns a function for a specific task, but does so using only a small number of data points by exploiting the domain-wide statistics already learned.
For example, consider a data set [math]\{(x_i, y_i)\}[/math] for [math]i = 0, \dots, n-1[/math].
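To make the two-phase idea concrete, below is a minimal sketch of a conditional-neural-process-style model in PyTorch. The class name <code>CNP</code>, the layer sizes, and the variance bounding are illustrative assumptions, not the paper's exact configuration. An encoder maps each observed pair [math](x_i, y_i)[/math] to a representation, the representations are averaged into a single summary, and a decoder predicts a mean and variance at new inputs.

<syntaxhighlight lang="python">
# A minimal sketch of the two-phase, conditional-neural-process idea
# described above, written in PyTorch. All names and layer sizes here
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128):
        super().__init__()
        # Encoder h: maps each context pair (x_i, y_i) to a representation r_i.
        # Its weights carry the domain-wide statistics (phase one, learned by
        # training across many tasks from the domain).
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim))
        # Decoder g: maps (x_target, r) to a predictive mean and variance.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, 2 * y_dim))

    def forward(self, x_context, y_context, x_target):
        # Phase two: adapt to a specific task from a few (x_i, y_i) pairs
        # by averaging their encodings; no gradient updates are required.
        r_i = self.encoder(torch.cat([x_context, y_context], dim=-1))
        r = r_i.mean(dim=0, keepdim=True)          # order-invariant summary
        r = r.expand(x_target.size(0), -1)         # one copy per query point
        out = self.decoder(torch.cat([x_target, r], dim=-1))
        mu, log_sigma = out.chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * F.softplus(log_sigma)  # keep variance positive and bounded below
        return mu, sigma

# Example: n = 5 observed pairs {(x_i, y_i)}, i = 0, ..., n-1, and 10 query points.
model = CNP()
x_c, y_c = torch.randn(5, 1), torch.randn(5, 1)
mu, sigma = model(x_c, y_c, torch.linspace(-2, 2, 10).unsqueeze(-1))
</syntaxhighlight>

Note the design choice in the aggregation step: averaging the per-pair encodings makes the task summary invariant to the order and number of observed points, which is what allows the second phase to condition on only a handful of observations.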