Task Understanding from Confusing Multi-task Data


Presented By

Qianlin Song, William Loh, Junyue Bai, Phoebe Choi

Introduction

Related Work

Confusing Supervised Learning

Confusing supervised learning (CSL) offers a solution to this problem of learning from confusing multi-task data, where each sample is generated by one of several tasks but the task label is not observed. A key departure from traditional supervised learning lies in the choice of risk measure. In traditional supervised learning, assuming the risk measure is mean squared error (MSE), the expected risk function is

$$ R(g) = \int_x (f(x) - g(x))^2 p(x) \; \mathrm{d}x $$

where [math]\displaystyle{ p(x) }[/math] is the prior distribution of the input variable [math]\displaystyle{ x }[/math]. In practice, the model is optimized using the empirical risk

$$ R_e(g) = \sum_{i=1}^n (y_i - g(x_i))^2 $$
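
As a concrete illustration, the short Python sketch below evaluates this empirical risk for a candidate model on toy data. The target f(x) = 2x, the candidate model, and the helper name empirical_risk are assumptions made for illustration only; they are not taken from the paper.

import numpy as np

def empirical_risk(g, xs, ys):
    # Empirical MSE risk: R_e(g) = sum_i (y_i - g(x_i))^2
    preds = np.array([g(x) for x in xs])
    return np.sum((ys - preds) ** 2)

# Toy single-task data generated by a known target f(x) = 2x (an assumed example).
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs

g = lambda x: 1.5 * x               # an imperfect candidate model
print(empirical_risk(g, xs, ys))    # strictly positive, since g differs from f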

When the data involve multiple tasks, the model should be optimized for each data point according to the task that generated it. Let [math]\displaystyle{ f_j(x) }[/math] be the target function of task [math]\displaystyle{ j }[/math]; a sketch of this data-generating setting is given below.
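
To make the setting concrete, the following sketch generates data in which each sample is produced by one of several target functions but the task label is then discarded. The specific target functions here are hypothetical placeholders, not the tasks used in the paper; the point is that only the (x, y) pairs are observed, which is why the single-function empirical risk above no longer applies.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-task target functions f_j(x) (illustrative only).
f = [np.sin, np.cos, lambda x: x ** 2]

n = 200
xs = rng.uniform(-2.0, 2.0, size=n)
tasks = rng.integers(0, len(f), size=n)              # true task index j_i of each sample
ys = np.array([f[j](x) for j, x in zip(tasks, xs)])

# In the confusing multi-task setting, only the (x_i, y_i) pairs are observed;
# the task indices are hidden, so the loss cannot simply compare g(x_i)
# against the target f_{j_i}(x_i) of the task that generated the sample.
observed = list(zip(xs, ys))
print(observed[:3])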

CSL-Net

Experiment

Conclusion

Critique

References

Su, Xin, et al. "Task Understanding from Confusing Multi-task Data."