Presented By

Qianlin Song, William Loh, Junyue Bai, Phoebe Choi

# Confusing Supervised Learning

Confusing supervised learning (CSL) offers a solution to this issue. One major improvement lies in the choice of risk measure. In traditional supervised learning, taking the risk measure to be mean squared error (MSE), the expected risk function is

$$R(g) = \int_x (f(x) - g(x))^2 p(x) \; \mathrm{d}x$$

where $p(x)$ is the prior distribution of the input variable $x$. In practice, model optimizations are performed using the empirical risk

$$R_e(g) = \frac{1}{n} \sum_{i=1}^n (y_i - g(x_i))^2$$
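As a concrete illustration, the empirical MSE risk above can be computed directly from a sample. This is a minimal sketch (the model `g`, the target $f(x) = 2x$, and the sample points are assumptions chosen for illustration), using the mean-normalized form of the empirical risk:

```python
import numpy as np

def empirical_risk(g, xs, ys):
    """Empirical MSE risk R_e(g): average squared error of g over the sample."""
    preds = np.array([g(x) for x in xs])
    return float(np.mean((np.asarray(ys) - preds) ** 2))

# Illustrative sample from a target f(x) = 2x, scored with a slightly-off model
xs = np.linspace(0.0, 1.0, 5)
ys = 2.0 * xs
risk = empirical_risk(lambda x: 2.1 * x, xs, ys)
```

Minimizing this quantity over a model class is the empirical counterpart of minimizing the expected risk $R(g)$.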

When the data mix several tasks, the model should be optimized for each data point according to the task it comes from. Let $f_j(x)$ denote the target function for task $j$.
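The per-task setup can be sketched as follows. This is a hypothetical illustration, not the CSL method itself: the two target functions $f_0(x) = 2x$ and $f_1(x) = x^2$ and the assumption that each sample's task index is known are chosen only to make the per-task risk concrete (in the confusing multi-task setting, task labels would not be observed):

```python
import numpy as np

# Hypothetical target functions f_j(x) for two tasks (illustrative only)
targets = {
    0: lambda x: 2.0 * x,   # task 0: f_0(x) = 2x
    1: lambda x: x ** 2,    # task 1: f_1(x) = x^2
}

def per_task_risk(g, xs, task_ids):
    """Empirical MSE risk when each point x_i is labelled by its own task's
    target f_{j_i}(x_i), so a single model g is scored against mixed tasks."""
    errs = [(targets[j](x) - g(x)) ** 2 for x, j in zip(xs, task_ids)]
    return float(np.mean(errs))

# A model matching task 0 exactly still incurs risk on task-1 points
risk = per_task_risk(lambda x: 2.0 * x, xs=[0.5, 0.5], task_ids=[0, 1])
```

The nonzero risk on the second point shows why a single-function model cannot fit both targets at once, motivating a risk measure that accounts for the task of each data point.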