Presented By

Qianlin Song, William Loh, Junyue Bai, Phoebe Choi

# Related Work: Types of machine learning problems

## Multi-task learning

Multi-task learning aims to learn multiple tasks simultaneously using a shared feature representation. By exploiting the similarities and differences between tasks, learning on one task can improve learning on another (Caruana, 1997), resulting in improved learning efficiency. Multi-task learning is used in disciplines such as computer vision, natural language processing, and reinforcement learning. However, multi-task learning requires manual task annotation, whereas this paper is interested in machine learning without a clear task definition and without manual task annotation.

## Multi-label learning

Multi-label learning aims to assign an input to a set of classes/labels. It is a generalization of multi-class classification, which assigns an input to exactly one class; in multi-label learning, an input can be assigned to more than one class. Unlike multi-task learning, multi-label learning does not consider the relationships between the different label judgments.

# Confusing Supervised Learning

## Description of the Problem

Confusing supervised learning (CSL) offers a solution to the problem of learning when the task of each sample is unknown. A major area of improvement lies in the choice of risk measure. In traditional supervised learning, assuming the risk measure is mean squared error (MSE), the expected risk functional is

$$R(g) = \int_x (f(x) - g(x))^2 p(x) \; \mathrm{d}x$$

where $p(x)$ is the prior distribution of the input variable $x$. In practice, model optimizations are performed using the empirical risk

$$R_e(g) = \sum_{i=1}^n (y_i - g(x_i))^2$$

When the problem involves several tasks, the model should optimize each data point's loss with respect to its task. Let $f_j(x)$ be the ground-truth function of task $j$. Then, for an input $x_i$ belonging to task $j$, an ideal model $g$ would predict $g(x_i) = f_j(x_i)$. With this, the risk functional of traditional supervised learning can be modified to account for multiple tasks:

$$R(g) = \int_x \sum_{j=1}^n (f_j(x) - g(x))^2 p(f_j) p(x) \; \mathrm{d}x$$

We call $(f_j(x) - g(x))^2 p(f_j)$ the confusing multiple mappings. Under this risk functional, the optimal solution is $g^*(x) = \bar{f}(x) = \sum_{j=1}^n p(f_j) f_j(x)$. However, this optimum does not depend on the specific task at hand but rather on the average of all the ground-truth functions. Therefore, for every non-trivial set of tasks where $f_u(x) \neq f_v(x)$ for some input $x$ and $u \neq v$, we have $R(g^*) > 0$, which implies that there is an unavoidable confusion risk.
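To make the confusion risk concrete, here is a minimal sketch with two hypothetical tasks sampled with equal probability: the single best predictor under the confused MSE risk is the mixture mean $\bar{f}$, and its residual risk stays strictly positive wherever the tasks disagree. All functions and priors below are illustrative assumptions.

```python
f1 = lambda x: x          # hypothetical task 1: identity
f2 = lambda x: -x         # hypothetical task 2: negation
p = [0.5, 0.5]            # prior over tasks

def f_bar(x):
    # the optimal single mapping under the confused MSE risk
    return p[0] * f1(x) + p[1] * f2(x)

def confusion_risk(x):
    # residual risk of f_bar at a single input x
    return sum(pj * (fj(x) - f_bar(x)) ** 2
               for pj, fj in zip(p, (f1, f2)))

print(f_bar(2.0))          # the mean of the two tasks, not either task
print(confusion_risk(2.0)) # strictly positive where the tasks disagree
```

At $x = 2$ the two tasks predict $2$ and $-2$, so $\bar{f}(2) = 0$ and the confusion risk is $0.5 \cdot 4 + 0.5 \cdot 4 = 4 > 0$.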

## Learning Functions of CSL

To overcome this issue, the authors introduce two types of learning functions:

• Deconfusing function — allocation of which samples come from the same task
• Mapping function — mapping relation from input to output of every learned task

Suppose there are $n$ ground-truth mappings $\{f_j : 1 \leq j \leq n\}$ that we wish to approximate with a set of mapping functions $\{g_k : 1 \leq k \leq l\}$ (in what follows, $l = n$). The authors define the deconfusing function as an indicator function $h(x, y, g_k)$ which takes a sample $(x,y)$ and determines whether the sample is assigned to task $g_k$. Under the CSL framework, the risk functional (mean squared loss) is

$$R(g,h) = \int_x \sum_{j,k} (f_j(x) - g_k(x))^2 \; h(x, f_j(x), g_k) \;p(f_j) \; p(x) \;\mathrm{d}x$$

which can be estimated empirically with

$$R_e(g,h) = \sum_{i=1}^m \sum_{k=1}^n |y_i - g_k(x_i)|^2 \cdot h(x_i, y_i, g_k)$$
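The empirical CSL risk can be sketched directly: with a hard, one-hot assignment $h$, each sample contributes its squared error only under the task it is assigned to. The mapping functions, samples, and assignment rule below are toy assumptions, not the authors' setup.

```python
def empirical_csl_risk(samples, gs, h):
    """samples: list of (x, y); gs: list of mapping functions g_k;
    h(x, y, k) -> 1 if sample (x, y) is assigned to task k, else 0."""
    return sum((y - g(x)) ** 2 * h(x, y, k)
               for x, y in samples
               for k, g in enumerate(gs))

gs = [lambda x: x, lambda x: -x]                      # two toy mappings
samples = [(1.0, 1.0), (1.0, -1.0)]                   # one sample from each task
h = lambda x, y, k: 1 if (y >= 0) == (k == 0) else 0  # a correct hard assignment here
print(empirical_csl_risk(samples, gs, h))             # zero under the correct assignment
```

With the correct assignment, every sample is matched to the mapping that generated it, so the empirical risk vanishes, which is exactly the situation Theorem 1 below formalizes.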

## Theoretical Results

This novel framework yields some theoretical results to show the viability of its construction.

Theorem 1 (Existence of Solution) With the confusing supervised learning framework, there is an optimal solution $$h^*(x, f_j(x), g_k) = \mathbb{I}[j=k]$$

$$g_k^*(x) = f_k(x)$$

for each $k=1,..., n$ that makes the expected risk function of the CSL problem zero.

Theorem 2 (Error Bound of CSL) Suppose the CSL learning framework has finite VC dimension $\tau$. Then, with probability at least $1 - \eta$, the risk measure is bounded by

$$R(\alpha) \leq R_e(\alpha) + \frac{B\epsilon(m)}{2} \left(1 + \sqrt{1 + \frac{4R_e(\alpha)}{B\epsilon(m)}}\right)$$

where $\alpha$ is the total set of parameters of the learning functions $g, h$, $B$ is the upper bound of one sample's risk, $m$ is the size of the training data, and $$\epsilon(m) = 4 \; \frac{\tau \left(\ln \frac{2m}{\tau} + 1\right) - \ln (\eta / 4)}{m}$$
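A quick numeric sketch of the Theorem 2 bound, with purely illustrative values ($\tau = 10$, $m = 1000$, $\eta = 0.05$, $B = 1$, empirical risk $0.1$), shows the expected behavior: the slack term $\epsilon(m)$ shrinks as the training set grows.

```python
import math

def epsilon(m, tau, eta):
    # the capacity term from Theorem 2
    return 4 * (tau * (math.log(2 * m / tau) + 1) - math.log(eta / 4)) / m

def risk_bound(R_e, m, tau, eta, B):
    # the upper bound on R(alpha) given empirical risk R_e
    eps = epsilon(m, tau, eta)
    return R_e + (B * eps / 2) * (1 + math.sqrt(1 + 4 * R_e / (B * eps)))

print(epsilon(1000, 10, 0.05))               # shrinks as m grows
print(risk_bound(0.1, 1000, 10, 0.05, 1.0))  # strictly above the empirical risk
```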

# CSL-Net

In this section the authors describe how to implement and train a network for CSL.

## The Structure of CSL-Net

Two neural networks, the deconfusing-net and the mapping-net, are trained to implement the two learning functions of the empirical risk. The optimization target of the training algorithm is: $$\min_{g, h} R_e = \sum_{i=1}^{m}\sum_{k=1}^{n} (y_i - g_k(x_i))^2 \cdot h(x_i, y_i; g_k)$$

The mapping-net corresponds to the function set $\{g_k\}$, where $y_k = g_k(x)$ is the output for one particular task. The deconfusing-net corresponds to the function $h$, whose input is a sample $(x,y)$ and whose output is an $n$-dimensional one-hot vector determining which task the sample $(x,y)$ should be assigned to. The core difficulty of this algorithm is that the risk function cannot be optimized by gradient back-propagation because of the one-hot constraint on the deconfusing-net's output. A softmax approximation would make the output non-one-hot, resulting in meaningless trivial solutions.

## Iterative Deconfusing Algorithm

To overcome this training difficulty, the authors divide the empirical risk minimization into two local optimization problems. In each single-network optimization step, the parameters of one network are updated while the parameters of the other remain fixed. With one network's parameters fixed, the other can be optimized by the usual gradient-descent training of neural networks.

Training of Mapping-Net: With the function $h$ from the deconfusing-net fixed, the goal is to train every mapping function $g_k$ on its assigned samples $(x_i^k, y_i^k)$. The optimization problem becomes: $\displaystyle \min_{g_k} L_{map}(g_k) = \sum_{i=1}^{m_k} \mid y_i^k - g_k(x_i^k)\mid^2$. The back-propagation algorithm can be applied to solve this optimization problem.

Training of Deconfusing-Net: The task allocation is re-evaluated during the training phase while the parameters of the mapping-net remain fixed. To minimize the original risk, every sample $(x, y)$ is assigned to the $g_k$ whose output is closest to the label $y$ among all $k$. The mapping-net thus provides a temporary solution for the deconfusing-net: $\hat{h}(x_i, y_i) = \displaystyle\arg\min_{k} \mid y_i - g_k(x_i)\mid^2$. The optimization becomes: $\displaystyle \min_{h} L_{dec}(h) = \sum_{i=1}^{m} \mid {h}(x_i, y_i) - \hat{h}(x_i, y_i)\mid^2$. Similarly, this optimization problem can be solved by updating the deconfusing-net with back-propagation.

The two optimization stages are carried out alternately until the solution converges.
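As a toy illustration of the alternating scheme above (not the authors' implementation), the sketch below replaces the two networks with linear mapping functions $g_k(x) = w_k x$ fit in closed form, and uses the hard nearest-mapping assignment as the temporary deconfusing solution. The data-generating tasks and initial slopes are assumptions chosen for illustration.

```python
import random

random.seed(0)
# confusing data: each x is labelled by one of two hidden tasks, f1(x)=2x or f2(x)=-x
data = [(x, (2 * x if random.random() < 0.5 else -x))
        for x in [random.uniform(0.5, 2.0) for _ in range(200)]]

w = [1.0, -2.0]                       # initial slopes for g_1, g_2

for _ in range(10):                   # alternate until (approximate) convergence
    # "deconfusing" step: assign each sample to the closest mapping function
    assign = [min(range(2), key=lambda k: (y - w[k] * x) ** 2) for x, y in data]
    # "mapping" step: refit each g_k on its assigned samples (least squares)
    for k in range(2):
        pts = [(x, y) for (x, y), a in zip(data, assign) if a == k]
        if pts:
            w[k] = sum(x * y for x, y in pts) / sum(x * x for x, _ in pts)

print(sorted(w))                      # slopes recover roughly [-1.0, 2.0]
```

Note the structural parallel to k-means: a hard assignment step alternating with a per-task refitting step, which is why a warm-up or good initialization matters in practice.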

# Experiment

## Setup

Three data sets are used to compare CSL to existing methods: one function regression task and two image classification tasks.

Function Regression: The function regression data comes in the form of $(x_i,y_i),i=1,...,m$ pairs. However, unlike typical regression problems, there are multiple mapping functions $f_j(x),j=1,...,n$, so the goal is to recover both the mapping functions $f_j$ and the mapping function that corresponds to each of the $m$ observations. Three scalar-valued functions of a scalar input that intersect each other at several points were chosen as the different tasks.

Colorful-MNIST: The first image classification data set consists of MNIST digit data that has been colored. Each observation in this modified set consists of a colored image ($x_i$) and either the color or the digit it represents ($y_i$). The goal is to recover the classification task ("color" or "digit") for each observation and to construct the two classifiers for both tasks.
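A hypothetical sketch (not the authors' code) of how such a confusing sample can be generated: each image receives a color, and the label $y$ is drawn from either the color task or the digit task, without recording which task produced it.

```python
import random

COLORS = ["red", "green", "blue"]

def make_confusing_sample(digit_image, digit_label):
    color = random.choice(COLORS)
    x = {"pixels": digit_image, "color": color}   # the colored input image
    # the task identity is hidden: y is either the color or the digit
    y = color if random.random() < 0.5 else digit_label
    return x, y

random.seed(1)
x, y = make_confusing_sample([[0] * 28] * 28, 7)
print(y in COLORS or y == 7)   # the label always comes from one of the two tasks
```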

Kaggle Fashion Product: This data set has more observations than the Colorful-MNIST data and consists of pictures labelled with one of the “Gender”, “Category”, or “Color” attributes of the clothing item.

## Use of Pre-Trained CNN Feature Layers

In the Kaggle Fashion Product experiment, each of the three classification algorithms $f_j$ consists of fully-connected layers attached to feature-identifying layers from pre-trained convolutional neural networks.

## Metrics of Confusing Supervised Learning

There are two measures of accuracy used to evaluate and compare CSL to other methods, corresponding respectively to the accuracy of the task labelling and the accuracy of the learned mapping function.

Label Assignment Accuracy: $\alpha_T(j)$ is the fraction of observations on which the learned deconfusing function $h$ agrees with the human task assignment $\tilde h$ about whether each observation "is" or "is not" in task $j$.

$$\alpha_T(j) = \operatorname{max}_k\frac{1}{m}\sum_{i=1}^m I[h(x_i,y_i;g_k) = \tilde h(x_i,y_i;f_j)]$$

The max over $k$ is taken because we need to determine which learned task corresponds to which ground-truth task.
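A toy sketch of $\alpha_T(j)$ on hypothetical hard assignments (task indices): the max over $k$ matches each learned task to the ground-truth task it best aligns with, so a mere permutation of the task labels still scores perfectly.

```python
def alpha_T(h_pred, h_true, j, n_tasks):
    """h_pred[i]: learned task index of sample i; h_true[i]: human task index."""
    m = len(h_pred)
    return max(
        sum((h_pred[i] == k) == (h_true[i] == j) for i in range(m)) / m
        for k in range(n_tasks)
    )

h_true = [0, 0, 1, 1]        # human task assignment
h_pred = [1, 1, 0, 0]        # learned tasks are a permutation of the true ones
print(alpha_T(h_pred, h_true, j=0, n_tasks=2))   # perfect up to relabelling
```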

Mapping Function Accuracy: $\alpha_L(j)$ again chooses $g_k$, the learned mapping function closest to the ground truth of task $j$, and measures its average relative accuracy against the ground truth $f_j$ across all $m$ observations.

$$\alpha_L(j) = \operatorname{max}_k\frac{1}{m}\sum_{i=1}^m 1-\dfrac{|g_k(x_i)-f_j(x_i)|}{|f_j(x_i)|}$$
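The mapping accuracy can be sketched the same way: for each candidate $g_k$ we average $1 - |g_k(x) - f_j(x)|/|f_j(x)|$ over the inputs and take the best-matching $k$. The functions and evaluation points below are illustrative assumptions.

```python
def alpha_L(gs, f_j, xs):
    """gs: learned mapping functions; f_j: ground truth of task j; xs: inputs."""
    m = len(xs)
    return max(
        sum(1 - abs(g(x) - f_j(x)) / abs(f_j(x)) for x in xs) / m
        for g in gs
    )

f_j = lambda x: 2 * x
gs = [lambda x: -x, lambda x: 2 * x]   # the second learned mapping matches f_j
xs = [0.5, 1.0, 1.5]                   # nonzero inputs (the metric divides by |f_j(x)|)
print(alpha_L(gs, f_j, xs))            # the best-matching k gives full accuracy
```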

## Results

Given confusing data, CSL performs better than traditional supervised learning methods, Pseudo-Label (Lee, 2013), and SMiLE (Tan et al., 2017). This is demonstrated by CSL's $\alpha_L$ scores of around 95%, compared to $\alpha_L$ scores of under 50% for the other methods. This supports the assertion that traditional methods only learn the mean of all the ground-truth mapping functions when presented with confusing data.

Function Regression: In order to partition the observations into the correct tasks, a 5-shot warm-up was used.

Image Classification: Visualizations created through spectral embedding confirm the task-labelling proficiency of the deconfusing network $h$.

The classification and function prediction accuracy of CSL are comparable to supervised learning programs that have been given access to the ground-truth labels.

## Application of Multi-label Learning

CSL also achieved better accuracy than traditional supervised learning methods, Pseudo-Label (Lee, 2013), and SMiLE (Tan et al., 2017) when presented with multi-labelled data $(x_i,y_i)$, where $y_i$ is an $n$-dimensional vector containing the correct output for each task.

# Conclusion

This paper proposes the CSL method for tackling the multi-task learning problem without manual task annotations in the input data. The model obtains a basic concept of the tasks by differentiating between multiple mappings. The paper also argues that the CSL method is an important step in moving from Narrow AI towards General AI for multi-task learning.

# Critique

The reported classification accuracy of CSL was compared against algorithms that were not designed to deal with confusing data and that do not first classify the task of each observation.

Human task annotation is also imperfect, so one additional application of CSL may be to flag task annotation errors made by humans, such as when sorting comments on items sold by online retailers; concerned customers in particular may not correctly label their comments as "refund", "order didn't arrive", "order damaged", "how good the item is", etc.