
# Experiment

## Setup

Three data sets are used to compare CSL to existing methods: one function regression task and two image classification tasks.

Function Regression: The function regression data comes in the form of $(x_i,y_i),\ i=1,\dots,m$ pairs. However, unlike typical regression problems, there are multiple mapping functions $f_j(x),\ j=1,\dots,n$, so the goal is to recover both the mapping functions $f_j$ and the assignment of each of the $m$ observations to its mapping function. Three scalar-valued, scalar-input functions that intersect each other at several points were chosen as the different tasks.
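The setup above can be sketched in a few lines; the three functions, sample count, and input range here are illustrative stand-ins, not the ones used in the paper:

```python
import numpy as np

# Hypothetical "confusing" regression data: each observation is generated by
# one of n mapping functions, but the task assignment is hidden from the learner.
fs = [np.sin, np.cos, lambda x: 0.5 * x]  # n = 3 candidate mapping functions (illustrative)

rng = np.random.default_rng(0)
m = 300
x = rng.uniform(-2.0, 2.0, size=m)
task = rng.integers(0, len(fs), size=m)                 # hidden task label per observation
y = np.array([fs[j](xi) for j, xi in zip(task, x)])     # y_i = f_{task_i}(x_i)

# The learner only sees (x, y); the goal is to recover both `fs` and `task`.
print(x.shape, y.shape)  # (300,) (300,)
```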

Colorful-MNIST: The first image classification data set consists of MNIST digits that have been colored. Each observation in this modified set consists of a colored image ($x_i$) and either the color or the digit it represents ($y_i$). The goal is to recover the classification task ("color" or "digit") for each observation and to construct classifiers for both tasks.

Kaggle Fashion Product: This data set has more observations than Colorful-MNIST and consists of pictures labelled with one of the "Gender", "Category", or "Color" of the clothing item.

## Use of Pre-Trained CNN Feature Layers

In the Kaggle Fashion Product experiment, each of the three classification functions $f_j$ consists of fully-connected layers attached to feature-identifying layers from a pre-trained Convolutional Neural Network.
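The frozen-backbone arrangement can be sketched schematically; here a fixed random projection stands in for the pre-trained convolutional feature layers (which are not updated), and only the attached fully-connected head is fit. All names, shapes, and the least-squares fit are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for frozen, pre-trained CNN feature layers: a fixed mapping from
# raw pixels to a feature vector whose weights are never trained here.
W_backbone = rng.normal(size=(784, 64))
def features(pixels):
    return np.tanh(pixels @ W_backbone)

# Only the attached fully-connected head is trained, here via least squares.
X = rng.normal(size=(200, 784))   # fake image batch (flattened pixels)
Y = rng.normal(size=(200, 3))     # fake targets for one task f_j
F = features(X)
W_head, *_ = np.linalg.lstsq(F, Y, rcond=None)

preds = features(X) @ W_head      # full classifier f_j = head ∘ frozen backbone
print(preds.shape)  # (200, 3)
```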

## Metrics of Confusing Supervised Learning

There are two measures of accuracy used to evaluate and compare CSL to other methods, corresponding respectively to the accuracy of the task labelling and the accuracy of the learned mapping function.

Label Assignment Accuracy: $\alpha_T(j)$ is the fraction of observations on which the learned deconfusing function $h$ agrees with the human task assignment $\tilde h$ about whether each observation is or is not in task $j$.

$$\alpha_T(j) = \max_k\frac{1}{m}\sum_{i=1}^m I\left[h(x_i,y_i;f_k) = \tilde h(x_i,y_i;f_j)\right]$$

The max over $k$ is taken because we need to determine which learned task corresponds to which ground-truth task.
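The metric can be sketched directly from the formula; the function name and the toy assignments below are illustrative:

```python
import numpy as np

# Sketch of the label-assignment accuracy alpha_T(j): per-observation
# agreement between the learned assignment h and a reference assignment
# h-tilde on membership in task j, maximised over the matching index k.
def label_assignment_accuracy(h_pred, h_true, j, n_tasks):
    """h_pred, h_true: length-m arrays of task indices per observation."""
    agree = lambda k: np.mean((h_pred == k) == (h_true == j))
    return max(agree(k) for k in range(n_tasks))

h_true = np.array([0, 0, 1, 1, 2, 2])
h_pred = np.array([2, 2, 0, 0, 1, 1])   # same partition, permuted labels
print(label_assignment_accuracy(h_pred, h_true, j=0, n_tasks=3))  # 1.0
```

The max over $k$ is what makes the permuted labelling above score perfectly: learned task 2 is matched to ground-truth task 0.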

Mapping Function Accuracy: $\alpha_L(j)$ again chooses $g_k$, the learned mapping function closest to the ground truth of task $j$, and measures its average relative accuracy against the ground-truth function $f_j$ across all $m$ observations.

$$\alpha_L(j) = \max_k\frac{1}{m}\sum_{i=1}^m \left(1-\dfrac{|g_k(x_i)-f_j(x_i)|}{|f_j(x_i)|}\right)$$
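This metric can likewise be sketched from the formula; the candidate functions, evaluation points, and function names are illustrative (the inputs are kept away from zero so the relative error is well defined):

```python
import numpy as np

# Sketch of the mapping-function accuracy alpha_L(j): mean relative accuracy
# of the best-matching learned function g_k against the ground truth f_j.
def mapping_function_accuracy(gs, f_j, xs):
    def acc(g):
        return np.mean(1 - np.abs(g(xs) - f_j(xs)) / np.abs(f_j(xs)))
    return max(acc(g) for g in gs)

xs = np.linspace(1.0, 2.0, 50)               # |f_j(x)| stays away from zero
f_j = lambda x: x**2                          # illustrative ground truth
gs = [np.sin, lambda x: x**2 + 0.01]          # illustrative learned candidates
score = mapping_function_accuracy(gs, f_j, xs)
print(score > 0.99)  # the near-perfect candidate dominates the max
```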

## Results

Given confusing data, CSL performs better than traditional supervised learning methods, Pseudo-Label (Lee, 2013), and SMiLE (Tan et al., 2017). This is demonstrated by CSL's $\alpha_L$ scores of around 95%, compared to $\alpha_L$ scores under 50% for the other methods. This supports the assertion that, when presented with confusing data, traditional methods learn only the mean of the ground-truth mapping functions.

Function Regression: To partition the observations into the correct tasks, a 5-shot warm-up was used.

Image Classification: Visualizations created through spectral embedding confirm the task-labelling proficiency of the deconfusing neural network $h$.

The classification and function-prediction accuracy of CSL is comparable to that of supervised learning models given access to the ground-truth task labels.

## Application of Multi-label Learning

CSL also achieved better accuracy than traditional supervised learning methods, Pseudo-Label (Lee, 2013), and SMiLE (Tan et al., 2017) when presented with multi-labelled data $(x_i,y_i)$, where $y_i$ is an $n$-dimensional vector containing the correct output for each task.
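One way to picture the multi-label setting (an assumption about the data layout, not stated explicitly above) is that each pair $(x_i, y_i)$ with an $n$-long $y_i$ can be flattened into $n$ single-output observations, recovering the confusing-data form; the function name and toy arrays are illustrative:

```python
import numpy as np

# Illustrative sketch (assumed protocol, not from the text): flatten each
# multi-labelled pair (x_i, y_i), where y_i holds one correct output per
# task, into n separate single-output samples sharing the same input x_i.
def flatten_multilabel(X, Y):
    """X: (m, d) inputs; Y: (m, n) per-task outputs -> list of (x, y) pairs."""
    return [(x, y) for x, ys in zip(X, Y) for y in ys]

X = np.arange(4).reshape(2, 2)                 # m = 2 observations
Y = np.array([[10, 11, 12], [20, 21, 22]])     # n = 3 tasks
pairs = flatten_multilabel(X, Y)
print(len(pairs))  # 6
```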