# STAT 441/841 / CM 463/763 - Tuesday, 2011/09/20

## Wiki Course Notes

Students will need to contribute to the wiki for 20% of their grade. Access it via wikicoursenote.com: go to the editor sign-up, use your UW userid for your account name, and register with your UW email.

Primary (10%): post a draft of the lecture notes within 48 hours. You will need to do this once or twice, depending on class size.

Secondary (10%): make improvements to the notes for at least 60% of the lectures. More than half of your contributions should be technical rather than editorial. There will be a spreadsheet where students can indicate what they've done and when. The instructor will conduct random spot checks to ensure that students have contributed what they claim.

## Classification (Lecture: Sep. 20, 2011)

### Introduction

Machine learning (ML) is, in general, an artificial intelligence approach to building and training a model that recognizes the pattern or underlying mapping of a system from a set of training examples consisting of input and output patterns. Unlike classical statistics, where inference is made from small datasets, machine learning involves drawing inference from amounts of data far too large to be reasonably examined by hand.

In machine learning, pattern recognition is the assignment of some sort of output value (or label) to a given input value (or instance), according to some specific algorithm. The approach of using examples to produce the output labels is known as learning methodology. When the underlying function from inputs to outputs exists, it is referred to as the target function. The estimate of the target function which is learned or output by the learning algorithm is known as the solution of the learning problem. In the case of classification, this function is referred to as the decision function.

In the broadest sense, any method that incorporates information from training samples in the design of a classifier employs learning. Learning tasks can be classified along different dimensions. One important dimension is the distinction between supervised and unsupervised learning. In supervised learning, a category label is provided for each pattern in the training set; the trained system then generalizes to new data samples. In unsupervised learning, on the other hand, the training data is not labeled, and the system forms clusters or natural groupings of the input patterns based on some measure of similarity; these groupings can then be used to determine output values for new data instances.

The first category is known as pattern classification and the second one as clustering. Pattern classification is the main focus in this course.

Classification problem formulation: Suppose that we are given $n$ observations. Each observation consists of a pair: a vector $\mathbf{x}_i \in \mathbb{R}^d, \quad i=1,...,n$, and the associated label $y_i$, where $\mathbf{x}_i = (x_{i1}, x_{i2}, ..., x_{id}) \in \mathcal{X} \subset \mathbb{R}^d$ and $y_i$ belongs to some finite set $\mathcal{Y}$.

The classification task is to find a function $f:\mathcal{X}\mapsto \mathcal{Y}$ which maps input data points to target values (i.e., class labels). The function $f(\mathbf{x},\boldsymbol{\theta})$ is defined by a set of parameters $\boldsymbol{\theta}$, and the goal is to train the classifier so that, among all possible mappings with different parameters, the obtained decision boundary gives the minimum classification error.

### Definitions

The true error rate for classifier $h$ is the error with respect to the unknown underlying distribution when predicting a discrete random variable Y from a given input X.

$L(h) = P(h(X) \neq Y )$

The empirical error rate is the error of our classification function $h(x)$ on a given dataset with known outputs (e.g. training data, test data)

$\hat{L}_n(h) = (1/n) \sum_{i=1}^{n} \mathbf{I}(h(X_i) \neq Y_i)$ where $h$ is a classifier and $\mathbf{I}(\cdot)$ is an indicator function. The indicator function is defined by

$\mathbf{I}(x) = \begin{cases} 1 & \text{if } x \text{ is true} \\ 0 & \text{if } x \text{ is false} \end{cases}$

So in this case, $\mathbf{I}(h(X_i)\neq Y_i) = \begin{cases} 1 & \text{if } h(X_i)\neq Y_i \text{ (i.e. misclassification)} \\ 0 & \text{if } h(X_i)=Y_i \text{ (i.e. classified properly)} \end{cases}$

For example, suppose we have 100 new data points with known (true) labels

$X_1, ..., X_{100}$ with true labels $y_1, ..., y_{100}$

To calculate the empirical error, we count how many times our function $h$ classifies incorrectly ($h(X_i)$ does not match $y_i$) and divide by $n=100$.
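The calculation above can be sketched in a few lines of Python; the threshold classifier and the data below are invented toy values, not anything from the course.

```python
# A minimal sketch of the empirical error rate.
def empirical_error(h, X, y):
    """Fraction of points where h(x) disagrees with the true label y."""
    n = len(X)
    return sum(1 for x_i, y_i in zip(X, y) if h(x_i) != y_i) / n

h = lambda x: 1 if x > 0.5 else 0   # hypothetical threshold classifier
X = [0.1, 0.4, 0.6, 0.9]
y = [0, 1, 1, 1]
print(empirical_error(h, X, y))      # one mistake out of four -> 0.25
```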

### Bayes Classifier

The principle of the Bayes Classifier is to calculate the posterior probability of a given object from its prior probability via Bayes' Rule, and then assign the object to the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841#Bayes_Classifier </ref>.

First recall Bayes' Rule, in the format $P(Y|X) = \frac{P(X|Y) P(Y)} {P(X)}$

$P(Y|X)$: posterior, probability of $Y$ given $X$

$P(X|Y)$: likelihood, probability of $X$ being generated by $Y$

$P(Y)$: prior, probability of $Y$ being selected

$P(X)$: marginal, probability of obtaining $X$

We will start with the simplest case: $\mathcal{Y} = \{0,1\}$

$r(x) = P(Y=1|X=x) = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x)} = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1) + P(X=x|Y=0) P(Y=0)}$
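As a toy numeric illustration of this formula (the likelihood and prior values below are invented), $r(x)$ can be computed directly:

```python
# Sketch of r(x) = P(Y=1 | X=x) for the two-class case, assuming the
# likelihoods P(X=x|Y=y) and the prior P(Y=1) are known.
def posterior(lik1, lik0, prior1):
    prior0 = 1 - prior1
    return (lik1 * prior1) / (lik1 * prior1 + lik0 * prior0)

r = posterior(lik1=0.6, lik0=0.2, prior1=0.5)
print(r)   # 0.75, so the Bayes classifier would output label 1
```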

Bayes' rule can be approached by computing either one of the following:

1) The posterior: $\ P(Y=1|X=x)$ and $\ P(Y=0|X=x)$

2) The likelihood: $\ P(X=x|Y=1)$ and $\ P(X=x|Y=0)$

The former reflects a Bayesian approach. The Bayesian approach uses previous beliefs and observed data (e.g., the random variable $\ X$) to determine the probability distribution of the parameter of interest (e.g., the random variable $\ Y$). The probability, according to Bayesians, is a degree of belief in the parameter of interest taking on a particular value (e.g., $\ Y=1$), given a particular observation (e.g., $\ X=x$). Historically, the difficulty in this approach lies with determining the posterior distribution. However, more recent methods such as Markov Chain Monte Carlo (MCMC) allow the Bayesian approach to be implemented <ref name="PCAustin">P. C. Austin, C. D. Naylor, and J. V. Tu, "A comparison of a Bayesian vs. a frequentist method for profiling hospital performance," Journal of Evaluation in Clinical Practice, 2001</ref>.

The latter reflects a Frequentist approach. The Frequentist approach assumes that the probability distribution (including the mean, variance, etc.) is fixed for the parameter of interest (e.g., the variable $\ Y$, which is not random). The observed data (e.g., the random variable $\ X$) is simply a sampling of a far larger population of possible observations. Thus, a certain repeatability or frequency is expected in the observed data. If it were possible to make an infinite number of observations, then the true probability distribution of the parameter of interest can be found. In general, frequentists use a technique called hypothesis testing to compare a null hypothesis (e.g. an assumption that the mean of the probability distribution is $\ \mu_0$) to an alternative hypothesis (e.g. assuming that the mean of the probability distribution is larger than $\ \mu_0$) <ref name="PCAustin"/>. For more information on hypothesis testing see <ref>R. Levy, "Frequency hypothesis testing, and contingency tables" class notes for LING251, Department of Linguistics, University of California, 2007. Available: http://idiom.ucsd.edu/~rlevy/lign251/fall2007/lecture_8.pdf </ref>.

There was some class discussion on which approach should be used. Both the ease of computation and the validity of both approaches were discussed. A main point that was brought up in class is that Frequentists consider X to be a random variable, but they do not consider Y to be a random variable because it has to take on one of the values from a fixed set (in the above case it would be either 0 or 1 and there is only one correct label for a given value X=x). Thus, from a Frequentist's perspective it does not make sense to talk about the probability of Y. This is actually a grey area and sometimes Bayesians and Frequentists use each others' approaches. So using Bayes' rule doesn't necessarily mean you're a Bayesian. Overall, the question remains unresolved.

The Bayes Classifier uses $\ P(Y=1|X=x)$

$P(Y=1|X=x) = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1) + P(X=x|Y=0) P(Y=0)}$

$P(Y=1)$: the prior, probability of $Y$ taking the chosen value

denominator: equal to $P(X=x)$ by the law of total probability over $Y$; it normalizes the posterior

$h(x) = \begin{cases} 1 & \text{if } \hat{r}(x) \gt 1/2 \\ 0 & \text{otherwise} \end{cases}$

The set $\mathcal{D}(h) = \{ x : P(Y=1|X=x) = P(Y=0|X=x) \}$ defines the decision boundary.

$h^*(x) = \begin{cases} 1 & \text{if } P(Y=1|X=x) \gt P(Y=0|X=x) \\ 0 & \text{otherwise} \end{cases}$

Theorem: The Bayes Classifier is optimal, i.e., if $h$ is any other classification rule, then $L(h^*) \leq L(h)$

Proof: Consider any classifier $h$. We can express the error rate as

$P( \{h(X) \ne Y \} ) = E_{X,Y} [ \mathbf{1}_{\{h(X) \ne Y \}} ] = E_X \left[ E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] \right]$

To minimize this last expression, it suffices to minimize the inner expectation. Expanding this expectation:

$E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] = \sum_{y \in Supp(Y)} P( Y = y | X) \mathbf{1}_{\{h(X) \ne y \} }$

which, in the two-class case, simplifies to

$= P( Y = 0 | X) \mathbf{1}_{\{h(X) \ne 0 \} } + P( Y = 1 | X) \mathbf{1}_{\{h(X) \ne 1 \} }$
$= (1-r(X)) \mathbf{1}_{\{h(X) \ne 0 \} } + r(X)\mathbf{1}_{\{h(X) \ne 1 \} }$

where $r(x)$ is defined as above. We should choose $h(X)$ to be the label that minimizes this sum. If $r(X)\gt 1/2$, then $r(X)\gt 1-r(X)$, so setting $h(X) = 1$ leaves only the smaller term $1-r(X)$; otherwise setting $h(X) = 0$ leaves $r(X)$. This is exactly the rule $h^*$, so the Bayes classifier is the optimal classifier.

Why then do we need other classification methods? Because the densities of $X$ are typically unknown, i.e., $f_k(x)$ and/or $\pi_k$ are unknown.

$P(Y=k|X=x) = \frac{P(X=x|Y=k)P(Y=k)} {P(X=x)} = \frac{f_k(x) \pi_k} {\sum_r f_r(x) \pi_r}$

$f_k(x)$ is referred to as the class conditional distribution (~likelihood).

Therefore, we must rely on some data to estimate these quantities.

### Three Main Approaches

1. Empirical Risk Minimization: Choose a set of classifiers H (e.g., linear, neural network) and find $h^* \in H$ that minimizes (some estimate of) the true error, L(h).

2. Regression: Find an estimate $\hat{r}$ of the function $r$ and define $h(x) = \begin{cases} 1 & \text{if } \hat{r}(x) \gt 1/2 \\ 0 & \text{otherwise} \end{cases}$

The $1/2$ in the expression above is a threshold set for the regression prediction output.

In general, regression refers to finding a continuous, real-valued $y$. The problem here is more difficult because of the restricted range ($y$ takes values in a discrete set of labels).

3. Density Estimation: Estimate $P(X=x|Y=0)$ from the $X_i$'s for which $Y_i = 0$, estimate $P(X=x|Y=1)$ from the $X_i$'s for which $Y_i = 1$, and let $P(Y=y) = (1/n) \sum_{i=1}^{n} \mathbf{I}(Y_i = y)$.

Define $\hat{r}(x) = \hat{P}(Y=1|X=x)$ and $h(x) = \begin{cases} 1 & \text{if } \hat{r}(x) \gt 1/2 \\ 0 & \text{otherwise} \end{cases}$
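The steps of approach 3 can be sketched as follows, assuming one-dimensional data and Gaussian class conditionals; the helper names and toy data are invented for illustration:

```python
import math

# Sketch of the density-estimation approach in one dimension: fit a
# Gaussian to each class, estimate the prior from label counts, then
# threshold the estimated posterior at 1/2.
def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def make_classifier(X, Y):
    x0 = [x for x, y in zip(X, Y) if y == 0]
    x1 = [x for x, y in zip(X, Y) if y == 1]
    (m0, v0), (m1, v1) = fit_gaussian(x0), fit_gaussian(x1)
    pi1 = len(x1) / len(X)                     # estimate of P(Y=1)
    def r_hat(x):                              # estimated P(Y=1 | X=x)
        p1 = gauss_pdf(x, m1, v1) * pi1
        p0 = gauss_pdf(x, m0, v0) * (1 - pi1)
        return p1 / (p0 + p1)
    return lambda x: 1 if r_hat(x) > 0.5 else 0

h = make_classifier([0.0, 0.2, 0.1, 2.0, 2.2, 1.9], [0, 0, 0, 1, 1, 1])
print(h(0.1), h(2.0))   # 0 1
```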

It is possible that there may not be enough data to use density estimation, but the main problem lies with high dimensional spaces, as the estimation results may have a high error rate and sometimes estimation may be infeasible. The term curse of dimensionality was coined by Bellman <ref>R. E. Bellman, Dynamic Programming. Princeton University Press, 1957</ref> to describe this problem.

As the dimension of the space goes up, the data points required for learning increases exponentially.

The third approach is the simplest.

### Multi-Class Classification

Generalize to the case where $Y$ takes on $k>2$ values.

Theorem: For $Y \in \mathcal{Y} = \{1,2,..., k\}$, the optimal rule is

$\ h^{*}(x) = argmax_k P(Y=k|X=x)$

where $P(Y=k|X=x) = \frac{f_k(x) \pi_k} {\sum_r f_r(x) \pi_r}$
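A minimal sketch of this argmax rule, assuming the densities $f_k$ and priors $\pi_k$ are given (the uniform densities below are toy choices):

```python
# Sketch of the multi-class rule h*(x) = argmax_k P(Y=k | X=x).
def bayes_label(x, f, pi):
    """f: list of density functions f_k, pi: list of priors pi_k."""
    posts = [f_k(x) * pi_k for f_k, pi_k in zip(f, pi)]
    # The shared denominator sum_r f_r(x) pi_r does not change the argmax.
    return max(range(len(pi)), key=lambda k: posts[k])

f = [lambda x: 1.0 if 0 <= x < 1 else 0.0,      # class 0: uniform on [0,1)
     lambda x: 0.5 if 0 <= x < 2 else 0.0]      # class 1: uniform on [0,2)
print(bayes_label(0.5, f, pi=[0.5, 0.5]))        # 0
print(bayes_label(1.5, f, pi=[0.5, 0.5]))        # 1
```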

### Examples of Classification

• Face detection in images.
• Medical diagnosis.
• Detecting credit card fraud (fraudulent or legitimate).
• Speech recognition.
• Handwriting recognition.


## LDA and QDA

Discriminant function analysis finds features that best allow discrimination between two or more classes. The approach is similar to analysis of variance (ANOVA) in that discriminant function analysis looks at the mean values to determine whether two or more classes are very different and should be separated. Once the discriminant functions (that separate two or more classes) have been determined, new data points can be classified (i.e., placed in one of the classes) based on the discriminant functions <ref> StatSoft, Inc. (2011). Electronic Statistics Textbook. [Online]. Available: http://www.statsoft.com/textbook/discriminant-function-analysis/. </ref>. Linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) are methods of discriminant analysis that are best applied to linearly and quadratically separable classes, respectively. Fisher discriminant analysis (FDA) is another method of discriminant analysis; it differs from linear discriminant analysis, but the two terms are often used interchangeably.

### LDA

The simplest method is to use approach 3 (above) and assume a parametric model for the densities. Assume each class conditional is Gaussian.

$\mathcal{Y} = \{ 0,1 \}$ assumed (i.e., 2 labels)

$h(x) = \begin{cases} 1 & \text{if } P(Y=1|X=x) \gt P(Y=0|X=x) \\ 0 & \text{otherwise} \end{cases}$

$P(Y=1|X=x) = \frac{f_1(x) \pi_1} {\sum_k f_k(x) \pi_k} \ \$ (the denominator equals $P(X=x)$)

1) Assume Gaussian distributions

$f_k(x) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top \Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)\big)$

We must compare $\frac{f_1(x) \pi_1} {p(x)}$ with $\frac{f_0(x) \pi_0} {p(x)}$. Note that the denominator $p(x)$ is common to both and can be ignored, so we compare $f_1(x) \pi_1$ with $f_0(x) \pi_0$.

To find the decision boundary, set $f_1(x) \pi_1 = f_0(x) \pi_0$

$\frac{1}{(2\pi)^{d/2} |\Sigma_1|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma_1^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)\big)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma_0|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} -\boldsymbol{\mu}_0)^\top \Sigma_0^{-1}(\mathbf{x}-\boldsymbol{\mu}_0)\big)\pi_0$

2) Assume $\Sigma_1 = \Sigma_0$ and write $\Sigma = \Sigma_0 = \Sigma_1$.

$\frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} -\boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)\big)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x}- \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0)\big)\pi_0$

3) Cancel $(2\pi)^{-d/2} |\Sigma|^{-1/2}$ from both sides.

$\exp\big(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1)\big)\pi_1 = \exp\big(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0)\big)\pi_0$

4) Take log of both sides.

$-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1) + \text{log}(\pi_1) = -\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) + \text{log}(\pi_0)$

5) Subtract one side from both sides, leaving zero on one side.

$-\frac{1}{2}(\mathbf{x - \mu_1})^T \Sigma^{-1} (\mathbf{x-\mu_1}) + \text{log}(\pi_1) - [-\frac{1}{2}(\mathbf{x - \mu_0})^T \Sigma^{-1} (\mathbf{x-\mu_0}) + \text{log}(\pi_0)] = 0$

$\frac{1}{2}[-\mathbf{x}^T \Sigma^{-1}\mathbf{x} - \mathbf{\mu_1}^T \Sigma^{-1} \mathbf{\mu_1} + 2\mathbf{\mu_1}^T \Sigma^{-1} \mathbf{x} + \mathbf{x}^T \Sigma^{-1}\mathbf{x} + \mathbf{\mu_0}^T \Sigma^{-1} \mathbf{\mu_0} - 2\mathbf{\mu_0}^T \Sigma^{-1} \mathbf{x} ] + \text{log}(\frac{\pi_1}{\pi_0}) = 0$

Cancelling out the terms quadratic in $\mathbf{x}$ and rearranging results in

$\frac{1}{2}[-\mathbf{\mu_1}^T \Sigma^{-1} \mathbf{\mu_1} + \mathbf{\mu_0}^T \Sigma^{-1} \mathbf{\mu_0} + (2\mathbf{\mu_1}^T \Sigma^{-1} - 2\mathbf{\mu_0}^T \Sigma^{-1}) \mathbf{x}] + \text{log}(\frac{\pi_1}{\pi_0}) = 0$

We can see that the first pair of terms is constant and the remaining terms are linear in $\mathbf{x}$. Therefore, we end up with something of the form $\mathbf{a}^T\mathbf{x} + b = 0$, a linear decision boundary. For more about LDA, see <ref>http://sites.stat.psu.edu/~jiali/course/stat597e/notes2/lda.pdf</ref>
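The coefficients of this boundary can be sketched directly from the derivation above, assuming the means, shared covariance, and priors are already known (the values below are toy choices, and `lda_boundary` is a hypothetical helper name):

```python
import numpy as np

# Sketch of the LDA decision boundary a^T x + b = 0 derived above:
# a = Sigma^{-1}(mu_1 - mu_0),
# b = -1/2 mu_1^T Sigma^{-1} mu_1 + 1/2 mu_0^T Sigma^{-1} mu_0 + log(pi_1/pi_0).
def lda_boundary(mu0, mu1, Sigma, pi0, pi1):
    Sinv = np.linalg.inv(Sigma)
    a = Sinv @ (mu1 - mu0)
    b = -0.5 * mu1 @ Sinv @ mu1 + 0.5 * mu0 @ Sinv @ mu0 + np.log(pi1 / pi0)
    return a, b

mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
a, b = lda_boundary(mu0, mu1, np.eye(2), 0.5, 0.5)
print(a, b)   # a = [2. 0.], b = -2.0: the boundary x_1 = 1 lies midway between the means
```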

## LDA and QDA Continued (Lecture: Sep. 22, 2011)

If we relax assumption 2 (i.e., $\Sigma_1 \neq \Sigma_0$), then we get a quadratic decision boundary that can be written as $\mathbf{x}^T\mathbf{a}\mathbf{x}+\mathbf{b}^T\mathbf{x} + c = 0$

### Generalizing LDA and QDA

Theorem:

Suppose that $\,Y \in \{1,\dots,K\}$ and that each class-conditional density $\,f_k(\mathbf{x}) = Pr(X=\mathbf{x}|Y=k)$ is Gaussian. Then the Bayes Classifier is

$\,h^*(\mathbf{x}) = \arg\max_{k} \delta_k(\mathbf{x})$

Where

$\,\delta_k(\mathbf{x}) = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k)$

When the Gaussian covariance matrices are equal across classes ($\Sigma_k = \Sigma$ for all $k$, i.e., LDA), then

$\,\delta_k(\mathbf{x}) = \mathbf{x}^\top\Sigma^{-1}\boldsymbol{\mu}_k - \frac{1}{2}\boldsymbol{\mu}_k^\top\Sigma^{-1}\boldsymbol{\mu}_k + log (\pi_k)$

(To compute this, we calculate the value of $\,\delta_k$ for each class and then take the class with the maximum value.)
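This computation can be sketched as follows; the class parameters (means, covariances, priors) below are invented toy values rather than estimates from data:

```python
import numpy as np

# Sketch of classification with the discriminant delta_k above (QDA form):
# compute delta_k(x) for every class and take the argmax.
def delta(x, mu, Sigma, pi):
    sign, logdet = np.linalg.slogdet(Sigma)
    d = x - mu
    return -0.5 * logdet - 0.5 * d @ np.linalg.inv(Sigma) @ d + np.log(pi)

def classify(x, mus, Sigmas, pis):
    scores = [delta(x, m, S, p) for m, S, p in zip(mus, Sigmas, pis)]
    return int(np.argmax(scores))

mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigmas = [np.eye(2), 2 * np.eye(2)]
print(classify(np.array([0.5, 0.5]), mus, Sigmas, [0.5, 0.5]))  # 0
print(classify(np.array([3.0, 2.5]), mus, Sigmas, [0.5, 0.5]))  # 1
```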

### In practice

We estimate the prior to be the chance that a random item from the collection belongs to class k, e.g.

$\,\hat{\pi_k} = \hat{Pr}(y=k) = \frac{n_k}{n}$

the mean to be the average of the items in class $k$, e.g.

$\,\hat{\mu_k} = \frac{1}{n_k}\sum_{i:y_i=k}x_i$

and calculate the covariance of each class e.g.

$\,\hat{\Sigma_k} = \frac{1}{n_k}\sum_{i:y_i=k}(x_i-\hat{\mu_k})(x_i-\hat{\mu_k})^\top$

If we wish to use LDA we must calculate a common covariance, so we average all the covariances e.g.

$\,\Sigma=\frac{\sum_{r=1}^{k}(n_r\Sigma_r)}{\sum_{r=1}^{k}n_r}$

Where: $\,n_r$ is the number of data points in class $\,r$, $\,\Sigma_r$ is the covariance of class $\,r$, $\,n$ is the total number of data points, and $\,k$ is the number of classes.
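The estimates above can be sketched as follows; `estimate_parameters` is a hypothetical helper name and the data is a toy example:

```python
import numpy as np

# Sketch of the in-practice estimates: class priors, class means,
# per-class covariances, and the pooled covariance used by LDA.
# X is n x d; y holds integer class labels.
def estimate_parameters(X, y):
    classes = np.unique(y)
    n = len(y)
    pis, mus, Sigmas, ns = [], [], [], []
    for k in classes:
        Xk = X[y == k]
        pis.append(len(Xk) / n)            # pi_k = n_k / n
        mus.append(Xk.mean(axis=0))        # mu_k = class average
        D = Xk - Xk.mean(axis=0)
        Sigmas.append(D.T @ D / len(Xk))   # Sigma_k
        ns.append(len(Xk))
    # Pooled covariance: weighted average of the class covariances.
    pooled = sum(nk * Sk for nk, Sk in zip(ns, Sigmas)) / n
    return pis, mus, Sigmas, pooled

X = np.array([[0.0, 0.0], [0.2, 0.0], [2.0, 2.0], [2.0, 2.2]])
y = np.array([0, 0, 1, 1])
pis, mus, Sigmas, pooled = estimate_parameters(X, y)
print(pis)        # [0.5, 0.5]
```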

### Computation

For QDA we need to calculate: $\,\delta_k(\mathbf{x}) = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k)$

Let's first consider the case where $\, \Sigma_k = I\ \forall k$. This is the case where each distribution is spherical around its mean.

#### Case 1

When $\, \Sigma_k = I$

We have:

$\,\delta_k = - \frac{1}{2}log(|I|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k)$

but $\ \log(|I|)=\log(1)=0$

and $\, (\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) = (\mathbf{x}-\boldsymbol{\mu}_k)^\top(\mathbf{x}-\boldsymbol{\mu}_k)$ is the squared Euclidean distance between two points $\,\mathbf{x}$ and $\,\boldsymbol{\mu}_k$

Thus, under this assumption, a new point can be classified by its distance from the centre of each class, adjusted by the prior.

Further, for a two-class problem with equal priors, the decision boundary is the perpendicular bisector of the line segment joining the two class means.

#### Case 2

When $\, \Sigma_k \neq I$

Using the Singular Value Decomposition (SVD) of $\, \Sigma_k$ we get $\, \Sigma_k = U_kS_kV_k^\top$. In particular, $\, U_k$ is a collection of eigenvectors of $\, \Sigma_k\Sigma_k^*$, and $\, V_k$ is a collection of eigenvectors of $\,\Sigma_k^*\Sigma_k$. Since $\, \Sigma_k$ is a symmetric matrix<ref> http://en.wikipedia.org/wiki/Covariance_matrix#Properties </ref>, $\, \Sigma_k = \Sigma_k^*$, so we have $\, \Sigma_k = U_kS_kU_k^\top$.

For $\,\delta_k$, the second term becomes what is also known as the Mahalanobis distance <ref>P. C. Mahalanobis, "On The Generalised Distance in Statistics," Proceedings of the National Institute of Sciences of India, 1936</ref> :

\begin{align} (\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)&= (\mathbf{x}-\boldsymbol{\mu}_k)^\top U_kS_k^{-1}U_k^T(\mathbf{x}-\boldsymbol{\mu}_k)\\ & = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-1}(U_k^\top \mathbf{x}-U_k^\top \boldsymbol{\mu}_k)\\ & = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-\frac{1}{2}}S_k^{-\frac{1}{2}}(U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k) \\ & = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top I(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\ & = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\ \end{align}

We can think of $\, S_k^{-\frac{1}{2}}U_k^\top$ as a linear transformation that takes points in class $\,k$ and distributes them spherically around a point, as in Case 1. Thus, when given a new point, we can apply the modified $\,\delta_k$ values to calculate $\ h^*(\mathbf{x})$. After applying this transformation, the covariance in the transformed space is the identity matrix, so that

$\,\delta_k = - \frac{1}{2}log(|I|) - \frac{1}{2}[(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k)] + log (\pi_k)$

and,

$\ \log(|I|)=\log(1)=0$

To apply the above method when classes have different covariance matrices (for example, the covariance matrices $\ \Sigma_0$ and $\ \Sigma_1$ in the two-class case), each covariance matrix has to be decomposed using SVD to find the corresponding transformation. Then each new data point has to be transformed using each class's transformation before comparing its distance to that class's mean (for example, in the two-class case, the new data point would be transformed by the class 0 transformation and compared to $\ \mu_0$, and also transformed by the class 1 transformation and compared to $\ \mu_1$).
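A numeric sketch of the Case 2 idea, assuming a toy covariance matrix: the transform $S_k^{-1/2}U_k^\top$ whitens class $k$, so the Mahalanobis distance under $\Sigma_k$ equals the Euclidean distance after the transform.

```python
import numpy as np

# Toy class parameters (invented for illustration).
Sigma_k = np.array([[2.0, 0.5], [0.5, 1.0]])
mu_k = np.array([1.0, 1.0])
x = np.array([2.0, 0.0])

# Eigendecomposition of the symmetric covariance: Sigma_k = U diag(S) U^T.
S, U = np.linalg.eigh(Sigma_k)
W = np.diag(S ** -0.5) @ U.T          # the transform S_k^{-1/2} U_k^T

d = x - mu_k
mahalanobis_sq = d @ np.linalg.inv(Sigma_k) @ d
euclidean_after_sq = np.sum((W @ x - W @ mu_k) ** 2)
print(np.isclose(mahalanobis_sq, euclidean_after_sq))   # True
```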

The difference between Case 1 and Case 2 (i.e. the difference between using the Euclidean and Mahalanobis distance) can be seen in the illustration below.

Illustration of Euclidean distance (a) and Mahalanobis distance (b) where the contours represent equidistant points from the center using each distance metric. Source: <ref>R. De Maesschalck, D. Jouan-Rimbaud and D. L. Massart, "Tutorial - The Mahalanobis distance," Chemometrics and Intelligent Laboratory Systems, 2000 </ref>

As can be seen from the illustration above, the Mahalanobis distance takes into account the distribution of the data points, whereas the Euclidean distance would treat the data as though it has a spherical distribution. Thus, the Mahalanobis distance applies for the more general classification in Case 2, whereas the Euclidean distance applies to the special case in Case 1 where the data distribution is assumed to be spherical.

Generally, QDA provides a more flexible classifier than LDA because LDA assumes that the covariance matrix is identical for each class while QDA does not. QDA still uses a Gaussian distribution as the class-conditional distribution, however; in practice the data will not always be Gaussian, so other distributions may be needed.

### The Number of Parameters in LDA and QDA

Both LDA and QDA require us to estimate some parameters. Here is a comparison between the number of parameters needed to be estimated for LDA and QDA:

LDA: Since we only need to compare one given class against the remaining $\,K-1$ classes, there are $\,K-1$ differences in total. Each difference $\,a^{T}x+b$ requires $\,d+1$ parameters. Therefore, there are $\,(K-1)\times(d+1)$ parameters.

QDA: For each of the differences, $\,x^{T}ax + b^{T}x + c$ requires $\frac{1}{2}(d+1)\times d + d + 1 = \frac{d(d+3)}{2}+1$ parameters. Therefore, there are $(K-1)(\frac{d(d+3)}{2}+1)$ parameters, so QDA suffers far more from the curse of dimensionality.
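These per-difference counts can be checked with a couple of one-line helpers (hypothetical names, for illustration):

```python
# Parameters per pairwise difference between classes.
def lda_params(d):
    return d + 1                       # vector a (d entries) plus scalar b

def qda_params(d):
    # symmetric matrix a: d(d+1)/2, vector b: d, scalar c: 1
    return d * (d + 1) // 2 + d + 1

for d in (2, 10, 100):
    print(d, lda_params(d), qda_params(d))
```

For $d=10$, LDA needs 11 parameters per difference while QDA needs 66, illustrating the quadratic growth.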

A plot of the number of parameters that must be estimated, in terms of (K-1). The x-axis represents the number of dimensions in the data. As is easy to see, QDA is far less robust than LDA for high-dimensional data sets.

## Trick: Using LDA to do QDA

There is a trick that allows us to use the linear discriminant analysis (LDA) algorithm to generate as its output a quadratic function that can be used to classify data. This trick is similar to, but more primitive than, the Kernel trick that will be discussed later in the course.

In this approach the feature vector is augmented with quadratic terms (i.e., new dimensions are introduced) and the original data is projected into this higher-dimensional space. We then apply LDA to the new higher-dimensional data.

The motivation behind this approach is to take advantage of the fact that fewer parameters have to be calculated in LDA, as explained in previous sections, and therefore obtain a more robust system in situations where we have fewer data points.

If we look back at the equations for LDA and QDA, we see that in LDA we must estimate $\,\mu_1$, $\,\mu_2$ and $\,\Sigma$. In QDA we must estimate all of those, plus another $\,\Sigma$; the extra $\,\frac{d(d+1)}{2}$ estimated parameters make QDA less robust when there are fewer data points.

### Theoretically

Suppose we have a quadratic function to estimate: $g(\mathbf{x}) = y = \mathbf{x}^T\mathbf{v}\mathbf{x} + \mathbf{w}^T\mathbf{x}$, where $\mathbf{v}$ is a diagonal matrix, so the quadratic part is $\sum_i v_i x_i^2$.

Using this trick, we introduce two new vectors, $\,\hat{\mathbf{w}}$ and $\,\hat{\mathbf{x}}$ such that:

$\hat{\mathbf{w}} = [w_1,w_2,...,w_d,v_1,v_2,...,v_d]^T$

and

$\hat{\mathbf{x}} = [x_1,x_2,...,x_d,{x_1}^2,{x_2}^2,...,{x_d}^2]^T$

We can then apply LDA to estimate the new function: $\hat{g}(\mathbf{x},\mathbf{x}^2) = \hat{y} =\hat{\mathbf{w}}^T\hat{\mathbf{x}}$.

Note that we can do this for any $\, x$ and in any dimension: we could extend a $D \times n$ data matrix to a quadratic representation by appending the element-wise square of the original matrix, to a cubic representation by appending its cube, or even with a different function altogether, such as a $\,\sin(x)$ dimension. Note that we are not applying QDA; instead we are extending LDA to calculate a non-linear boundary, which will in general be different from QDA. This algorithm is called nonlinear LDA.
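The augmentation step itself can be sketched with a plain helper (shown here for the quadratic case):

```python
# Sketch of the feature augmentation used by the trick: append squared
# terms so that a linear rule in the augmented space is quadratic in
# the original space.
def augment(x):
    """[x_1, ..., x_d] -> [x_1, ..., x_d, x_1^2, ..., x_d^2]"""
    return list(x) + [v * v for v in x]

print(augment([1.0, 2.0]))   # [1.0, 2.0, 1.0, 4.0]
```

Plain LDA is then run on the augmented vectors; the learned weights correspond to $\hat{\mathbf{w}} = [w_1,...,w_d,v_1,...,v_d]^T$ above.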

## Principal Component Analysis (PCA) (Lecture: Sep. 27, 2011)

Principal Component Analysis (PCA) is a method of dimensionality reduction/feature extraction that transforms the data from a D-dimensional space into a new coordinate system of dimension d, where $d \leq D$ (in the worst case, $d=D$). The goal is to preserve as much of the variance in the original data as possible when switching coordinate systems. Given data on D variables, the hope is that the data points will lie mainly in a linear subspace of dimension lower than D. In practice, the data will usually not lie precisely in some lower-dimensional subspace.

The new variables that form the new coordinate system are called principal components (PCs). PCs are denoted by $\ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_D$. The principal components form a basis for the data. Since the PCs are orthogonal linear transformations of the original variables, there are at most D PCs. Normally, not all D PCs are used; rather, a subset of d PCs, $\ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_d$, is used to approximate the space spanned by the original data points $\ \mathbf{x}=[x_1, x_2, ... , x_D]^T$. We can choose d based on what percentage of the variance of the original data we would like to maintain.

The first PC, $\ \mathbf{u}_1$, has the maximum variance and thus accounts for the most significant variation in the data. The second PC, $\ \mathbf{u}_2$, has the second-highest variance, and so on, down to the last PC, $\ \mathbf{u}_D$, which has the minimum variance.

Let $u_i = \mathbf{w}^T\mathbf{x}_i$ be the projection of the data point $\mathbf{x}_i$ onto the direction of $\mathbf{w}$, where $\mathbf{w}$ has unit length.

$\mathbf{u = (u_1,....,u_D)^T}\qquad$ , $\quad\mathbf{w^Tw = 1 }$

$var(u) =\mathbf{w}^T X (\mathbf{w}^T X)^T = \mathbf{w}^T X X^T\mathbf{w} = \mathbf{w}^TS\mathbf{w} \quad$ where $\quad X X^T = S$ is the sample covariance matrix (assuming the data has been centered; the $1/(n-1)$ factor is omitted since it does not affect the maximizing direction).

We would like to find the $\ \mathbf{w}$ which gives us maximum variation:

$\ \max (Var(\mathbf{w}^T \mathbf{x})) = \max (\mathbf{w}^T S \mathbf{w})$

Note: we require the constraint $\ \mathbf{w}^T \mathbf{w} = 1$ because if there is no constraint on the length of $\ \mathbf{w}$ then there is no upper bound. With the constraint, the direction and not the length that maximizes the variance can be found.

#### Lagrange Multiplier

Before we proceed, we should review Lagrange multipliers.

"The red line shows the constraint g(x,y) = c. The blue lines are contours of f(x,y). The point where the red line tangentially touches a blue contour is our solution." [Lagrange Multipliers, Wikipedia]

Lagrange multipliers are used to find the maximum or minimum of a function $\displaystyle f(x,y)$ subject to constraint $\displaystyle g(x,y)=0$

We introduce a new variable $\lambda$, called a Lagrange multiplier, and form the Lagrangian,

$\displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y)$

If $\displaystyle f(x^*,y^*)$ is the max of $\displaystyle f(x,y)$, there exists $\displaystyle \lambda^*$ such that $\displaystyle (x^*,y^*,\lambda^*)$ is a stationary point of $\displaystyle L$ (partial derivatives are 0).
In addition $\displaystyle (x^*,y^*)$ is a point in which functions $\displaystyle f$ and $\displaystyle g$ touch but do not cross. At this point, the tangents of $\displaystyle f$ and $\displaystyle g$ are parallel or gradients of $\displaystyle f$ and $\displaystyle g$ are parallel, such that:

$\displaystyle \nabla_{x,y } f = \lambda \nabla_{x,y } g$

where,
$\displaystyle \nabla_{x,y} f = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial{y}}) \leftarrow$ the gradient of $\, f$
$\displaystyle \nabla_{x,y} g = (\frac{\partial g}{\partial{x}},\frac{\partial{g}}{\partial{y}}) \leftarrow$ the gradient of $\, g$

#### Example :

Suppose we want to maximize the function $\displaystyle f(x,y)=x-y$ subject to the constraint $\displaystyle x^{2}+y^{2}=1$. We can apply the Lagrange multiplier method to find the maximum value for the function $\displaystyle f$; the Lagrangian is:

$\displaystyle L(x,y,\lambda) = x-y - \lambda (x^{2}+y^{2}-1)$

We want the partial derivatives equal to zero:

$\displaystyle \frac{\partial L}{\partial x}=1-2 \lambda x=0$

$\displaystyle \frac{\partial L}{\partial y}=-1-2\lambda y=0$

$\displaystyle \frac{\partial L}{\partial \lambda}=-(x^2+y^2-1)=0$

Solving the system we obtain two stationary points: $\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)$ and $\displaystyle (-\sqrt{2}/2,\sqrt{2}/2)$. In order to determine which one is the maximum, we substitute each into $\displaystyle f(x,y)$ and see which has the bigger value. In this case the maximum is at $\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)$.
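The worked example can be checked numerically, using the convention $L = f - \lambda g$ from above:

```python
import math

# Numeric check of the Lagrange multiplier example with
# f = x - y and g = x^2 + y^2 - 1.
s = math.sqrt(2) / 2
x, y = s, -s                  # candidate maximizer
lam = 1 / (2 * x)             # from dL/dx = 1 - 2*lambda*x = 0

assert math.isclose(1 - 2 * lam * x, 0)        # dL/dx = 0
assert math.isclose(-1 - 2 * lam * y, 0)       # dL/dy = 0
assert math.isclose(x ** 2 + y ** 2, 1)        # constraint holds
print(x - y)                   # f at the maximum: sqrt(2) ~ 1.414
```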

### Determining w :

Use the Lagrange multiplier conversion to obtain: $\displaystyle L(\mathbf{w}, \lambda) = \mathbf{w}^T S\mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1)$ where $\displaystyle \lambda$ is a constant

Take the derivative and set it to zero: $\displaystyle{\partial L \over{\partial \mathbf{w}}} = 0$

To obtain: $\displaystyle 2S\mathbf{w} - 2 \lambda \mathbf{w} = 0$

Rearrange to obtain: $\displaystyle S\mathbf{w} = \lambda \mathbf{w}$

where $\displaystyle \mathbf{w}$ is an eigenvector of $\displaystyle S$ and $\ \lambda$ is the corresponding eigenvalue. Since $\displaystyle S\mathbf{w}= \lambda \mathbf{w}$ and $\displaystyle \mathbf{w}^T \mathbf{w}=1$, we can write

$\displaystyle \mathbf{w}^T S\mathbf{w}= \mathbf{w}^T\lambda \mathbf{w}= \lambda \mathbf{w}^T \mathbf{w} =\lambda$

Note that the PCs decompose the total variance in the data in the following way :

$\sum_{i=1}^{D} Var(u_i) = \sum_{i=1}^{D} \lambda_i = Tr(S) = \sum_{i=1}^{D} Var(x_i)$

(since $S$ is a covariance matrix it is symmetric, and its trace equals the sum of its eigenvalues)
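Both facts — that the maximizing $\mathbf{w}$ is an eigenvector with $\mathbf{w}^T S\mathbf{w} = \lambda$, and that the eigenvalues sum to $Tr(S)$ — can be checked numerically on a toy symmetric matrix:

```python
import numpy as np

# S is an arbitrary symmetric positive-definite toy matrix.
S = np.array([[3.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order

w = eigvecs[:, -1]                     # first PC: top eigenvector
assert np.isclose(w @ S @ w, eigvals[-1])        # w^T S w = lambda_max
assert np.isclose(eigvals.sum(), np.trace(S))    # sum of Var(u_i) = Tr(S)
print(eigvals[-1])                     # (5 + sqrt(5)) / 2 ~ 3.618
```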

## Principal Component Analysis (PCA) Continued (Lecture: Sep. 29, 2011)

As can be seen from the above expressions, $\ Var(\mathbf{w}^\top \mathbf{x}) = \mathbf{w}^\top S \mathbf{w}= \lambda$, where $\lambda$ is an eigenvalue of the sample covariance matrix $\ S$ and $\ \mathbf{w}$ is its corresponding eigenvector. So $\ Var(u_i)$ is maximized when $\ \lambda_i$ is the maximum eigenvalue of $\ S$, and the first principal component (PC) is the corresponding eigenvector. Each successive PC can be generated in the same manner by taking the eigenvectors of $\ S$<ref>www.wikipedia.org/wiki/Eigenvalues_and_eigenvectors</ref> that correspond to the eigenvalues:

$\ \lambda_1 \geq ... \geq \lambda_D$

such that

$\ Var(u_1) \geq ... \geq Var(u_D)$

### Alternative Derivation

Another way of looking at PCA is as a projection from a higher D-dimensional space to a lower d-dimensional subspace that minimizes the squared reconstruction error. The squared reconstruction error is the difference between the original data set $\ X$ and the new data set $\hat{X}$ obtained by first projecting the original data into a lower d-dimensional subspace and then projecting it back into the original D-dimensional space. Since information is (normally) lost by compressing the original data into a lower d-dimensional subspace, the new data set will (normally) differ from the original data even though both live in the higher D-dimensional space. The reconstruction error is computed as shown below.

#### Reconstruction Error

$e = \sum_{i=1}^{n} || x_i - \hat{x}_i ||^2$

#### Minimize Reconstruction Error

Assume the data are centred, i.e. replace each $\ x_i$ by $\ x_i - \bar{x}$ so that $\bar{x} = 0$.

Let $\ f(y) = U_d y$ where $\ U_d$ is a D by d matrix with d orthonormal unit vectors as columns.

Fit the model to the data and minimize the reconstruction error:

$\ min_{U_d, y_i} \sum_{i=1}^n || x_i - U_d y_i ||^2$

Differentiate with respect to $\ y_i$:

$\frac{\partial e}{\partial y_i} = 0$

we can rewrite reconstruction-error as : $\ e = \sum_{i=1}^n(x_i - U_d y_i)^T(x_i - U_d y_i)$

$\ \frac{\partial e}{\partial y_i} = -2U_d^T(x_i - U_d y_i) = 0$

Since the columns of $\ U_d$ are orthonormal, $\ U_d^T U_d = I_d$, and therefore

$\ U_d^T x_i - y_i = 0$ or equivalently,

$\ y_i = U_d^T x_i$

Find the orthogonal matrix $\ U_d$:

$\ min_{U_d} \sum_{i=1}^n || x_i - U_d U_d^T x_i||^2$
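The optimality of $\ y_i = U_d^T x_i$ for a fixed $\ U_d$ can be checked numerically. In the NumPy sketch below (not the course's Matlab; $\ U_d$ here is an arbitrary orthonormal basis, not the PCA solution), the coefficients $\ U_d^T x_i$ agree with the least-squares solution of $\ U_d y_i \approx x_i$ for every data point:

```python
import numpy as np

rng = np.random.default_rng(2)
D, d, n = 5, 2, 50                    # arbitrary dimensions for illustration
X = rng.standard_normal((D, n))       # columns are data points

# any D-by-d matrix with orthonormal columns
U_d, _ = np.linalg.qr(rng.standard_normal((D, d)))

Y = U_d.T @ X                         # y_i = U_d^T x_i for all i at once

# least-squares solution of U_d y_i = x_i for each column agrees with U_d^T x_i
Y_ls = np.linalg.lstsq(U_d, X, rcond=None)[0]
print(np.allclose(Y, Y_ls))           # True
```

This works because the normal equations $(U_d^T U_d)\,y_i = U_d^T x_i$ reduce to $y_i = U_d^T x_i$ when the columns of $U_d$ are orthonormal.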

#### PCA Implementation Using Singular Value Decomposition

A unique solution can be obtained by finding the Singular Value Decomposition (SVD) of $\ X$:

$\ X = U S V^T$

For each rank d, $\ U_d$ consists of the first d columns of $\ U$. Note the notational clash: the $\ S$ in the SVD is the diagonal matrix of singular values, while the sample covariance matrix used earlier is $\ S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \mu)(x_i - \mu)^T$.

Simply put, by subtracting the mean of each of the data point features and then applying SVD, one can find the principal components:

$\tilde{X} = X - \mu$

$\ \tilde{X} = U S V^T$

Where $\ X$ is a D by n matrix of data points and the features of each data point form a column in $\ X$. Also, $\ \mu$ is a D by n matrix with identical columns each equal to the mean of the $\ x_i$'s, i.e. $\mu_{:,j}=\frac{1}{n}\sum_{i=1}^n x_i$. Note that the arrangement of data points is a convention and indeed in Matlab or conventional statistics, the transpose of the matrices in the above formulae is used.

As the $\ S$ matrix from the SVD has the singular values arranged from largest to smallest, the corresponding columns of the $\ U$ matrix are ordered so that the first column of $\ U$ is the first principal component, the second column is the second principal component, and so on.
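The equivalence between the SVD of the centred data matrix and the eigen-decomposition of the sample covariance matrix can be verified directly. In this NumPy sketch (arbitrary random data, not from the lecture), the eigenvalues of the covariance matrix are the squared singular values divided by $n-1$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 40))           # D = 6 features, n = 40 points as columns
Xc = X - X.mean(axis=1, keepdims=True)     # subtract the mean of each feature

# SVD of the centred data matrix; singular values come out descending
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# eigen-decomposition of the sample covariance matrix, sorted descending
C = np.cov(Xc)                             # 6-by-6 covariance (divides by n - 1)
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]

# singular values relate to covariance eigenvalues by lambda_i = s_i^2 / (n - 1)
print(np.allclose(s**2 / (Xc.shape[1] - 1), eigvals))  # True
```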

### Examples

Note that in the Matlab code in the examples below, the mean was not subtracted from the datapoints before performing SVD. This is what was shown in class. However, to properly perform PCA, the mean should be subtracted from the datapoints.

#### Example 1

Consider a matrix of data points $\ X$ with the dimensions 560 by 1965. 560 is the number of elements in each column. Each column is a vector representation of a 20x28 grayscale pixel image of a face (see image below) and there is a total of 1965 different images of faces. Each of the images is corrupted by noise, but the noise can be reduced by projecting the data onto the first few principal components and then back into the original space, keeping as many dimensions as one likes (e.g., 2, 3, 4 or 5). The corresponding Matlab commands are shown below:

 >> % start with a 560 by 1965 matrix X that contains the data points
>>
>> % set the colors to grayscale
>> colormap gray
>>
>> % show image in column 10 by reshaping column 10 into a 20 by 28 matrix
>> imagesc(reshape(X(:,10),20,28)')
>>
>> % perform SVD; if the X matrix is full rank, this yields 560 PCs
>> [U S V] = svd(X);
>>
>> % project X onto the first ten principal components
>> Y_pca = U(:, 1:10)'*X;
>>
>> % reconstruct X (project back into the original space) using only the first ten PCs
>> X_hat = U(:, 1:10)*Y_pca;
>>
>> % show image in column 10 of X_hat which is again a 560 by 1965 matrix
>> imagesc(reshape(X_hat(:,10),20,28)')


The noise is removed in the reconstructed image because the noise does not create a major variation in any single direction of the original data. Hence, the first ten PCs taken from the $\ U$ matrix are not in the direction of the noise, and reconstructing the image using only the first ten PCs removes most of the noise.
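This denoising effect can be reproduced on synthetic data. The NumPy sketch below (made-up dimensions, not the face data from class) generates points lying near a low-dimensional subspace, corrupts them with noise, and shows that projecting onto the leading PCs recovers the signal more accurately than the noisy data itself:

```python
import numpy as np

rng = np.random.default_rng(4)
n, D, d = 300, 20, 2                  # arbitrary: 300 points in 20-D near a 2-D subspace

# low-rank signal plus small isotropic noise
B = rng.standard_normal((D, d))
signal = B @ rng.standard_normal((d, n))
noisy = signal + 0.1 * rng.standard_normal((D, n))

# keep only the first d principal components and reconstruct
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = U[:, :d] @ (U[:, :d].T @ noisy)

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
print(err_denoised < err_noisy)       # True: the projection discards most of the noise
```

The projection keeps only the noise component lying inside the d-dimensional subspace, which is a small fraction of the total noise energy when d is much smaller than D.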

#### Example 2

Consider a matrix of data points $\ X$ with the dimensions 64 by 400. 64 is the number of elements in each column. Each column is a vector representation of a 8x8 grayscale pixel image of either a handwritten number 2 or a handwritten number 3 (see image below) and there are a total of 400 different images, where the first 200 images show a handwritten number 2 and the last 200 images show a handwritten number 3.

An example of the handwritten number images used in Example 2. Source: <ref>A. Ghodsi, "PCA" class notes for STAT841, Department of Statistics and Actuarial Science, University of Waterloo, 2011. </ref>

The corresponding Matlab commands for performing PCA on the data points are shown below:

 >> % start with a 64 by 400 matrix X that contains the data points
>>
>> % set the colors to grayscale
>> colormap gray
>>
>> % show image in column 2 by reshaping column 2 into a 8 by 8 matrix
>> imagesc(reshape(X(:,2),8,8))
>>
>> % perform SVD; if the X matrix is full rank, this yields 64 PCs
>> [U S V] = svd(X);
>>
>> % project data down onto the first two PCs
>> Y = U(:,1:2)'*X;
>>
>> % show Y as an image (can see the change in the first PC at column 200,
>> % when the handwritten number changes from 2 to 3)
>> imagesc(Y)
>>
>> % perform PCA using the Matlab built-in function (do not use for the assignment)
>> % also note that due to the Matlab convention, the transpose of X is used
>> [COEFF, Y] = princomp(X');
>>
>> % again, use the first two PCs
>> Y = Y(:,1:2);
>>
>> % use plot digits to show the distribution of images on the first two PCs
>> images = reshape(X, 8, 8, 400);
>> plotdigits(images, Y, .1, 1);


Using the plotdigits function in Matlab clearly illustrates that the first PC captured the difference between the numbers 2 and 3, as they are projected onto different regions of the first PC's axis. The second PC captured the tilt of the handwritten numbers: numbers tilted to the left or right were projected onto different regions of the second PC's axis.

#### Example 3

(Not discussed in class) In the news recently was a story that captures some of the ideas behind PCA. Over the past two years, Scott Golder and Michael Macy, researchers from Cornell University, collected 509 million Twitter messages from 2.4 million users in 84 different countries. The data they used were words collected at various times of day and they classified the data into two different categories: positive emotion words and negative emotion words. Then, they were able to study this new data to evaluate subjects' moods at different times of day, while the subjects were in different parts of the world. They found that the subjects generally exhibited positive emotions in the mornings and late evenings, and negative emotions mid-day. They were able to "project their data onto a smaller dimensional space" using PCA. Their paper, "Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures," is available in the journal Science.<ref>http://www.pcworld.com/article/240831/twitter_analysis_reveals_global_human_moodiness.html</ref>.

Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>

#### Example 4

(Not discussed in class) A somewhat well known learning rule in the field of neural networks called Oja's rule can be used to train networks of neurons to compute the principal component directions of data sets. <ref>A Simplified Neuron Model as a Principal Component Analyzer. Erkki Oja. 1982. Journal of Mathematical Biology. 15: 267-273</ref>