stat841f11

== [[stat841f14 | Data Visualization  (Stat 442 / 842, CM 762 - Fall 2014) ]] ==
== Archive ==
==[[f11Stat841proposal| Proposal for Final Project]]==
==[[f11Stat841presentation| Presentation Sign Up]]==


==[[f11Stat841EditorSignUp| Editor Sign Up]]==


== Classification (Lecture: Sep. 20, 2011) ==
===Introduction===

''Machine learning'' (ML) methodology in general is an artificial intelligence approach to establishing and training a model to recognize the pattern or underlying mapping of a system based on a set of training examples consisting of input and output patterns. Unlike classical statistics, where inference is made from small datasets, machine learning involves drawing inference from an overwhelming amount of data that could not reasonably be parsed by manpower.

In machine learning, pattern recognition is the assignment of some sort of output value (or label) to a given input value (or instance), according to some specific algorithm. The approach of using examples to produce the output labels is known as ''learning methodology''. When the underlying function from inputs to outputs exists, it is referred to as the target function. The estimate of the target function which is learned or output by the learning algorithm is known as the solution of the learning problem. In the case of classification this function is referred to as the ''decision function''.

In the broadest sense, any method that incorporates information from training samples in the design of a classifier employs learning. Learning tasks can be classified along different dimensions. One important dimension is the distinction between supervised and unsupervised learning. In supervised learning a category label for each pattern in the training set is provided, and the trained system then generalizes to new data samples. In unsupervised learning, on the other hand, the training data has not been labeled; the system forms clusters or natural groupings of the input patterns based on some measure of similarity, which can then be used to determine the correct output value for new data instances.

The first category is known as ''pattern classification'' and the second one as ''clustering''. Pattern classification is the main focus in this course.


'''Classification''': predict a discrete random variable <math>Y</math> (a label) by using another random variable <math>X</math> (a new data point) picked i.i.d. from a distribution.

'''Classification problem formulation''': Suppose that we are given ''n'' observations. Each observation consists of a pair: a vector <math>\mathbf{x}_i \in \mathbb{R}^d, \quad i=1,...,n</math>, and the associated label <math>y_i</math>, where <math>\mathbf{x}_i = (x_{i1}, x_{i2}, ... , x_{id}) \in \mathcal{X} \subset \mathbb{R}^d</math> (a <math>d</math>-dimensional vector) and <math>y_i</math> belongs to some finite set <math>\mathcal{Y}</math>.

'''Classification rule''': a function <math>h : \mathcal{X} \rightarrow \mathcal{Y}</math>. Take a new observation <math>X</math> and use the classification function <math>h(x)</math> to generate a label <math>Y</math>. In other words, if we feed the function <math>h(x)</math> with a random variable <math>X</math>, it generates the label <math>Y</math>, which is the class to which we predict <math>X</math> belongs.

Example: Let <math> \mathcal{X}</math> be a set of 2D images and <math>\mathcal{Y}</math> be a finite set of people. We want to learn a classification rule <math>h:\mathcal{X}\rightarrow\mathcal{Y}</math> that with small ''true'' error predicts the person who appears in the image.

The classification task is thus to look for a function <math>f:\mathbf{x}_i\mapsto y</math> which maps the input data points to a target value (i.e. a class label). The function <math>f(\mathbf{x},\theta)</math> is defined by a set of parameters <math>\mathbf{\theta}</math>, and the goal is to train the classifier so that, among all possible mappings with different parameters, the obtained decision boundary gives the minimum classification error.


=== Definitions ===


The '''true error rate''' for classifier <math>h</math> is the error with respect to the unknown underlying distribution when predicting a discrete random variable Y from a given input X.


<math>L(h) = P(h(X) \neq Y )</math>




The '''empirical error rate''' (or training error rate) is the error of our classification function <math>h(x)</math> on a given dataset with known outputs (e.g. training data, test data):


<math>\hat{L}_n(h)  =  (1/n) \sum_{i=1}^{n} \mathbf{I}(h(X_i) \neq Y_i)</math>
 
where <math>h</math> is a classifier and <math>\mathbf{I}(\cdot)</math> is an indicator function. The indicator function is defined by


<math>\mathbf{I}(x) = \begin{cases}
1 & \text{if } x \text{ is true} \\
0 & \text{if } x \text{ is false}
\end{cases}</math>
So in this case,
<math>\mathbf{I}(h(X_i)\neq Y_i) = \begin{cases}
1 & \text{if } h(X_i)\neq Y_i \text{ (i.e. misclassification)} \\
0 & \text{if } h(X_i)=Y_i \text{ (i.e. classified properly)}
\end{cases}</math>


For example, suppose we have 100 new data points with known (true) labels

<math>X_1 ... X_{100}</math>

<math>y_1 ... y_{100}</math>

To calculate the empirical error, we count how many times our function <math>h(X_i)</math> classifies incorrectly (i.e. how often <math>h(X_i)</math> does not match <math>y_i</math>) and divide by <math>n=100</math>.
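
As a small Matlab illustration (a sketch only: h is assumed to be a function handle implementing the classification rule and returning one label per column, X a d-by-n data matrix, and y the corresponding 1-by-n vector of true labels):
<pre>
  >> % X: d-by-n data matrix, y: 1-by-n vector of true labels
  >> % h: function handle implementing the classification rule
  >> y_hat = h(X);                          % predicted labels for all n points
  >> L_hat = sum(y_hat ~= y) / length(y);   % empirical error rate
</pre>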


=== Bayes Classifier ===
The principle of the Bayes Classifier is to calculate the posterior probability of a given object from its prior probability via Bayes' Rule, and then assign the object to the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841#Bayes_Classifier </ref>.


First recall Bayes' Rule, in the format

<math>  P(Y|X)  =  \frac{P(X|Y) P(Y)} {P(X)} </math>

For the two-class problem, expanding the denominator over both classes, this becomes

<math>  P(Y=1|X=x)
=  \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1)  +  P(X=x|Y=0) P(Y=0)}</math>


Bayes' rule can be approached by computing either one of the following:


1) '''The posterior''':  <math>\ P(Y=1|X=x) </math> and <math>\ P(Y=0|X=x) </math>  


2) '''The likelihood''': <math>\ P(X=x|Y=1) </math> and <math>\ P(X=x|Y=0) </math>




The former reflects a '''Bayesian''' approach. The Bayesian approach uses previous beliefs and observed data (e.g., the random variable <math>\ X </math>) to determine the probability distribution of the parameter of interest (e.g., the random variable <math>\ Y </math>). The probability, according to Bayesians, is a ''degree of belief'' in the parameter of interest taking on a particular value (e.g., <math>\ Y=1 </math>), given a particular observation (e.g., <math>\ X=x </math>). Historically, the difficulty in this approach lies with determining the posterior distribution. However, more recent methods such as '''Markov Chain Monte Carlo (MCMC)''' allow the Bayesian approach to be implemented <ref name="PCAustin">P. C. Austin, C. D. Naylor, and J. V. Tu, "A comparison of a Bayesian vs. a frequentist method for profiling hospital performance," ''Journal of Evaluation in Clinical Practice'', 2001</ref>.


The latter reflects a '''Frequentist''' approach. The Frequentist approach assumes that the probability distribution (including the mean, variance, etc.) is fixed for the parameter of interest (e.g., the variable <math>\ Y </math>, which is ''not'' random). The observed data (e.g., the random variable <math>\ X </math>) is simply a ''sampling'' of a far larger population of possible observations. Thus, a certain repeatability or ''frequency'' is expected in the observed data. If it were possible to make an infinite number of observations, then the true probability distribution of the parameter of interest can be found. In general, frequentists use a technique called '''hypothesis testing''' to compare a ''null hypothesis'' (e.g. an assumption that the mean of the probability distribution is <math>\ \mu_0 </math>) to an alternative hypothesis (e.g. assuming that the mean of the probability distribution is larger than <math>\ \mu_0 </math>) <ref name="PCAustin"/>. For more information on hypothesis testing see <ref>R. Levy, "Frequency hypothesis testing, and contingency tables" class notes for LING251, Department of Linguistics, University of California, 2007. Available: [http://idiom.ucsd.edu/~rlevy/lign251/fall2007/lecture_8.pdf http://idiom.ucsd.edu/~rlevy/lign251/fall2007/lecture_8.pdf] </ref>.  


There was some class discussion on which approach should be used. Both the ease of computation and the validity of both approaches were discussed. A main point that was brought up in class is that Frequentists consider X to be a random variable, but they do not consider Y to be a random variable because it has to take on one of the values from a fixed set (in the above case it would be either 0 or 1 and there is only one ''correct'' label for a given value X=x). Thus, from a Frequentist's perspective it does not make sense to talk about the probability of Y. This is actually a grey area and sometimes ''Bayesians'' and ''Frequentists'' use each others' approaches. So using ''Bayes' rule'' doesn't necessarily mean you're a ''Bayesian''. Overall, the question remains unresolved.
<math>  P(Y=1|X=x)  =  \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1)  +  P(X=x|Y=0) P(Y=0)}</math>


P(Y=1) : the prior, the probability of <math>Y</math> taking the value 1, based on belief/evidence beforehand


denominator : equivalent to <math>P(X=x)</math>, obtained by summing (marginalizing) over all values of <math>Y</math>; it normalizes the posterior probability
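
As a quick numerical illustration (the likelihood and prior values below are invented for this example only), the posterior follows directly from the two likelihoods and the prior:
<pre>
  >> % invented values: class-conditional likelihoods at x and the prior P(Y=1)
  >> lik1 = 0.8;  lik0 = 0.3;  prior1 = 0.4;
  >> % Bayes' rule: posterior P(Y=1|X=x)
  >> post1 = (lik1*prior1) / (lik1*prior1 + lik0*(1-prior1))   % gives 0.64
</pre>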


<math>h^*(x)  =
\begin{cases}
1 & \text{if } r(x) > \frac{1}{2} \\
0 & \text{otherwise}
\end{cases}
</math>

where <math>\ r(x) = P(Y=1|X=x) </math>.


'''Theorem''':  The Bayes Classifier is optimal, i.e., if <math>h</math> is any other classification rule, then <math>L(h^*) \leq L(h)</math>.


'''Proof''': Consider any classifier <math>h</math>. We can express the error rate as
 
::<math> P( \{h(X) \ne Y \} ) = E_{X,Y} [ \mathbf{1}_{\{h(X) \ne Y \}} ] = E_X \left[ E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] \right] </math>
 
To minimize this last expression, it suffices to minimize the inner expectation. Expanding this expectation:
 
::<math> E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] = \sum_{y \in Supp(Y)} P( Y = y | X) \mathbf{1}_{\{h(X) \ne y \} } </math>
which, in the two-class case, simplifies to

::::<math> =  P( Y = 0 | X) \mathbf{1}_{\{h(X) \ne 0 \} } + P( Y = 1 | X) \mathbf{1}_{\{h(X) \ne 1 \} } </math>
::::<math> = (1-r(X)) \mathbf{1}_{\{h(X) \ne 0 \} } + r(X)\mathbf{1}_{\{h(X) \ne 1 \} } </math>

where <math>r(x) = P(Y=1|X=x)</math> is defined as above. We should choose <math>h(X)</math> to equal the label that minimizes this sum. If <math>r(X)>1/2 </math>, then <math>r(X)>1-r(X)</math>, so we should let <math>h(X) = 1</math>: the sum then equals <math>1-r(X)</math>, the smaller of the two possible values. This is exactly the Bayes rule, so the Bayes classifier is the optimal classifier.
 
Why then do we need other classification methods? Because X densities are often/typically unknown.  I.e., <math>f_k(x)</math> and/or <math>\pi_k</math> unknown.


<math>P(Y=k|X=x)  =  \frac{P(X=x|Y=k)P(Y=k)} {P(X=x)}  =  \frac{f_k(x) \pi_k} {\sum_k f_k(x) \pi_k}</math>


<math>f_k(x)</math>  is referred to as the class conditional distribution (~likelihood).
 
Therefore, we must rely on some data to estimate these quantities.


=== Three Main Approaches ===


'''1. Empirical Risk Minimization''':
Choose a set of classifiers H (e.g., linear, neural network) and find <math>h^* \in H</math>
that minimizes (some estimate of) the true error, L(h).


'''2. Regression''':
Find an estimate <math>\hat{r}(x)</math> of the regression function <math>r(x) = E(Y|X=x) = P(Y=1|X=x)</math> and classify by thresholding it at <math>1/2</math>.

'''3. Density Estimation''':
Estimate  <math>P(X=x|Y=0)</math>  from <math>X_i</math>'s for which <math>Y_i = 0</math>
Estimate  <math>P(X=x|Y=1)</math>  from <math>X_i</math>'s for which <math>Y_i = 1</math>
and let  <math>P(Y=y) = (1/n) \sum_{i=1}^{n} I(Y_i = y)</math>


Define <math>\hat{r}(x) = \hat{P}(Y=1|X=x)</math>  and

<math>\hat{h}(x)  =
\begin{cases}
1 & \text{if } \hat{r}(x) > \frac{1}{2} \\
0 & \text{otherwise}
\end{cases}
</math>


It is possible that there may not be enough data to use ''density estimation'', but the main problem lies with high dimensional spaces, as the estimation results may have a high error rate and sometimes estimation may be infeasible. The term ''curse of dimensionality'' was coined by Bellman <ref>R. E. Bellman, ''Dynamic Programming''. Princeton University Press,
1957</ref> to describe this problem.


As the dimension of the space goes up, the number of data points required for learning increases exponentially.
 
To learn more about methods for handling high-dimensional data see <ref> https://docs.google.com/viewer?url=http%3A%2F%2Fwww.bios.unc.edu%2F~dzeng%2FBIOS740%2Flecture_notes.pdf</ref>


The third approach is the simplest.
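
As a rough Matlab sketch of this third approach (for illustration only: one-dimensional data with Gaussian class-conditional densities is assumed, which is just one possible modelling choice; normpdf requires the Statistics Toolbox):
<pre>
  >> % X: 1-by-n data, y: 1-by-n labels in {0,1}
  >> pi1 = mean(y == 1);  pi0 = 1 - pi1;            % estimated priors
  >> mu0 = mean(X(y==0));  s0 = std(X(y==0));       % Gaussian fit for class 0
  >> mu1 = mean(X(y==1));  s1 = std(X(y==1));       % Gaussian fit for class 1
  >> f0 = @(x) normpdf(x, mu0, s0);  f1 = @(x) normpdf(x, mu1, s1);
  >> r_hat = @(x) f1(x)*pi1 ./ (f1(x)*pi1 + f0(x)*pi0);   % estimated posterior
  >> h_hat = @(x) double(r_hat(x) > 0.5);                 % plug-in classifier
</pre>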


=== Multi-Class Classification ===




''Theorem'':  For <math>Y \in \mathcal{Y} = \{1,2,..., K\} </math>, the optimal rule is


<math>\ h^{*}(x)  =  argmax_k P(Y=k|X=x) </math>   


where  <math>P(Y=k|X=x)  =  \frac{f_k(x) \pi_k} {\sum_r f_r(x) \pi_r}</math>


===Examples of Classification===
* Speech recognition.
* Handwriting recognition.
There are also some interesting reads on Bayes Classification:
* http://esto.nasa.gov/conferences/estc2004/papers/b8p4.pdf (NASA)
* http://www.cmla.ens-cachan.fr/fileadmin/Membres/vachier/Garcia6812.pdf (application to medical images)
* http://www.springerlink.com/content/g221vh5m6744362r/ (Journal of Medical Systems)


== LDA and QDA ==
To derive the decision boundary between two classes under the Bayes Classifier, proceed in the following steps.

1)  Assume Gaussian distributions:


<math>f_k(\mathbf{x})  =  \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_k)^\top \Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) \right)</math>


must compare <math>\ f_1(\mathbf{x}) \pi_1 </math> and <math>\ f_0(\mathbf{x}) \pi_0 </math>.

To find the decision boundary, set
<math>f_1(\mathbf{x}) \pi_1  =  f_0(\mathbf{x}) \pi_0 </math>

<math> \frac{1}{(2\pi)^{d/2} |\Sigma_1|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma_1^{-1}(\mathbf{x}-\boldsymbol{\mu}_1) \right)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma_0|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma_0^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) \right)\pi_0</math>


2) Assume <math>\Sigma_1 = \Sigma_0</math>, so we can use <math>\Sigma = \Sigma_0 = \Sigma_1</math>.

<math> \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1) \right)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) \right)\pi_0</math>


3) Cancel  <math>(2\pi)^{-d/2} |\Sigma|^{-1/2}</math>  from both sides.

<math> \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1) \right)\pi_1 = \exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) \right)\pi_0</math>


4) Take the log of both sides.

<math> -\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_1) + \log(\pi_1) = -\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_0) + \log(\pi_0)</math>


5) Subtract one side from both sides, leaving zero on one side.

<math>-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^\top \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}_1) + \log(\pi_1) - \left[-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_0)^\top \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}_0) + \log(\pi_0)\right]  =  0  </math>


Expanding the quadratic forms gives

<math>\frac{1}{2}\left[-\mathbf{x}^\top \Sigma^{-1}\mathbf{x} - \boldsymbol{\mu}_1^\top \Sigma^{-1} \boldsymbol{\mu}_1  + 2\boldsymbol{\mu}_1^\top \Sigma^{-1} \mathbf{x}
  + \mathbf{x}^\top \Sigma^{-1}\mathbf{x} + \boldsymbol{\mu}_0^\top \Sigma^{-1} \boldsymbol{\mu}_0  - 2\boldsymbol{\mu}_0^\top \Sigma^{-1} \mathbf{x} \right]
  + \log\left(\frac{\pi_1}{\pi_0}\right)  =  0  </math>


Cancelling out the terms quadratic in <math>\mathbf{x}</math> and rearranging results in

<math>\frac{1}{2}\left[-\boldsymbol{\mu}_1^\top \Sigma^{-1} \boldsymbol{\mu}_1  +  \boldsymbol{\mu}_0^\top \Sigma^{-1} \boldsymbol{\mu}_0
  +  (2\boldsymbol{\mu}_1^\top \Sigma^{-1} - 2\boldsymbol{\mu}_0^\top \Sigma^{-1}) \mathbf{x} \right]
  + \log\left(\frac{\pi_1}{\pi_0}\right)  =  0  </math>
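
Since the terms quadratic in <math>\mathbf{x}</math> cancel, the decision boundary is linear. Rearranging the expression above (a restatement of the same equation, not an additional assumption), it can be written as <math>\mathbf{a}^\top\mathbf{x} + b = 0</math> with

<math>\mathbf{a} = \Sigma^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_0), \qquad b = -\frac{1}{2}\boldsymbol{\mu}_1^\top \Sigma^{-1}\boldsymbol{\mu}_1 + \frac{1}{2}\boldsymbol{\mu}_0^\top \Sigma^{-1}\boldsymbol{\mu}_0 + \log\left(\frac{\pi_1}{\pi_0}\right)</math>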








Suppose that <math>\,Y \in \{1,\dots,K\}</math>. If <math>\,f_k(\mathbf{x}) = Pr(X=\mathbf{x}|Y=k)</math> is Gaussian, the Bayes Classifier is
:<math>\,h^*(\mathbf{x}) = \arg\max_{k} \delta_k(\mathbf{x})</math>

where

<math> \,\delta_k(\mathbf{x})  = - \frac{1}{2}\log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + \log (\pi_k) </math>

When the Gaussian covariance matrices are equal, <math>\Sigma_1 = \Sigma_0</math> (as in LDA), this simplifies to

<math> \,\delta_k(\mathbf{x})  = \mathbf{x}^\top\Sigma^{-1}\boldsymbol{\mu}_k - \frac{1}{2}\boldsymbol{\mu}_k^\top\Sigma^{-1}\boldsymbol{\mu}_k + \log (\pi_k) </math>

(To compute this, we need to calculate the value of <math>\,\delta_k </math> for each class, and then take the one with the maximum value.)
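
A minimal Matlab sketch of this rule (the variable names are illustrative only; mu, Sigma and prior are assumed to already hold the estimated class means, covariance matrices and priors):
<pre>
  >> % x: d-by-1 test point; mu, Sigma: cell arrays of class means/covariances; prior: vector
  >> K = numel(prior);
  >> delta = zeros(1, K);
  >> for k = 1:K
  >>     d_k = x - mu{k};
  >>     delta(k) = -0.5*log(det(Sigma{k})) - 0.5*(d_k'/Sigma{k})*d_k + log(prior(k));
  >> end
  >> [~, label] = max(delta);     % predicted class: the k maximizing delta_k
</pre>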
===Computation===


For QDA we need to calculate: <math> \,\delta_k(\mathbf{x})  = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k) </math>


====Case 1====
Let's first consider the case <math>\, \Sigma_k = I, \forall k </math>. This is the case where each distribution is spherical around its mean point.
We have:


<math> \,\delta_k  = - \frac{1}{2}log(|I|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k) </math>


but <math>\ \log(|I|)=\log(1)=0 </math>


and <math>\, (\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) = (\mathbf{x}-\boldsymbol{\mu}_k)^\top(\mathbf{x}-\boldsymbol{\mu}_k) </math> is the [http://en.wikipedia.org/wiki/Euclidean_distance#Squared_Euclidean_Distance squared Euclidean distance] between two points <math>\,\mathbf{x}</math>  and <math>\,\boldsymbol{\mu}_k</math>


Thus, in this case, a new point can be classified by its distance from the centre of a class, adjusted by some prior.
====Case 2====  
When <math>\, \Sigma_k \neq I </math>


Using the [[Singular Value Decomposition(SVD) | Singular Value Decomposition (SVD)]] of <math>\, \Sigma_k</math>, write <math>\, \Sigma_k = U_k S_k U_k^\top </math> (possible since <math>\, \Sigma_k</math> is symmetric), so that <math>\, \Sigma_k^{-1} = U_k S_k^{-1} U_k^\top </math>. Then


:<math>\begin{align}
(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)&= (\mathbf{x}-\boldsymbol{\mu}_k)^\top U_kS_k^{-1}U_k^\top(\mathbf{x}-\boldsymbol{\mu}_k)\\
& = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-1}(U_k^\top \mathbf{x}-U_k^\top \boldsymbol{\mu}_k)\\
& = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-\frac{1}{2}}S_k^{-\frac{1}{2}}(U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k) \\
& = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top I(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\
& = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\
\end{align}
</math>
We can think of <math> \, S_k^{-\frac{1}{2}}U_k^\top </math> as a linear transformation that takes points in class <math>\,k</math> and distributes them spherically around a point, like in Case 1. Thus, when we are given a new point, we can apply the modified <math>\,\delta_k</math> values to calculate <math>\ h^*(\mathbf{x})</math>. After applying the singular value decomposition, <math>\,\Sigma_k^{-1}</math> is effectively replaced by an identity matrix, such that


<math> \,\delta_k  = - \frac{1}{2}log(|I|) - \frac{1}{2}[(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k)] + log (\pi_k) </math>


and <math>\,(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)</math> is the squared [http://en.wikipedia.org/wiki/Mahalanobis_distance Mahalanobis distance] between <math>\,\mathbf{x}</math> and <math>\,\boldsymbol{\mu}_k</math>.
The Mahalanobis distance takes into account the distribution of the data points, whereas the Euclidean distance would treat the data as though it has a spherical distribution. Thus, the Mahalanobis distance applies for the more general classification in [[#Case 2 | Case 2]], whereas the Euclidean distance applies to the special case in [[#Case 1 | Case 1]] where the data distribution is assumed to be spherical.
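
A small Matlab sketch of this whitening idea (Sigma_k and mu_k are assumed to be the estimated covariance matrix and mean of class k; the names are illustrative):
<pre>
  >> % whiten with respect to class k, then measure plain Euclidean distance
  >> [U, S, ~] = svd(Sigma_k);             % Sigma_k = U*S*U' since it is symmetric
  >> W = diag(1./sqrt(diag(S))) * U';      % W = S^(-1/2) * U'
  >> z = W * (x - mu_k);                   % transformed (whitened) point
  >> d2 = z' * z;                          % squared Mahalanobis distance of x from mu_k
</pre>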


Generally, we can conclude that QDA provides a more flexible classifier for the data than LDA, because LDA assumes that the covariance matrix is identical for each class while QDA does not. QDA still uses a Gaussian distribution as the class conditional distribution; in real data this assumption does not always hold, so another distribution may have to be used instead.
 
===The Number of Parameters in LDA and QDA===
 
Both LDA and QDA require us to estimate some parameters. Here is a comparison between the number of parameters needed to be estimated for LDA and QDA:
 
LDA: Since we just need to compare the differences between one given class and the remaining <math>\,K-1</math> classes, there are <math>\,K-1</math> differences in total. For each of them, <math>\,a^{T}x+b</math> requires <math>\,d+1</math> parameters. Therefore, there are <math>\,(K-1)\times(d+1)</math> parameters.

QDA: For each of the differences, <math>\,x^{T}ax + b^{T}x + c</math> requires <math>\frac{1}{2}(d+1)\times d + d + 1 = \frac{d(d+3)}{2}+1</math> parameters. Therefore, there are <math>(K-1)(\frac{d(d+3)}{2}+1)</math> parameters. Thus QDA suffers far more from the curse of dimensionality.
 
[[File:Lda-qda-parameters.png|frame|center|A plot of the number of parameters that must be estimated, in terms of (K-1). The x-axis represents the number of dimensions in the data. As is easy to see, QDA is far less robust than LDA for high-dimensional data sets.]]
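
For example, with <math>\,d=10</math> dimensions and <math>\,K=3</math> classes, LDA requires <math>\,(K-1)(d+1) = 2 \times 11 = 22</math> parameters, while QDA requires <math>\,(K-1)\left(\frac{d(d+3)}{2}+1\right) = 2 \times 66 = 132</math> parameters.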
 
== Trick: Using LDA to do QDA ==
There is a trick that allows us to use the linear discriminant analysis (LDA) algorithm to generate as its output a quadratic function that can be used to classify data. This trick is similar to, but more primitive than, the [http://en.wikipedia.org/wiki/Kernel_trick Kernel trick] that will be discussed later in the course.
 
In this approach the feature vector is augmented with quadratic terms (i.e. new dimensions are introduced onto which the original data are projected). We then apply LDA to the new, higher-dimensional data.

The motivation behind this approach is to take advantage of the fact that fewer parameters have to be calculated in LDA, as explained in previous sections, and therefore to have a more robust system in situations where we have fewer data points.
 
If we look back at the equations for LDA and QDA, we see that in LDA we must estimate <math>\,\mu_1</math>, <math>\,\mu_2</math> and <math>\,\Sigma</math>. In QDA we must estimate all of those, plus another <math>\,\Sigma</math>; the extra <math>\,\frac{d(d+1)}{2}</math> estimations make QDA less robust with fewer data points.
 
=== Theoretically ===
 
Suppose we have a quadratic function to estimate: <math>g(\mathbf{x}) = y = \mathbf{x}^T\mathbf{v}\mathbf{x} + \mathbf{w}^T\mathbf{x}</math>.
 
Using this trick, we introduce two new vectors, <math>\,\hat{\mathbf{w}}</math> and <math>\,\hat{\mathbf{x}}</math> such that:
 
<math>\hat{\mathbf{w}} = [w_1,w_2,...,w_d,v_1,v_2,...,v_d]^T</math>
 
and
 
<math>\hat{\mathbf{x}} = [x_1,x_2,...,x_d,{x_1}^2,{x_2}^2,...,{x_d}^2]^T</math>
 
We can then apply LDA to estimate the new function: <math>\hat{g}(\mathbf{x},\mathbf{x}^2) = \hat{y} =\hat{\mathbf{w}}^T\hat{\mathbf{x}}</math>.
 
Note that we can do this for any <math>\, x</math> and in any dimension; we could extend a <math>D \times n</math> matrix to a quadratic dimension by appending another <math>D \times n</math> matrix with the original matrix squared, to a cubic dimension with the original matrix cubed, or even with a different function altogether, such as a <math>\,\sin(x)</math> dimension. Note that we are not applying QDA, but instead extending LDA to calculate a non-linear boundary that will, in general, be different from the QDA boundary. This algorithm is called nonlinear LDA.
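
A minimal Matlab sketch of this augmentation (X is assumed to be a d-by-n data matrix with labels y; the classify function from the Statistics Toolbox is only one possible way to run LDA on the augmented data):
<pre>
  >> % append squared features: the augmented matrix is 2d-by-n
  >> X_aug = [X; X.^2];
  >> % ordinary LDA on the augmented data (classify expects observations in rows)
  >> y_hat = classify(X_aug', X_aug', y', 'linear');
</pre>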
 
== Principal Component Analysis (PCA) (Lecture: Sep. 27, 2011) ==


'''Principal Component Analysis (PCA)''' is a method of dimensionality reduction/feature extraction that transforms the data from a D dimensional space into a new coordinate system of dimension d, where d <= D (the worst case would be to have d=D).  The goal is to preserve as much of the variance in the original data as possible when switching the coordinate systems. Given data on D variables, the hope is that the data points will lie mainly in a linear subspace of dimension lower than D. In practice, the data will usually not lie precisely in some lower dimensional subspace.




The new variables that form a new coordinate system are called '''principal components''' (PCs). PCs are denoted by <math>\ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_D </math>. The principal components form a basis for the data. Since PCs are orthogonal linear transformations of the original variables, there are at most D PCs. Normally, not all of the D PCs are used but rather a subset of d PCs, <math>\ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_d </math>, to approximate the space spanned by the original data points <math>\ \mathbf{x}=[x_1, x_2, ... , x_D]^T </math>. We can choose d based on what percentage of the variance of the original data we would like to maintain.

The first PC, <math>\ \mathbf{u}_1 </math>, is called the '''first principal component''' and has the maximum variance, thus it accounts for the most significant variance in the data. The second PC, <math>\ \mathbf{u}_2 </math>, is called the '''second principal component''' and has the second highest variance, and so on, down to the last PC, <math>\ \mathbf{u}_D </math>, which has the minimum variance.

Let <math>u_i = \mathbf{w}^T\mathbf{x}_i</math> be the projection of the data point <math>\mathbf{x}_i</math> on the direction of <math>\mathbf{w}</math>, provided <math>\mathbf{w}</math> is of length one, i.e. <math>\mathbf{w}^T\mathbf{w} = 1 </math>. Collecting the projections of all n data points gives <math>\mathbf{u} = \mathbf{w}^T X = (u_1,....,u_n)</math>.

<math>var(u) =\mathbf{w}^T X (\mathbf{w}^T X)^T = \mathbf{w}^T X X^T\mathbf{w} = \mathbf{w}^TS\mathbf{w} \quad </math>
where <math>\quad X X^T =  S </math> is the sample covariance matrix (assuming the data have been centred).

We would like to find the <math>\ \mathbf{w} </math> which gives us maximum variation:

<math>\ \max_{\mathbf{w}} (Var(\mathbf{w}^T \mathbf{x})) = \max_{\mathbf{w}} (\mathbf{w}^T S \mathbf{w}) </math>

Note: we require the constraint <math>\ \mathbf{w}^T \mathbf{w} = 1 </math> because if there is no constraint on the length of <math>\ \mathbf{w} </math> then there is no upper bound. With the constraint, it is the direction, and not the length, that maximizes the variance.








Lagrange multipliers are used to find the maximum or minimum of a function <math>\displaystyle f(x,y)</math> subject to constraint <math>\displaystyle g(x,y)=0</math>  


we define a new constant <math> \lambda</math> called a [http://en.wikipedia.org/wiki/Lagrange_multipliers Lagrange Multiplier] and we form the Lagrangian,<br /><br />
<math>\displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y)</math>
<br /><br />
If <math>\displaystyle f(x^*,y^*)</math> is the max of <math>\displaystyle f(x,y)</math>, there exists <math>\displaystyle \lambda^*</math> such that <math>\displaystyle (x^*,y^*,\lambda^*) </math> is a stationary point of <math>\displaystyle L</math> (partial derivatives are 0).
<br>In addition <math>\displaystyle (x^*,y^*)</math> is a point in which functions <math>\displaystyle f</math> and <math>\displaystyle g</math> touch but do not cross. At this point, the tangents of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel or gradients of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel, such that:
<br /><br />
Solving the system we obtain two stationary points: <math>\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)</math> and <math>\displaystyle (-\sqrt{2}/2,\sqrt{2}/2)</math>. In order to understand which one is the maximum, we just need to substitute it in <math>\displaystyle f(x,y)</math> and see which one has the biggest value. In this case the maximum is <math>\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)</math>.


===Determining w===


Use the Lagrange multiplier conversion to obtain:
<math>\displaystyle L(\mathbf{w}, \lambda) = \mathbf{w}^T S\mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1)</math>  where <math>\displaystyle \lambda </math> is a constant  


Take the derivative and set it to zero:
<math>\displaystyle{\partial L \over{\partial \mathbf{w}}} = 0 </math>




To obtain:  
<math>\displaystyle 2S\mathbf{w} - 2 \lambda \mathbf{w} = 0</math>




Rearrange to obtain:
<math>\displaystyle S\mathbf{w} = \lambda \mathbf{w}</math>




where <math>\displaystyle \mathbf{w}</math> is an eigenvector of <math>\displaystyle S </math> and <math>\ \lambda </math> is the corresponding eigenvalue of <math>\displaystyle S </math>. Since <math>\displaystyle S\mathbf{w}= \lambda \mathbf{w} </math> and <math>\displaystyle \mathbf{w}^T \mathbf{w}=1</math>, we can write


<math>\displaystyle \mathbf{w}^T S\mathbf{w}= \mathbf{w}^T\lambda \mathbf{w}= \lambda \mathbf{w}^T \mathbf{w} =\lambda </math>  


Note that the PCs decompose the total variance in the data in the following way :


<math> \sum_{i=1}^{D} Var(u_i) </math>

<math>= \sum_{i=1}^{D} \lambda_i </math>

<math>\ = Tr(S) </math> (the trace of <math>\ S </math>, since the trace of a matrix equals the sum of its eigenvalues)

<math>= \sum_{i=1}^{D} Var(x_i)</math> (the diagonal entries of the covariance matrix <math>\ S </math> are the variances of the original variables)
== Principal Component Analysis (PCA) Continued (Lecture: Sep. 29, 2011) ==  
== Principal Component Analysis (PCA) Continued (Lecture: Sep. 29, 2011) ==  
As can be seen from the above expressions, <math>\ Var(W^\top X) = W^\top S W= \lambda </math> where lambda is an eigenvalue of the sample covariance matrix <math>\ S </math> and <math>\ W</math> is its corresponding eigenvector. So <math>\ Var(u_i) </math> is maximized if <math>\ \lambda_i </math> is the maximum eigenvalue of <math>\ S </math> and the first principal component (PC) is the corresponding eigenvector. Each successive PC can be generated in the above manner by taking the eigenvectors of <math>\ S </math> that correspond to the eigenvalues:
As can be seen from the above expressions, <math>\ Var(\mathbf{w}^\top X) = \mathbf{w}^\top S \mathbf{w}= \lambda </math> where <math>\ \lambda</math> is an eigenvalue of the sample covariance matrix <math>\ S </math> and <math>\ \mathbf{w}</math> is its corresponding eigenvector. So <math>\ Var(u_i) </math> is maximized if <math>\ \lambda_i </math> is the maximum eigenvalue of <math>\ S </math> and the first principal component (PC) is the corresponding eigenvector. Each successive PC can be generated in the above manner by taking the eigenvectors of <math>\ S</math><ref>www.wikipedia.org/wiki/Eigenvalues_and_eigenvectors</ref> that correspond to the eigenvalues:

<math>\ \lambda_1 \geq ... \geq \lambda_D </math>  
<math>\ \lambda_1 \geq ... \geq \lambda_D </math>  
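
A small Matlab sketch of this eigen-decomposition view of PCA (X is assumed to be a d-by-n data matrix, following the convention used later on this page):
<pre>
  >> Xc = X - repmat(mean(X,2), 1, size(X,2));    % centre the data
  >> S  = Xc*Xc' / (size(X,2)-1);                 % sample covariance matrix
  >> [W, L] = eig(S);                             % columns of W are eigenvectors
  >> [lambda, idx] = sort(diag(L), 'descend');    % order eigenvalues, largest first
  >> W = W(:, idx);                               % W(:,1) is the first principal component
</pre>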
====Reconstruction Error====


<math> e = \sum_{i=1}^{n} || x_i - \hat{x}_i ||^2 </math>


====Minimize Reconstruction Error====
Differentiate with respect to <math>\ y_i </math>:


<math> \frac{\partial e}{\partial y_i} = 0 </math>

We can rewrite the reconstruction error as : <math>\ e = \sum_{i=1}^n(x_i - U_d y_i)^T(x_i - U_d y_i) </math>

<math>\ \frac{\partial e}{\partial y_i} = -2U_d^T(x_i - U_d y_i) = 0 </math>

Since the columns of <math>\ U_d </math> are orthonormal (independent and orthogonal to each other), <math>\ U_d^T U_d = I </math>, and the equation above gives

<math>\ y_i = U_d^T x_i </math>
Substituting <math>\ y_i = U_d^T x_i </math> back into the reconstruction error, the optimal <math>\ U_d </math> is found by solving

<math>\ \min_{U_d} \sum_{i=1}^n  || x_i - U_d U_d^T x_i||^2 </math>


====PCA Implementation Using Singular Value Decomposition====


A unique solution can be obtained by finding the [[Singular Value Decomposition(SVD) | Singular Value Decomposition (SVD)]] of <math>\ X </math>:
<math>\ X = U S V^T </math>


For each rank d, <math>\ U_d </math> consists of the first d columns of <math>\ U </math>. Also, the covariance matrix can be expressed as follows <math>\ S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \mu)(x_i - \mu)^T </math>.


Simply put, by subtracting the mean of each of the data point features and then applying SVD, one can find the principal components:
<math>\ \tilde{X} = X - \mu </math>
<math>\ \tilde{X} = U S V^T </math>


Where <math>\ X </math> is a d by n matrix of data points and the features of each data point form a column in <math>\ X </math>. Also, <math>\ \mu </math> is a d by n matrix with identical columns each equal to the mean of the <math>\ x_i</math>'s, ie <math>\mu_{:,j}=\frac{1}{n}\sum_{i=1}^n x_i </math>. Note that the arrangement of data points is a convention and indeed in Matlab or conventional statistics, the transpose of the matrices in the above formulae is used.


As the <math>\ S </math> matrix from the SVD has the eigenvalues arranged from largest to smallest, the corresponding eigenvectors in the <math>\ U </math> matrix from the SVD will be such that the first column of <math>\ U </math> is the first principal component and the second column is the second principal component and so on.


==== Example 1 ====
Consider a matrix of data points <math>\ X </math> with the dimensions 560 by 1965. 560 is the number of elements in each column. Each column is a vector representation of a 20x28 grayscale pixel image of a face (see image below) and there is a total of 1965 different images of faces. Each of the images is corrupted by noise, but the noise can be removed by projecting the data onto the leading principal components and then reconstructing the images in the original space from only those components (e.g. the first 2, 3, 4 or 5; below, the first 10 are used). The corresponding Matlab commands are shown below:
[[File:FreyFaceExample.PNG|thumb|185px|An example of the face images used in [[#Example 1 | Example 1]] with noise removed. Source: <ref>S. Roweis (2011). ''Data for MATLAB.'' [Online]. Available: [http://cs.nyu.edu/~roweis/data.html http://cs.nyu.edu/~roweis/data.html.] |</ref>]]
<pre style="align:left; width: 75%; padding: 2% 2%">
  >> % start with a 560 by 1965 matrix X that contains the data points
  >> load('noisy.mat');
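  >> 
  >> % (assumed step, following the text above: centre the data by
  >> %  subtracting the mean of each feature before applying SVD)
  >> X = X - repmat(mean(X,2), 1, size(X,2));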
  >>  
  >> % perform SVD; if the X matrix is full rank, we will obtain 560 PCs
  >> [U S V] = svd(X);
  >>  
  >> % project X onto the first ten principal components to obtain a
  >> % 10 by 1965 low-dimensional representation
  >> Y_pca = U(:, 1:10)'*X;
  >> % reconstruct X in the original space using only those ten components
  >> X_hat = U(:, 1:10)*S(1:10, 1:10)*V(:,1:10)';
  >>  
  >> % show image in column 10 of X_hat which is now a 560 by 1965 matrix
  >> imagesc(reshape(X_hat(:,10),20,28)')
</pre>
The reason why the noise is removed in the reconstructed image is because the noise does not create a major variation in a single direction in the original data. Hence, the first ten PCs taken from the <math>\ U </math> matrix are not in the direction of the noise. Thus, reconstructing the image using the first ten PCs will remove the noise.


==== Example 2 ====


==== Example 3 ====
(Not discussed in class) In the news recently was a story that captures some of the ideas behind PCA. Over the past two years, Scott Golder and Michael Macy, researchers from Cornell University, collected 509 million Twitter messages from 2.4 million users in 84 different countries. The data they used were words collected at various times of day and they classified the data into two different categories: positive emotion words and negative emotion words. Then, they were able to study this new data to evaluate subjects' moods at different times of day, while the subjects were in different parts of the world. They found that the subjects generally exhibited positive emotions in the mornings and late evenings, and negative emotions mid-day. They were able to "project their data onto a smaller dimensional space" using PCA. Their paper, "Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures," is available in the journal Science.<ref>http://www.pcworld.com/article/240831/twitter_analysis_reveals_global_human_moodiness.html</ref>
 
Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>
 
==== Example 4 ====
(Not discussed in class) A somewhat well known learning rule in the field of neural networks called Oja's rule can be used to train networks of neurons to compute the principal component directions of data sets. <ref>A Simplified Neuron Model as a Principal Component Analyzer. Erkki Oja. 1982. Journal of Mathematical Biology. 15: 267-273</ref>  This rule is formulated as follows
 
<math>\,\Delta w = \eta yx -\eta y^2w </math>
 
where <math>\,\Delta w </math> is the neuron weight change, <math>\,\eta</math> is the learning rate, <math>\,y</math> is the neuron output given the current input, <math>\,x</math> is the current input and <math>\,w</math> is the current neuron weight.  This learning rule shares some similarities with another method for calculating principal components: power iteration.  The basic algorithm for power iteration (taken from wikipedia: <ref>Wikipedia. http://en.wikipedia.org/wiki/Principal_component_analysis#Computing_principal_components_iteratively</ref>) is shown below
 
 
<math>\mathbf{p} =</math> a random vector
do ''c'' times:
      <math>\mathbf{t} = 0</math> (a vector of length ''m'')
      for each row <math>\mathbf{x} \in \mathbf{X^T}</math>
            <math>\mathbf{t} = \mathbf{t} + (\mathbf{x} \cdot \mathbf{p})\mathbf{x}</math>
      <math>\mathbf{p} = \frac{\mathbf{t}}{|\mathbf{t}|}</math>
return <math>\mathbf{p}</math>
 
Comparing this with the neuron learning rule we can see that the term <math>\, \eta y x </math> is very similar to the <math>\,\mathbf{t}</math> update equation in the power iteration method, and identical if the neuron model is assumed to be linear (<math>\,y(x)=x\mathbf{p}</math>) and the learning rate is set to 1.  Additionally, the <math>\, -\eta y^2w </math> term performs the normalization, the same function as the <math>\,\mathbf{p}</math> update equation in the power iteration method.
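(Not discussed in class) A minimal Matlab sketch of the power iteration pseudocode above is given below; the synthetic data and the number of iterations ''c'' are arbitrary assumptions made for illustration.

 % Power iteration sketch for the first principal direction.
 n = 100; d = 3;
 X = randn(n, d) * diag([3 1 0.2]);     % n observations in d dimensions (rows are data points)
 X = X - repmat(mean(X, 1), n, 1);      % centre the data
 p = randn(d, 1); p = p / norm(p);      % random starting vector
 c = 50;                                % number of iterations (assumed)
 for iter = 1:c
     t = zeros(d, 1);
     for i = 1:n
         x = X(i, :)';                  % one data point as a column vector
         t = t + (x' * p) * x;          % t = t + (x . p) x
     end
     p = t / norm(t);                   % p = t / |t|
 end
 p'                                     % estimated first principal direction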


=== Observations ===
Some observations about the PCA were brought up in class:


* '''PCA''' assumes that data is on a ''linear subspace'' or close to a linear subspace. For non-linear dimensionality reduction, other techniques are used. Amongst the first proposed techniques for non-linear dimensionality reduction are '''Locally Linear Embedding (LLE)''' and '''Isomap'''. More recent techniques include '''Maximum Variance Unfolding (MVU)''' and '''t-Distributed Stochastic Neighbor Embedding (t-SNE)'''. '''Kernel PCAs''' may also be used, but they depend on the type of kernel used and generally do not work well in practice. (Kernels will be covered in more detail later in the course.)
 


* Finding the number of PCs to use is not straightforward. It requires knowledge about the ''intrinsic dimensionality of the data''. In practice, a heuristic approach is often adopted by looking at the eigenvalues ordered from largest to smallest. If there is a "dip" in the magnitude of the eigenvalues, the "dip" is used as a cut-off point and only the large eigenvalues before the "dip" are used. Otherwise, it is possible to add up the eigenvalues from largest to smallest until a certain percentage value is reached. This percentage value represents the percentage of variance that is preserved when projecting onto the PCs corresponding to the eigenvalues that have been added together to achieve the percentage (a short sketch of this cumulative-variance rule is given after this list).


* It is a good idea to normalize the variance of the data before applying PCA. This will avoid PCA finding PCs in certain directions due to the scaling of the data, rather than the real variance of the data.


* PCA can be considered as an unsupervised approach, since the main direction of variation is not known beforehand, i.e. it is not completely certain which dimension the first PC will capture. The PCs found may not correspond to the desired labels for the data set. There are, however, alternate methods for performing supervised dimensionality reduction.


* (Not in class) The traditional PCA method does not work well on data sets that lie on a non-linear manifold. A revised PCA method, called c-PCA, has been introduced to improve the stability and convergence of intrinsic dimension estimation. The approach first finds a minimal cover (a cover of a set X is a collection of sets whose union contains X as a subset<ref>http://en.wikipedia.org/wiki/Cover_(topology)</ref>) of the data set. Since set covering is an NP-hard problem, the approach only finds an approximation of the minimal cover to reduce the complexity of the run time. In each subset of the minimal cover, it applies PCA and filters out the noise in the data. Finally, the global intrinsic dimension can be determined from the variance results of all the subsets. The algorithm produces robust results.<ref>Mingyu Fan, Nannan Gu, Hong Qiao, Bo Zhang, Intrinsic dimension estimation of data by principal component analysis, 2010. Available: http://arxiv.org/abs/1002.2050</ref>


*(Not in class) While PCA finds the mathematically optimal method (as in minimizing the squared error), it is sensitive to outliers in the data, which produce the large errors PCA tries to avoid. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA based on a '''Weighted PCA''' increases robustness by assigning different weights to data objects based on their estimated relevancy.<ref>http://en.wikipedia.org/wiki/Principal_component_analysis</ref>


* (Not in class) Comparison between PCA and LDA: Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two commonly used techniques for data classification and dimensionality reduction. Linear Discriminant Analysis easily handles the case where the within-class frequencies are unequal, and its performance has been examined on randomly generated test data. This method maximizes the ratio of between-class variance to the within-class variance in any particular data set, thereby guaranteeing maximal separability. The prime difference between LDA and PCA is that PCA does more of feature classification while LDA does data classification. In PCA, the shape and location of the original data sets change when transformed to a different space, whereas LDA does not change the location but only tries to provide more class separability and draw a decision region between the given classes. This method also helps to better understand the distribution of the feature data.<ref> Balakrishnama, S., Ganapathiraju, A. LINEAR DISCRIMINANT ANALYSIS - A BRIEF TUTORIAL. http://www.isip.piconepress.com/publications/reports/isip_internal/1998/linear_discrim_analysis/lda_theory.pdf </ref>
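(Not discussed in class) The cumulative-variance heuristic mentioned in the list above can be sketched in a few lines of Matlab; the toy data and the 90% threshold are arbitrary assumptions.

 % Sketch of the cumulative-variance heuristic for choosing the number of PCs.
 X = randn(200, 10) * randn(10, 10);            % 200 observations, 10 features (assumed)
 lambda = sort(eig(cov(X)), 'descend');         % eigenvalues of the sample covariance matrix
 frac = cumsum(lambda) / sum(lambda);           % fraction of variance preserved by the top k PCs
 k = find(frac >= 0.90, 1)                      % smallest k that preserves 90% of the variance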


=== Summary ===
The PCA algorithm can be summarized into the following steps:


# '''Recover basis'''
#: <math>\ \text{ Calculate } XX^T=\Sigma_{i=1}^{t}x_ix_{i}^{T} \text{ and let } U=\text{ eigenvectors of } XX^T \text{ corresponding to the largest } d \text{ eigenvalues.} </math>
# '''Encode training data'''
#: <math>\ \text{Let } Y=U^TX \text{, where } Y \text{ is a } d \times t \text{ matrix of encodings of the original data.} </math>
# '''Reconstruct training data'''
#: <math> \hat{X}=UY=UU^TX </math>.
# '''Encode test example'''
#: <math>\ y = U^Tx \text{ where } y \text{ is a } d\text{-dimensional encoding of } x </math>.
# '''Reconstruct test example'''
#: <math> \hat{x}=Uy=UU^Tx </math>.
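(Not discussed in class) The steps above translate almost directly into Matlab. The following is a minimal sketch for a centred d × t training matrix; the sizes and the number of kept components are illustrative assumptions, and the kept number is called k here to avoid clashing with the data dimension.

 % Sketch of the five PCA steps for a (centred) d x t training matrix X.
 d_dim = 20; t = 100; k = 5;                    % data dimension, number of samples, PCs kept
 X = randn(d_dim, k) * randn(k, t);             % toy training data (d x t)
 [U, D] = eig(X * X');                          % step 1: recover basis from X X^T
 [~, idx] = sort(diag(D), 'descend');
 U = U(:, idx(1:k));                            % eigenvectors of the k largest eigenvalues
 Y = U' * X;                                    % step 2: encode training data (k x t)
 Xhat = U * Y;                                  % step 3: reconstruct training data
 x = randn(d_dim, k) * randn(k, 1);             % a new test example
 y = U' * x;                                    % step 4: encode test example
 xhat = U * y;                                  % step 5: reconstruct test example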
 
=== Dual PCA ===
 
Singular value decomposition allows us to formulate the principal components algorithm entirely in terms of dot products between data points and to limit the direct dependence on the original dimensionality ''d''. Now assume that the dimensionality ''d'' of the ''d × n'' matrix of data X is large (i.e., ''d >> n''). In this case, the algorithm described in the previous sections becomes impractical. We would prefer a run time that depends only on the number of training examples ''n'', or that at least has a reduced dependence on ''d''.
Note that in the SVD factorization <math>\ X = U \Sigma V^T </math>, the eigenvectors in <math>\ U </math> corresponding to non-zero singular values in <math>\ \Sigma </math> (square roots of eigenvalues) are in a one-to-one correspondence with the eigenvectors in <math>\ V </math> .
After performing dimensionality reduction on <math>\ U </math> and keeping only the first ''l'' eigenvectors, corresponding to the top ''l'' non-zero singular values in <math>\ \Sigma </math>, these eigenvectors will still be in a one-to-one correspondence with the first ''l'' eigenvectors in <math>\ V </math> :
 
<math>\ X V = U \Sigma  </math>


<math>\ \Sigma </math> is square and invertible, because its diagonal has non-zero entries. Thus, the following conversion between the top ''l'' eigenvectors can be derived:


<math>\ U = X V \Sigma^{-1} </math>


Now replacing <math>\ U </math> with <math>\ X V \Sigma^{-1} </math> gives us the dual form of PCA.
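(Not discussed in class) A small sketch of the dual computation when ''d >> n'': the eigen-decomposition is done on the n × n matrix <math>\ X^T X </math> and <math>\ U </math> is recovered as <math>\ X V \Sigma^{-1} </math>. The sizes are illustrative assumptions.

 % Dual PCA sketch: when d >> n, work with the n x n matrix X^T X and recover U = X V Sigma^{-1}.
 d = 1000; n = 50; k = 5;                       % illustrative sizes
 X = randn(d, k) * randn(k, n);                 % d x n data matrix with d >> n
 [V, D] = eig(X' * X);                          % n x n eigenproblem instead of d x d
 [lam, idx] = sort(diag(D), 'descend');
 V = V(:, idx(1:k));                            % top k eigenvectors of X^T X
 Sigma = diag(sqrt(lam(1:k)));                  % singular values = square roots of eigenvalues
 U = X * V / Sigma;                             % U = X V Sigma^{-1}
 Y = U' * X;                                    % encodings, exactly as in the primal form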


== Fisher Discriminant Analysis (FDA) (Lecture: Sep. 29, 2011 - Oct. 04, 2011) ==


'''Fisher Discriminant Analysis (FDA)''' is sometimes called ''Fisher Linear Discriminant Analysis (FLDA)'' or just ''Linear Discriminant Analysis (LDA)''. This causes confusion with the [[#LDA | ''Linear Discriminant Analysis (LDA)'']] technique covered earlier in the course, which has a normality assumption and is a boundary-finding technique. The FDA technique outlined here is a supervised feature extraction technique. FDA also differs from PCA because PCA does not use the class labels, <math>\ y_i</math>, of the data <math>\ (x_i,y_i)</math>, while FDA organizes the data into their ''classes'' by finding the direction of maximum separation between classes.




=== PCA ===

PCA finds a rank <math>\ d</math> subspace which minimizes the squared reconstruction error:

<math>\ \sum_i | x_i - \hat{x}_i |^2</math>

where <math>\hat{x}_i </math> is the projection of the original data point <math>\ x_i</math>.

One main drawback of the PCA technique is that the direction of greatest variation may not produce the classification we desire. For example, imagine if the [[#Example 2 | data set]] above had a lighting filter applied to a random subset of the images. Then the greatest variation would be the brightness and not the more important variations we wish to classify. As another example, imagine 2 cigar-like clusters in 2 dimensions, one cigar with <math>y = 1</math> and the other with <math>y = -1</math>. The cigars are positioned in parallel and very closely together, such that the variance in the total data set, ignoring the labels, is in the direction of the cigars. For classification, this would be a terrible projection, because all labels get evenly mixed and we destroy the useful information. A much more useful projection is orthogonal to the cigars, i.e. in the direction of least overall variance, which would perfectly separate the data cases (obviously, we would still need to perform classification in this 1-D space). See the figure below <ref>www.ics.uci.edu/~welling/classnotes/papers_class/Fisher-LDA.pdf</ref>. FDA circumvents this problem by using the labels, <math>\ y_i</math>, of the data <math>\ (x_i,y_i)</math>, i.e. FDA uses ''supervised learning''.

The main difference between FDA and PCA is that in PCA we are interested in transforming the data to a new coordinate system such that the greatest variance of the data lies along the first coordinate, whereas in FDA we project the data of each class onto a point in such a way that the resulting points are as far apart from each other as possible. The FDA goal is achieved by projecting the data onto a suitably chosen line that minimizes the within-class variance and maximizes the distance between the two classes, i.e. it groups similar data together and spreads different data apart. This way, new data acquired can be compared, after a transformation, to these projections using some well-chosen metric.

[[File:Classification.jpg | Two cigar distributions where the direction of greatest variance is not the most useful for classification]]


We first consider the case of two classes. Denote the mean and covariance matrix of class <math>i=0,1</math> by <math>\mathbf{\mu}_i</math> and <math>\mathbf{\Sigma}_i</math> respectively. We transform the data so that it is projected into 1 dimension, i.e. a scalar value. To do this, we compute the inner product of our <math>d \times 1</math>-dimensional data, <math>\mathbf{x}</math>, with a to-be-determined <math>d \times 1</math>-dimensional vector <math>\mathbf{w}</math>. The new means and covariances of the transformed data are:
::<math> \mu'_i :\rightarrow \mathbf{w}^{T}\mathbf{\mu}_i</math>
::<math> \Sigma'_i :\rightarrow \mathbf{w}^{T}\mathbf{\Sigma}_i \mathbf{w}</math>


The new means and variances are actually scalar values now, but we will use vector and matrix notation and arguments throughout the following derivation, as the multi-class case is then just a simple extension.


===Goals of FDA===


As will be shown in the objective function, the goal of FDA is to maximize the separation of the classes (between-class variance) and minimize the scatter within each class (within-class variance). That is, our ideal situation is that the individual classes are as far away from each other as possible and, at the same time, the data within each class are as close to each other as possible (collapsed to a single point in the most extreme case). An interesting note is that R. A. Fisher, after whom FDA is named, used the FDA technique for purposes of taxonomy, in particular for categorizing different species of iris flowers. <ref name="RAFisher">R. A. Fisher, "The Use of Multiple measurements in Taxonomic Problems," ''Annals of Eugenics'', 1936</ref> It is very easy to visualize what is meant by within-class variance (i.e. differences between the iris flowers of the same species) and between-class variance (i.e. the differences between the iris flowers of different species) in that case.


First, we need to reduce the dimensionality of the covariates to one dimension (in the two-class case) by projecting the data onto a line. That is, take the <math>\ d</math>-dimensional input values <math>\ x</math> and project them to one dimension by using <math>z=\mathbf{w}^T \mathbf{x}</math>, where <math>\mathbf{w}^T </math> is 1 by d and <math>\mathbf{x}</math> is d by 1.
 
Goal: choose the vector <math>\mathbf{w}=[w_1,w_2,w_3,...,w_d]^T </math> that best separates the data; we then perform classification with the projected data <math>z</math> instead of the original data <math>\mathbf{x}</math>.
 
 
<math>\hat{{\mu}_0}=\frac{1}{n_0}\sum_{i:y_i=0} x_i</math>
 
<math>\hat{{\mu}_1}=\frac{1}{n_1}\sum_{i:y_i=1} x_i</math>
 
<math>\mathbf{x}\rightarrow\mathbf{w}^{T}\mathbf{x}</math>. <br />
<math>\mathbf{\mu}\rightarrow\mathbf{w}^{T}\mathbf{\mu}</math>.<br />
<math>\mathbf{\Sigma}\rightarrow\mathbf{w}^{T}\mathbf{\Sigma}\mathbf{w}</math> <br />
 
 
 
 
'''1)''' Our '''first''' goal is to minimize the individual classes' covariance. This will help to collapse the data together.  
We have two minimization problems:


::<math>\min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{\Sigma}_0 \mathbf{w}</math>
and  
and  
::<math>\min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{\Sigma}_1 \mathbf{w}</math>.


But these can be combined:
::<math> \min_{\mathbf{w}} \mathbf{w} ^{T}\mathbf{\Sigma}_0 \mathbf{w} + \mathbf{w}^{T} \mathbf{\Sigma}_1 \mathbf{w}</math>
:: <math> = \min_{\mathbf{w}} \mathbf{w} ^{T}( \mathbf{\Sigma_0} + \mathbf{\Sigma_1} ) \mathbf{w}</math>


Define <math> \mathbf{S}_W =\mathbf{\Sigma_0} + \mathbf{\Sigma_1} </math>, called the ''within class variance matrix''.


'''2)''' Our '''second''' goal is to move the minimized classes as far away from each other as possible. One way to accomplish this is to maximize the distances between the means of the transformed data, i.e.


<math> \max_{\mathbf{w}} |\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1|^2 </math>


Simplifying:
::<math> \max_{\mathbf{w}} \,(\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1)^T (\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1) </math> <br/>
::<math> = \max_{\mathbf{w}}\, (\mathbf{\mu}_0-\mathbf{\mu}_1)^{T}\mathbf{w} \mathbf{w}^{T} (\mathbf{\mu}_0-\mathbf{\mu}_1)</math> <br/>
::<math> = \max_{\mathbf{w}} \,\mathbf{w}^{T}(\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T}\mathbf{w}</math>


Define <math> \mathbf{S}_B = (\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} </math>. This matrix, called the ''between class variance matrix'', is a rank 1 matrix, so an inverse does not exist. Altogether, we have two optimization problems we must solve simultaneously:


::1) <math> \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} </math><br/>
::2) <math> \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} </math>


There are other metrics one can use to both minimize the data's variance and maximize the distance between classes, and other goals we can try to accomplish (see metric learning, below...one day), but Fisher used this elegant method, hence his recognition in the name, and we will follow his method.
We can combine the two optimization problems into one after noting that the negative of max is min:


::<math> \max_{\mathbf{w}} \; \alpha \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} - \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} </math><br/>


The <math>\alpha</math> coefficient is a necessary scaling factor: if the scale of one of the terms is much larger than the other, the optimization problem will be dominated by the larger term. This means we have another unknown, <math>\alpha</math>, to solve for. Instead, we can circumvent the scaling problem by looking at the ratio of the quantities, the original solution Fisher proposed:


::<math> \max_{\mathbf{w}} \frac{\mathbf{w}^{T} \mathbf{S_B} \mathbf{w}}{\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} </math>


This optimization problem can be shown to be equivalent to the following optimization problem:


:: <math> \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w}</math> <br />
(optimized function)

subject to:

:: <math> {\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} = 1 </math><br />
(constraint)


A heuristic understanding of this equivalence is that we have two degrees of freedom: direction and scalar. The scalar value is irrelevant to our discussion. Thus, we can set one of the values to be a constant. We can use Lagrange multipliers to solve this optimization problem:


::<math>L( \mathbf{w}, \lambda) = \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} - \lambda(\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}-1)</math>
:: <math> \Rightarrow \frac{\partial L}{\partial \mathbf{w}} = 2 \mathbf{S}_B \mathbf{w} - 2\lambda \mathbf{S}_W\mathbf{w} </math>


Setting this partial derivative equal to zero gives:
::<math> \mathbf{S}_B \mathbf{w} = \lambda \mathbf{S}_W \mathbf{w} </math>
:: <math> \Rightarrow  \mathbf{S}_W^{-1} \mathbf{S}_B \mathbf{w} = \lambda \mathbf{w} </math>
This is a generalized eigenvalue problem, and <math>\  \mathbf{w} </math> can be computed as the eigenvector corresponding to the largest eigenvalue of
:: <math> \mathbf{S}_W^{-1} \mathbf{S}_B </math>


It is very likely that <math> \mathbf{S}_W </math> has an inverse. If not, the pseudo-inverse can be used; in Matlab the pseudo-inverse function is named ''pinv''. Thus, we should choose <math>\mathbf{w}</math> to equal the eigenvector of the largest eigenvalue as our projection vector.


In fact we can simplify the above expression further in the case of two classes. Recall the definition of <math>\mathbf{S}_B = (\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T}</math>. Substituting this into our expression:


::<math> \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} \mathbf{w} = \lambda \mathbf{w} </math>
::<math> (\mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1) ) ((\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} \mathbf{w}) = \lambda \mathbf{w} </math>


This second term is a scalar value, let's denote it <math>\beta</math>. Then
::<math>  \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1)  = \frac{\lambda}{\beta} \mathbf{w} </math>
::<math>  \Rightarrow \, \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1)  \propto \mathbf{w} </math>
<br />
(this equation indicates the direction of the separation).
All we are interested in is the direction of <math>\mathbf{w}</math>, so computing this is sufficient to find our projection vector. This will not carry over directly to higher dimensions, however, since <math>\mathbf{w}</math> would then be a matrix rather than a vector.
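(Not discussed in class) For the two-class case this closed form is easy to compute directly. The following Matlab sketch uses synthetic Gaussian classes; the data and class means are illustrative assumptions.

 % Sketch: two-class FDA direction w proportional to Sw^{-1}(mu0 - mu1), on synthetic data.
 n0 = 100; n1 = 100;
 X0 = randn(n0, 2);                             % class 0 samples (one observation per row)
 X1 = randn(n1, 2) + repmat([3 1], n1, 1);      % class 1 samples, shifted mean
 mu0 = mean(X0, 1)'; mu1 = mean(X1, 1)';
 Sw = cov(X0) + cov(X1);                        % within class variance matrix
 w = pinv(Sw) * (mu0 - mu1);                    % pinv used in case Sw is singular
 w = w / norm(w);                               % only the direction matters
 z0 = X0 * w; z1 = X1 * w;                      % 1-D projections used for classification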
=== Extensions to Multiclass Case ===

If we have <math>\ k</math> classes, we need <math>\ k-1</math> directions, i.e. we need to project <math>\ k</math> 'points' onto a <math>\ k-1</math> dimensional hyperplane. What does this change in our above derivation? The most significant difference is that our projection vector, <math>\mathbf{w}</math>, is no longer a vector but instead is a matrix <math>\mathbf{W}</math>, where <math>\mathbf{W}</math> is a <math>\ d \times (k-1)</math> matrix if <math>\ X</math> is <math>\ d</math>-dimensional. We transform the data as:


::<math> \mathbf{x}' :\rightarrow \mathbf{W}^{T} \mathbf{x}</math>
Following the two-class case, the first optimization problem is then:
::<math>\min_{\mathbf{W}} \mathbf{W}^{T} \mathbf{S}_W \mathbf{W} </math>


Similarly, the second optimization problem is:


::<math>\max_{\mathbf{W}} \mathbf{W}^{T} \mathbf{S}_B \mathbf{W} </math>


What is <math>\mathbf{S}_B</math> in this case? It can be shown that <math>\mathbf{S}_T = \mathbf{S}_B + \mathbf{S}_W </math> where <math> \mathbf{S}_T </math> is the covariance matrix of all the data. From this we can compute <math> \mathbf{S}_B </math>.

Next, if we express <math> \mathbf{W} = ( \mathbf{w}_1 , \mathbf{w}_2 , \dots ,\mathbf{w}_k ) </math>, observe that, for <math> \mathbf{A} = \mathbf{S}_B , \mathbf{S}_W </math>:
 
::<math>  Tr(\mathbf{W}^{T} \mathbf{A} \mathbf{W})  = \mathbf{w}_1^{T} \mathbf{A} \mathbf{w}_1  + \dots +  \mathbf{w}_k^{T}  \mathbf{A} \mathbf{w}_k </math>
 
where <math>\ Tr()</math> is the trace of a matrix. Thus, following the same steps as in the two-class case, we have the new optimization problem:
 
::<math> \max_{\mathbf{W}} \frac{ Tr(\mathbf{W}^{T} \mathbf{S}_B \mathbf{W}) }{Tr(\mathbf{W}^{T} \mathbf{S}_W \mathbf{W})} </math>
 
(It will turn out that the first <math>\ k-1</math> eigenvectors of <math> \mathbf{S}_W^{-1} \mathbf{S}_B </math> give the required <math>\ k-1</math> directions; this is why, for the <math>\ k</math>-class problem, we project the data onto <math>\ k-1</math> directions.)

As in the two-class case, this is equivalent to maximizing <math>\ Tr(\mathbf{W}^{T} \mathbf{S_B} \mathbf{W}) </math>

subject to:

:: <math> Tr( \mathbf{W}^{T} \mathbf{S_W} \mathbf{W}) = 1 </math>
 
Again, in order to solve the above optimization problem, we can use the Lagrange multiplier <ref>
http://en.wikipedia.org/wiki/Lagrange_multiplier </ref>:
 
:: <math>\begin{align}L(\mathbf{W},\Lambda) = Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}] - \Lambda\left\{ Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}] - 1 \right\}\end{align}</math>.
 
where <math>\ \Lambda</math> is a d by d diagonal matrix.
 
Then, differentiating with respect to <math>\mathbf{W}</math>:
 
:: <math>\begin{align}\frac{\partial L}{\partial \mathbf{W}} = (\mathbf{S}_{B} + \mathbf{S}_{B}^{T})\mathbf{W} - \Lambda (\mathbf{S}_{W} + \mathbf{S}_{W}^{T})\mathbf{W}\end{align} = 0</math>.
 
Thus:
 
:: <math>\begin{align}\mathbf{S}_{B}\mathbf{W} = \Lambda\mathbf{S}_{W}\mathbf{W}\end{align}</math>
 
:: <math>\begin{align}\mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{W} = \Lambda\mathbf{W}\end{align}</math>
 
where, <math> \mathbf{\Lambda} =\begin{pmatrix}\lambda_{1} & & 0\\&\ddots&\\0 & &\lambda_{d}\end{pmatrix}</math>
 
The above equation is of the form of an eigenvalue problem. Thus, for the solution, the k-1 eigenvectors corresponding to the k-1 largest eigenvalues should be chosen as the projection matrix, <math>\mathbf{W}</math>. In fact, there should only be k-1 eigenvectors corresponding to k-1 non-zero eigenvalues using the above equation.
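(Not discussed in class) A hedged Matlab sketch of the multiclass solution is given below: compute <math>\ \mathbf{S}_W </math> and <math>\ \mathbf{S}_B </math> from labelled data and take the k-1 leading eigenvectors of <math> \mathbf{S}_W^{-1} \mathbf{S}_B </math>. The three synthetic classes and their means are illustrative assumptions.

 % Sketch of multiclass FDA: take the k-1 leading eigenvectors of Sw^{-1} Sb (k = 3 classes assumed).
 X = [randn(50,3); randn(50,3) + 3; randn(50,3) - 3];   % 150 observations in 3 dimensions
 y = [ones(50,1); 2*ones(50,1); 3*ones(50,1)];          % class labels 1, 2, 3
 k = 3; d = size(X, 2);
 Sw = zeros(d);
 for c = 1:k
     Sw = Sw + cov(X(y == c, :));               % within class variance
 end
 St = cov(X);                                   % covariance of all the data
 Sb = St - Sw;                                  % between class variance, using St = Sb + Sw as above
 [V, D] = eig(pinv(Sw) * Sb);
 [~, idx] = sort(real(diag(D)), 'descend');     % eig of a non-symmetric matrix may return small complex parts
 W = real(V(:, idx(1:k-1)));                    % projection matrix, d x (k-1)
 Z = X * W;                                     % projected data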
 
=== Summary ===
FDA has two optimization problems:
::1) <math> \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} </math><br/>
::2) <math> \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} </math>   
 
where <math>\mathbf{S}_W = \mathbf{\Sigma_1} + \dots + \mathbf{\Sigma_k}</math> is called the within class variance and <math>\ \mathbf{S}_B = \mathbf{S}_T - \mathbf{S}_W </math> is called the between class variance where <math>\mathbf{S}_T </math> is the variance of all the data together.
 
Every column of <math> \mathbf{w} </math> is parallel to a single eigenvector.
 
The two optimization problems are combined as follows:
::<math> \max_{\mathbf{w}} \frac{\mathbf{w}^{T} \mathbf{S_B} \mathbf{w}}{\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} </math>
 
By adding a constraint as shown:
::<math> \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w}</math>
 
subject to:
:: <math> \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} = 1 </math>
 
Lagrange multipliers can be used and essentially the problem becomes an eigenvalue problem:
 
::<math>\begin{align}\mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{w} = \lambda\mathbf{w}\end{align}</math>
 
And <math>\ w </math> can be computed as the k-1 eigenvectors corresponding to the largest k-1 eigenvalues of <math> \mathbf{S}_W^{-1} \mathbf{S}_B </math>.
 
=== Variations ===
 
Some adaptations and extensions exist for the FDA technique (Source: <ref>R. Gutierrez-Osuna, "Linear Discriminant Analysis" class notes for Intro to Pattern Analysis, Texas A&M University. Available: [http://research.cs.tamu.edu/prism/lectures/pr/pr_l10.pdf]</ref>):
 
1) ''Non-Parametric LDA (NPLDA)'' by Fukunaga
 
This method does not assume that the Gaussian distribution is unimodal and it is actually possible to extract more than k-1 features (where k is the number of classes).
 
2) ''Orthonormal LDA (OLDA)'' by Okada and Tomita
 
This method finds projections that are orthonormal in addition to maximizing the FDA objective function. This method can also extract more than k-1 features (where k is the number of classes).
 
3) ''Generalized LDA (GLDA)'' by Lowe
 
This method incorporates additional cost functions into the FDA objective function. This causes classes with a higher cost to be placed further apart in the lower dimensional representation.
 
=== Optical Character Recognition (OCR) using FDA ===
Optical Character Recognition (OCR) is a method to translate scanned, human-readable text into machine-encoded text. In class, we have employed FDA to recognize digits. A paper <ref>Manjunath Aradhya, V.N.,  Kumar, G.H.,  Noushath, S.,  Shivakumara, P., "Fisher Linear Discriminant Analysis based Technique Useful for Efficient Character Recognition", Intelligent Sensing and Information Processing, 2006.</ref> describes the use of FDA to recognize printed documents written in English and Kannada, the fifth most popular language in India. The researchers conducted two types of experiments: one on printed Kannada and English documents and another on handwritten English characters. In the first type of experiment, they conducted four experiments: i) clear and degraded characters in specific fonts; ii) characters in various sizes; iii) characters in various fonts; iv) characters with noise. In experiment i, FDA achieved a 98.2% recognition rate with 12 projection vectors in 21,560 samples. In experiment ii, it achieved a 96.9% recognition rate with 10 projection vectors in 11,200 samples. In experiment iii, it achieved a 93% recognition rate with 17 projection vectors in 19,850 samples. In experiment iv, it achieved a 96.3% recognition rate with 14 projection vectors in 20,000 samples. Overall, the recognition by FDA was very satisfying. In the second type of experiment, a total of 12,400 handwriting samples from 200 different writers were collected. With 175 samples used for training purposes, the recognition rate by FDA was 92% with 35 projection vectors.
 
=== Facial Recognition using FDA ===
 
The Fisherfaces method of facial recognition uses PCA and FDA in a similar way to using PCA alone. However, it is more advantageous than PCA alone because it minimizes variation within each class and maximizes class separation. The PCA-only method is, therefore, more sensitive to lighting and pose variations. In studies done by Belhumeir, Hespanda, and Kiregeman (1997) and Turk and Pentland (1991), this method had a 96% recognition rate. <ref>Bagherian, Elham. Rahmat, Rahmita. Facial Feature Extraction for Face Recognition: a Review. International Symposium on Information Technology, 2008. ITSim2 article number 4631649.</ref>
 
== Linear and Logistic Regression (Lecture: Oct. 06, 2011) ==
 
=== Linear Regression ===
 
Both regression and classification aim to find a function <math>\ h </math> which maps data <math>\ X </math> to a response <math>\ Y </math>. In regression, <math>\ y </math> is a continuous variable. In classification, <math>\ y </math> is a discrete variable. In linear regression, data is modeled using a linear function, and unknown parameters are estimated from the data. Regression problems are easier to formulate into functions (since <math>\ y </math> is continuous) and it is possible to solve classification problems by treating them like regression problems. In order to do so, the requirement in classification that <math>\ y </math> is discrete must first be relaxed. Once <math>\ y </math> has been found using regression techniques, it is possible to determine the discrete class corresponding to the <math>\ y </math> that has been found to solve the original classification problem. The discrete class is obtained by defining a threshold where <math>\ y </math> values below the threshold belong to one class and <math>\ y </math> values above the threshold belong to another class.
 
When running a regression we are making two assumptions,
 
# A linear relationship exists between two variables (i.e. X and Y)
# This relationship is additive (i.e. <math>Y= f_1(x_1) + f_2(x_2) + …+ f_n(x_n)</math>). Technically, linear regression estimates how much Y changes when X changes one unit.
 
 
More formally, a more direct approach to classification is to estimate the regression function <math>\ r(\mathbf{x}) = E[Y | X]</math> without bothering to estimate <math>\ f_k(\mathbf{x}) </math>. For the linear model, we assume that either the regression function <math>r(\mathbf{x})</math> is linear, or that the linear model is a reasonable approximation.
 
Here is a simple example. If <math>\ Y = \{0,1\}</math> (a two-class problem), then <math>\, h^*(\mathbf{x})= \left\{\begin{matrix}
1 &\text{, if }  \hat r(\mathbf{x})>\frac{1}{2}  \\
0 &\mathrm{, otherwise}  \end{matrix}\right.</math>
 
Basically, we can use a linear function
<math>\ f(x, \beta) = y_i = \mathbf{\beta\,}^T \mathbf{x_{i}} + \mathbf{\beta\,_0} </math> , <math>\mathbf{x_{i}}  \in \mathbb{R}^{d}</math>
and use the least squares approach to fit the function to the given data. This is done by minimizing the following expression:
 
<math>\min_{\mathbf{\beta}} \sum_{i=1}^n (y_i - \mathbf{\beta}^T
\mathbf{x_{i}} - \mathbf{\beta_0})^2</math>
 
For convenience, <math>\mathbf{\beta}</math> and <math>\mathbf{\beta}_0</math> can be combined into a d+1 dimensional vector, <math>\tilde{\mathbf{\beta}}</math>. The term ''1'' is appended to <math>\ x </math>. Thus, the function to be minimized can now be re-expressed as:
 
<math>\ LS = \min_{\tilde{\beta}} \sum_{i=1}^{n} (y_i - \tilde{\beta}^T \tilde{x_i} )^2 </math>
 
<math>\ LS = \min_{\tilde{\beta}} || y - X \tilde{\beta} ||^2 </math>
 
where
 
<math>\tilde{\mathbf{\beta}} = \left( \begin{array}{c}\mathbf{\beta_{1}}
 
\\ \\
  \vdots \\ \\
  \mathbf{\beta}_{d} \\ \\
  \mathbf{\beta}_{0} \end{array} \right) \in \mathbb{R}^{d+1}</math> and
 
<math>\tilde{x} = \left( \begin{array}{c}{x_{1}}
 
\\ \\
  \vdots \\ \\
  {x}_{d} \\ \\
  1 \end{array} \right) \in \mathbb{R}^{d+1}</math>.
 
where <math>\tilde{\mathbf{\beta}}</math> is a d+1 by 1 matrix (a d+1 dimensional vector)
 
Here <math>\ y </math> and <math>\tilde{\beta}</math> are vectors and <math>\ X </math> is an n by d+1 matrix where each row represents a data point with a 1 as the last entry. X can also be seen as a matrix in which each column represents a feature and the <math>\ (d+1)^{th} </math> column is an all-one vector corresponding to <math>\ \beta_0 </math>.
 
<math>\ {\tilde{\beta}}</math> that minimizes the error is:
 
<math>\ \frac{\partial LS}{\partial \tilde{\beta}} = -2(X^T)(y-X\tilde{\beta})=0 </math>, which gives us <math>\ {\tilde{\beta}} = (X^TX)^{-1}X^Ty </math>.  When <math>\ X^TX</math> is singular, we have to use the pseudo-inverse to obtain the optimal <math>\ \tilde{\beta}</math>.
 
Using regression to solve classification problems is not mathematically correct, if we want to be true to classification. However, this method works well in practice, if the problem is not complicated. When we have only two classes (for which the target values are encoded as <math>\ \frac{-n}{n_1} </math> and <math>\ \frac{n}{n_2} </math>, where <math>\ n_i</math> is the number of data points in class i and n is the total number of points in the data set) this method is identical to LDA.
 
==== Matlab Example ====
 
The following is the code and the explanation for each step.
 
Again, we use the data in 2_3.m.
  >>load 2_3;
  >>[U, sample] = princomp(X');
  >>sample = sample(:,1:2);
We carry out Principal Component Analysis (PCA) to reduce the dimensionality from 64 to 2.
 
  >>y = zeros(400,1);
  >>y(201:400) = 1;
We let y represent the set of labels coded as 0 and 1.
 
  >>x=[sample';ones(1,400)];
Construct x by adding a row of vector 1 to data.
 
  >>b=inv(x*x')*x*y;
Calculate b, which represents <math>\beta</math> in the linear regression model.
 
  >>x1=x';
  >>for i=1:400
    if x1(i,:)*b>0.5
        plot(x1(i,1),x1(i,2),'.')
        hold on
    elseif x1(i,:)*b < 0.5
        plot(x1(i,1),x1(i,2),'r.')
    end
  end
Plot the fitted y values.
 
[[File: linearregression.png|center|frame| the figure shows that the classification of the data points in 2_3.m by the linear regression model]]
 
==== Practical Usefulness ====
Linear regression in general is not very useful for classification purposes. One of the main problems is that new data may not always have a positive ("more successful") impact on the linear regression learning algorithm due to the non-linear "binary" form of the classes. Consider the following simple example:
 
[[File: linreg1.jpg|center|frame]]
 
The decision boundary at <math>r(x)=0.5</math> was added for visualization purposes. Clearly, linear regression categorizes this data properly. However, consider adding one more datum:
 
[[File: linreg2.jpg|center|frame]]
 
This datum actually skews the linear regression to the point that it misclassifies some of the data points that should be labelled '1'. This shows how linear regression cannot adapt well to binary classification problems.
 
==== General Guidelines for Building a Regression Model ====
 
# Make sure all relevant predictors are included. These are based on your research question, theory and knowledge on the topic.
# Combine those predictors that tend to measure the same thing (i.e. as an index).
# Consider the possibility of adding interactions (mainly for those variables with large effects)
# Strategy to keep or drop variables:
## Predictor not significant and has the expected sign -> Keep it
## Predictor not significant and does not have the expected sign -> Drop it
## Predictor is significant and has the expected sign -> Keep it
## Predictor is significant but does not have the expected sign -> Review, you may need more variables, it may be interacting with another variable in the model or there may be an error in the data.<ref>http://dss.princeton.edu/training/Regression101.pdf</ref>
 
===Logistic Regression===
 
Logistic regression is a more advanced method for classification, and is
more commonly used.
In statistics, logistic regression (sometimes called the logistic model or logit model) is used for predicting the probability of occurrence of an event by fitting data to a logistic curve. It is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical. For example, the probability that a person has a heart attack within a specified time period might be predicted from knowledge of the person's age, sex and body mass index. Logistic regression is used extensively in the medical and social sciences fields, as well as marketing applications such as prediction of a customer's propensity to purchase a product or cease a subscription.<ref>http://en.wikipedia.org/wiki/Logistic_regression</ref>
 
We can define a function <br />
<math>f_1(x)= P(Y=1| X=x) = (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})</math>
[[File:Picture1.png‎|150px|thumb|right|<math>P(Y=1 | X=x)</math>]]
 
 
This is a valid conditional density function since the two components (<math>f_1</math> and <math>f_2</math>, shown just below) sum to 1 and remain in [0, 1].
 
It looks similar to a step function, but
we have relaxed it so that we have a smooth curve, and can therefore take the
derivative.
 
The range of this function is (0,1) since<br /><br/>
<math>\lim_{x \to -\infty}f_1(\mathbf{x}) = 0</math> and
<math>\lim_{x \to \infty}f_1(\mathbf{x}) = 1</math>.
 
As shown on [http://www.wolframalpha.com/input/?i=Plot%5BE^x/%281+%2B+E^x%29,+{x,+-10,+10}%5D%29 this graph] of <math>\ P(Y=1 | X=x) </math>.
 
Then we compute the complement of f1(x), and get<br />
 
<math>f_2(x)= P(Y=0| X=x) = 1-f_1(x) = (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})</math>, denoted <math>f_2</math>.
[[File:Picture2.png‎ |150px|thumb|right|<math>P(Y=0 | X=x)</math>]]
 
 
Function <math>f_2</math> is commonly called the logistic function, and it behaves like <br />
<math>\lim_{x \to -\infty}f_2(\mathbf{x}) = 1</math> and<br />
<math>\lim_{x \to \infty}f_2(\mathbf{x}) = 0</math>.
 
As shown on [http://www.wolframalpha.com/input/?i=Plot%5B1/%281+%2B+E^x%29,+{x,+-10,+10}%5D%29 this graph] of <math>\ P(Y=0 | X=x) </math>.
 
Since <math>f_1</math> and <math>f_2</math> specify the conditional distribution, the Bernoulli distribution is appropriate for specifying the likelihood of the class.  Conveniently coding the two classes via 0 and 1 responses, the likelihood of <math>y_i</math> for a given input <math>x_i</math> is given by,
 
<math>f(y_i|\mathbf{x_i}) = (f_1(\mathbf{x_i}))^{y_i} (1-f_1(\mathbf{x_i}))^{1-y_i} = (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i}</math>
 
Thus y takes value 1 with success probability <math>f_1</math> and value 0 with failure probability <math>1 - f_1</math>.  We can use this to derive the likelihood for N training observations, and search for the maximizing parameter <math>\beta</math>.
 
In general, we can think of the problem as having a box with some knobs. Inside the box is our objective function, which gives the form used to classify our input (<math>x_i</math>) to our output (<math>y_i</math>). The knobs in the box function like the parameters of the objective function. Our job is to find the proper parameters that minimize the error between our output and the true value. So we have turned our machine learning problem into an optimization problem.
 
Since we need to find the parameters that maximize the chance of having our observed data coming from the distribution of <math>f (x|\theta)</math>, we need to introduce Maximum Likelihood Estimation.
 
====Maximum Likelihood Estimation====
 
Suppose we are given iid data points <math>({\mathbf{x}_i})_{i=1}^n</math> and a density function <math>f(\mathbf{x}|\mathbf{\theta})</math>, where the form of <math>f</math> is known but the parameters <math>\theta</math> are unknown. The maximum likelihood estimate <math>\theta\,_{ML}</math> is the set of parameters that maximizes the probability of observing <math>({\mathbf{x}_i})_{i=1}^n</math> given <math>\theta\,_{ML}</math>. For example, we may know that the data come from a Gaussian distribution but we don't know the mean and variance of the distribution.
 
<math>\theta_\mathrm{ML} = \underset{\theta}{\operatorname{arg\,max}}\ f(\mathbf{x}|\theta)</math>.
 
There was some discussion in class regarding the notation. In the literature, Bayesians write <math>f(\mathbf{x}|\mu)</math>, the probability of x given <math>\mu</math>, while Frequentists write <math>f(\mathbf{x};\mu)</math>, the density of x parameterized by <math>\mu</math>. In practice, these two are equivalent.
 
Our goal is to find <math>\theta</math> to maximize
<math>\mathcal{L}(\theta\,) = f(\underline{\mathbf{x}}|\;\theta) = \prod_{i=1}^n f(\mathbf{x_i}|\theta)</math>, where <math>\underline{\mathbf{x}}=\{x_i\}_{i=1}^{n}</math>. (The second equality holds because the data points are iid.)
 
In many cases, it is more convenient to work with the natural logarithm of the likelihood. (Recall that the logarithm preserves minima and maxima.)
<math>\ell(\theta)=\ln\mathcal{L}(\theta\,)</math>
 
<math>\ell(\theta\,)=\sum_{i=1}^n \ln f(\mathbf{x_i}|\theta)</math>
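(Not discussed in class) As a toy illustration of these definitions, the following Matlab sketch estimates the mean of a Gaussian with known variance by maximizing the log-likelihood over a grid; all numbers are made-up assumptions.

 % Toy MLE sketch: recover a Gaussian mean (variance known) by maximizing the log-likelihood on a grid.
 x = 5 + randn(1000, 1);                        % iid samples, true mean 5, variance 1
 mu_grid = linspace(0, 10, 1001);               % candidate parameter values
 ll = zeros(size(mu_grid));
 for j = 1:length(mu_grid)
     ll(j) = sum(-0.5*log(2*pi) - 0.5*(x - mu_grid(j)).^2);   % log-likelihood at mu_grid(j)
 end
 [~, jmax] = max(ll);
 mu_ml = mu_grid(jmax)                          % should be very close to mean(x)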
 
Applying Maximum Likelihood Estimation to <math>f(y|\mathbf{x})= (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})^{y} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})^{1-y}</math>, gives
 
<math>\mathcal{L}(\mathbf{\beta\,})=\prod_{i=1}^n (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i}</math>
 
<math>\ell(\mathbf{\beta\,}) = \sum_{i=1}^n \left[ y_i \ln(P(Y=y_i|X=x_i)) + (1-y_i) \ln(1-P(Y=y_i|X=x_i))\right]
</math>
 
This is the log-likelihood function we want to maximize.  Note that <math>-\ell(\mathbf{\beta\,})</math> can be interpreted as the cost function we want to minimize. Simplifying, we get:
 
<math>\begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) + (1-y_i) (\ln{1} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}))\right) \\[10pt]&{} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) - (1-y_i) \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \\[10pt] &{} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) + y_i \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \\[10pt] &{} = \sum_{i=1}^n \left(y_i {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align}</math>
 
<math>\begin{align} {\frac{\partial \ell}{\partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}} \mathbf{x_i} \right) \\[8pt] & {}= \sum_{i=1}^n \left(y_i \mathbf{x_i}  - P(\mathbf{x_i} | \mathbf{\beta\,}) \mathbf{x_i}\right) \end{align}</math>
 
Now set <math>\frac{\partial \ell}{\partial \mathbf{\beta\,}}</math> equal to 0, and <math> \mathbf{\beta\,} </math> can be numerically solved by Newton's method.
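(Not discussed in class) One way to carry out this numerical solution, consistent with the Newton's method discussion that follows, is iteratively reweighted least squares. The sketch below uses synthetic data and a fixed number of iterations; both are illustrative assumptions rather than the course's exact procedure.

 % Sketch: fitting logistic regression with Newton's method on synthetic data.
 n = 200;
 X = [randn(n/2, 2); randn(n/2, 2) + 2];        % two overlapping classes
 X = [X ones(n, 1)];                            % append a 1 for the intercept term
 y = [zeros(n/2, 1); ones(n/2, 1)];
 beta = zeros(3, 1);
 for iter = 1:10                                % a few iterations usually suffice
     p = 1 ./ (1 + exp(-X * beta));             % P(Y=1|x) under the current beta
     W = diag(p .* (1 - p));                    % weights coming from the second derivative
     beta = beta + (X' * W * X) \ (X' * (y - p));   % Newton update
 end
 beta                                           % fitted coefficients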
 
====Newton's Method====
 
Newton's Method (or the Newton-Raphson method) is a numerical method for finding successively better approximations to the roots of a real-valued function, in cases where the roots cannot easily be found analytically.
 
The goal is to find <math>\mathbf{x}</math> such that <math> f(\mathbf{x}) = 0 </math>; such values of <math>\mathbf{x}</math> are called the roots of the function f. Iteration can be used to solve for x using the following equation:
<math>\mathbf{x_n} = \mathbf{x_{n-1}} - \frac{f(\mathbf{x_{n-1}})}{f'(\mathbf{x_{n-1}})}.\,\!
</math>.
 
It takes an initial guess <math>\mathbf{x_0}</math> and uses the quantity <math>\ \frac{f(x_{n-1})}{f'(x_{n-1})}</math> to move toward a better approximation <math>\mathbf{x_n}</math>. Iterating from the initial guess converges to a value that is sufficiently close to the actual root. Note that the method may converge to only one root, and each function may require multiple starting guesses to find all of its roots.
 
=====Matlab Example=====
 
Below is the Matlab code to find a root of the function <math>\,y=x^2-2500</math> from the initial guess of <math>\,x=90</math>.  The roots of this equation are trivially solved analytically to be <math>\,x=\pm 50</math>. 
 
x=1:100;
y=x.^2 - 2500;  %function to find root of
plot(x,y);
x_opt=90;  %starting guess
x_traversed=[];
y_traversed=[];
error=[];
for i=1:6,
    y_opt=x_opt^2-2500;
    y_prime_opt=2*x_opt;
   
    %save results of each iteration
    x_traversed=[x_traversed x_opt];
    y_traversed=[y_traversed y_opt];
    error=[error abs(y_opt)];
   
    %update minimum
    x_opt=x_opt-(y_opt/y_prime_opt);
end
hold on;
plot(x_traversed,y_traversed,'r','LineWidth',2);
title('Progressions Towards Root of y=x^2 - 2500');
legend('y=x^2 - 2500','Progression');
xlabel('x');
ylabel('y');
hold off;
figure();
semilogy(1:6,error);
title('Error vs Iteration');
xlabel('Iteration');
ylabel('Absolute Y Error');
 
In this example Newton's method converges to the root to within machine precision in only 6 iterations, as can be seen from the plot of the absolute error below.
 
[[File:newton_error.png]]
[[File:newton_progression.png]]
 
===Advantages/Limitation of Linear Regression ===
 
*Linear regression performs well when the relationship between the independent variables and the dependent variable is close to linear.
*Linear regression is often inappropriately used to model non-linear relationships.
*Linear regression is limited to predicting numeric output.
*A lack of explanation about what has been learned can be a problem.
 
 
 
 
 
===Advantages of Logistic Regression===
 
Logistic regression has several advantages over discriminant analysis:
 
* It is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group.
* It does not assume a linear relationship between the independent variables and the dependent variable.
* It may handle nonlinear effects.
* You can add explicit interaction and power terms.
* The dependent variable need not be normally distributed.
* There is no homogeneity of variance assumption.
* Normally distributed error terms are not assumed.
* It does not require that the independent variables be interval.
* It does not require that the independent variables be unbounded.
 
===Comparison Between Logistic Regression And Linear Regression===
 
Linear regression is a regression where the explanatory variable X and response variable Y are linearly related. Both X and Y can be continuous variables, and for every one unit increase in the explanatory variable, there is a set increase or decrease in the response variable Y. A closed form solution exists for the least squares estimate of <math>\beta</math>.
 
Logistic regression is a regression in which the explanatory variable X and the response are related through the (non-linear) logistic function; the model provides the probability of occurrence of an event. X can be continuous but Y must be a categorical variable (e.g., can only assume two values, i.e. 0 or 1). For every one unit increase in the explanatory variable, there is a set increase or decrease in the log-odds of occurrence of the event. No closed form solution exists for the maximum likelihood estimate of <math>\beta</math>.
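
To make the closed-form contrast concrete, the following short Matlab sketch computes the least squares estimate in one line; the names X (design matrix, assumed to already include a column of ones) and y (continuous response) are illustrative assumptions.

 % Sketch: closed-form least squares estimate for linear regression.
 % X is assumed to be an n-by-(d+1) design matrix with a leading column of ones,
 % y an n-by-1 continuous response.
 beta_ls = (X'*X)\(X'*y);   % solves the normal equations in one step
 % No such one-line solution exists for logistic regression; beta must be found
 % iteratively (e.g. with the Newton-Raphson method described later).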
 
 
In terms of making assumptions on the data set: in LDA, we assumed that the class-conditional probability density functions (PDFs) were Gaussian and that the priors were Bernoulli. In Logistic Regression, however, we only assumed a parametric (linear) form for the log-odds of the posterior, and made no assumptions about the priors or the class-conditional densities. Therefore, we may conclude that Logistic Regression makes fewer assumptions than LDA.
 
==Newton-Raphson Method (Lecture: Oct 11, 2011)==
Previously we derived the log-likelihood function for logistic regression.
 
<math>\begin{align} L(\beta\,) = \prod_{i=1}^n \left( (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i}(\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i} \right) \end{align}</math>
 
Taking the logarithm, we have:
 
<math>\begin{align} \ell(\beta\,) = \sum_{i=1}^n \left( y_i \ln{\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}} + (1 - y_i) \ln{\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}} \right) \end{align}</math>
 
This implies that:
 
<math>\begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i \left( {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln(1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}) \right) - (1 - y_i)\ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align}</math>
<math>\begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align}</math>
 
Our goal is to find the <math>\beta\,</math> that maximizes <math>{\ell(\mathbf{\beta\,})}</math>. We do this with calculus, i.e. by solving <math>{\frac{\partial \ell}{\partial \mathbf{\beta\,}}}=0</math>. Since there is no closed-form solution, we use the famous Newton-Raphson numerical method, an iterative method in which we calculate the first and second derivative at each iteration.<br />
<br />
 
====Newton's Method====
Newton's method for root finding is usually implemented as <math>\mathbf{x_{n+1}} = \mathbf{x_n} - \frac{f(\mathbf{x_n})}{f'(\mathbf{x_n})}.\,\!
</math> In our particular case we look for <math>x</math> such that <math>f'(x) = 0</math> (a stationary point of <math>f</math>), so the iteration becomes <math>\mathbf{x_{n+1}} = \mathbf{x_n} - \frac{f'(\mathbf{x_n})}{f''(\mathbf{x_n})}.\,\!
</math>.<br />
In practice, the convergence speed depends on <math>|F'(x^*)|</math>, where <math>F(x) = \mathbf{x} - \frac{f(\mathbf{x})}{f'(\mathbf{x})}\,\!</math> and <math>x^*</math> is the root; the smaller <math>|F'(x^*)|</math> is, the faster the convergence.<br />
<br />
<br />
The first derivative is typically called the score vector.
 
<math>\begin{align} S(\beta\,) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}} \mathbf{x_i} \right) \\[8pt]  \end{align}</math>
 
<math>\begin{align} S(\beta\,) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - P(x_i|\beta) \mathbf{x_i} \right) \\[8pt]  \end{align}</math>
 
where <math>\ P(x_i|\beta) = \frac{e^{\beta^T x_i}}{1+e^{\beta^T x_i}} </math>
 
The negative of the second derivative is typically called the information matrix.
 
<math>\begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})(1 - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) \right) \\[8pt]  \end{align}</math>
 
<math>\begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})(\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) \right) \\[8pt]  \end{align}</math>
 
<math>\begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (P(x_i|\beta))(1 - P(x_i|\beta)) \right) \\[8pt]  \end{align}</math>
 
again where <math>\ P(x_i|\beta) = \frac{e^{\beta^T x_i}}{1+e^{\beta^T x_i}} </math>
 
Applying Newton's method to solve the score equation <math>\,S(\beta)=0</math> gives, in the scalar case, <math>\, \beta\,^{new} \leftarrow \beta\,^{old}-\frac {S(\beta\,^{old})}{S'(\beta\,^{old})} </math><br />
<br />

We then use the following update formula to calculate successively better estimates of the optimal <math>\beta\,</math>. It typically does not matter much what you use as your initial estimate <math>\beta\,^{(1)}</math>, although an improper choice of initial <math>\beta</math> can cause <math>I</math> to be a singular matrix.
 
<math> \beta\,^{(r+1)} {}= \beta\,^{(r)} + (I(\beta\,^{(r)}))^{-1} S(\beta\,^{(r)} )</math>
 
====Matrix Notation====
 
Let <math>\mathbf{y}</math> be a (n x 1) vector of all class labels. This is called the response in other contexts.
 
Let <math>\mathbb{X}</math> be a (n x (d+1)) matrix of all your features. Each row represents a data point. Each column represents a feature/covariate.
 
Let <math>\mathbf{p}^{(r)}</math> be a (n x 1) vector with values <math> P(\mathbf{x_i} |\beta\,^{(r)} ) </math>
 
Let <math>\mathbb{W}^{(r)}</math> be a (n x n) diagonal matrix with <math>\mathbb{W}_{ii}^{(r)} {}= P(\mathbf{x_i} |\beta\,^{(r)} )(1 - P(\mathbf{x_i} |\beta\,^{(r)} ))</math>
 
The score vector, information matrix and update equation can be rewritten in terms of this new matrix notation, so the first derivative is
 
<math>\begin{align} S(\beta\,^{(r)}) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)})\end{align}</math>
 
And the second derivative is
 
<math>\begin{align} I(\beta\,^{(r)}) {}= -{\frac{\partial^{2} \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X} \end{align}</math>
 
Therefore, we can fit the regression problem with the following iteration
 
<math> \beta\,^{(r+1)} {}= \beta\,^{(r)} + (I(\beta\,^{(r)}))^{-1}S(\beta\,^{(r)} ) {}</math>
 
<math> \beta\,^{(r+1)} {}= \beta\,^{(r)} + (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}\mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)})</math>
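
The following minimal Matlab sketch carries out this matrix-form update. It assumes (for illustration) that X is the n-by-(d+1) design matrix with a leading column of ones, y is the n-by-1 vector of 0/1 labels, and that a fixed number of iterations suffices; in practice one would iterate until the change in beta is negligible.

 % Sketch: Newton-Raphson iterations for logistic regression in matrix form.
 % Assumed inputs: X (n-by-(d+1), leading column of ones), y (n-by-1, 0/1 labels).
 beta = zeros(size(X,2), 1);       % initial estimate beta^(1)
 for r = 1:25
     p = 1./(1 + exp(-X*beta));    % p^(r), fitted probabilities P(x_i|beta)
     W = diag(p.*(1 - p));         % W^(r), diagonal weight matrix
     S = X'*(y - p);               % score vector S(beta^(r))
     I = X'*W*X;                   % information matrix I(beta^(r))
     beta = beta + I\S;            % beta^(r+1) = beta^(r) + I^{-1} S
 end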
 
====Iteratively Re-weighted Least Squares====
If we reorganize this updating formula we can see it is really iteratively solving a least squares problem each time with a new weighting.
 
<math>\beta\,^{(r+1)} {}= (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}(\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X}\beta\,^{(r)} + \mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)}))</math>
 
<math>\beta\,^{(r+1)} {}= (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}\mathbb{X}^T\mathbb{W}^{(r)}\mathbf{z}^{(r)}</math>
 
where <math> \mathbf{z}^{(r)} = \mathbb{X}\beta\,^{(r)} + (\mathbb{W}^{(r)})^{-1}(\mathbf{y}-\mathbf{p}^{(r)}) </math>
 
 
Recall that linear regression by least squares finds the following minimum: <math>\ \min_{\beta}(y-X \beta)^T(y-X \beta)</math>
 
Similarly, we can say that <math>\ \beta^{(r+1)}</math> is the solution of a weighted least squares problem in the new space of <math>\ \mathbf{z} </math> (compare the equation for <math>\ \beta^{(r+1)}</math> with the ordinary least squares solution
<math>\ {\tilde{\beta}} = (X^TX)^{-1}X^Ty </math>):

<math>\beta^{(r+1)} \leftarrow \arg \min_{\beta}(\mathbf{z}-X \beta)^T W (\mathbf{z}-X \beta)</math>
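
A single IRLS step can be written directly as this weighted least squares solve. The sketch below reuses the assumed X, y and current beta from the Newton-Raphson sketch above and produces exactly the same update.

 % Sketch: one IRLS step expressed as a weighted least squares solve.
 p    = 1./(1 + exp(-X*beta));
 W    = diag(p.*(1 - p));
 z    = X*beta + W\(y - p);          % adjusted response z^(r)
 beta = (X'*W*X)\(X'*W*z);           % weighted least squares solution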
 
====Fisher Scoring Method====
 
Fisher scoring is a method very similar to Newton-Raphson. It uses the expected information matrix rather than the observed information matrix, a distinction that simplifies the problem and in particular its computational complexity. To learn more about this method and logistic regression in general you can take Stat431/831 at the University of Waterloo.
 
===Multi-class Logistic Regression===
 
In a multi-class logistic regression we have ''K'' classes. For a class ''l'' compared with the reference class ''K'' we have

<math>\frac{P(Y=l|X=x)}{P(Y=K|X=x)} = e^{\beta_l^T x}</math><br />
(this results from
<math>f_1(x)= (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})</math> and <math>f_2(x)=  (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})</math> in the two-class case)

We call the log ratio of the posterior probabilities of two classes ''l'' and ''k'', <math>\log(\frac{P(Y=l|X=x)}{P(Y=k|X=x)}) = (\beta_l-\beta_k)^T x</math>, the logit transformation. The decision boundary between the two classes is the set of points where the logit transformation is 0.
 
For each class from 1 to K-1 we then have:
 
<math>log(\frac{P(Y=1|X=x)}{P(Y=K|X=x)}) = \beta_1^T x</math>
 
<math>log(\frac{P(Y=2|X=x)}{P(Y=K|X=x)}) = \beta_2^T x</math>

<math>\vdots</math>

<math>log(\frac{P(Y=K-1|X=x)}{P(Y=K|X=x)}) = \beta_{K-1}^T x</math>
 
Note that choosing ''Y=K'' is arbitrary and any other choice is equally valid.
 
Based on the above the posterior probabilities are given by: <math>P(Y=k|X=x) = \frac{e^{\beta_k^T x}}{1 + \sum_{i=1}^{K-1}{e^{\beta_i^T x}}}\;\;for \; k=1,\ldots, K-1</math>
 
<math> P(Y=K|X=x)=\frac{1}{1+\sum_{i=1}^{K-1}{e^{\beta_i^T x}}}  </math>
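
A small Matlab sketch of these posterior formulas; the layout is an assumption for illustration: B is a d-by-(K-1) matrix whose k-th column is <math>\beta_k</math>, and x is a d-by-1 point.

 % Sketch: multi-class logistic posteriors at a single point x.
 % Assumed layout: B is d-by-(K-1) with column k holding beta_k; x is d-by-1.
 expo  = exp(B'*x);            % e^{beta_k' x} for k = 1,...,K-1
 denom = 1 + sum(expo);
 post  = [expo; 1]/denom;      % P(Y=k|X=x) for k = 1,...,K (reference class K last)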
 
===Logistic Regression Vs. Linear Discriminant Analysis (LDA)===
 
Logistic Regression Model and Linear Discriminant Analysis (LDA) are widely used for classification. Both models build linear boundaries to classify different groups. Also, the categorical outcome variables (i.e. the dependent variables) must be mutually exclusive.
 
LDA also uses more parameters, as discussed below.
 
However, these two models differ in their basic approach. While Logistic Regression is more relaxed and flexible in its assumptions, LDA assumes that its explanatory variables are normally distributed, linearly related and have equal covariance matrices for each class. Therefore, it can be expected that LDA is more appropriate if the normality assumptions and equal covariance assumption are fulfilled in its explanatory variables. But in all other situations Logistic Regression should be appropriate.
 
 
Also, the total number of parameters to compute is different for Logistic Regression and LDA. If the explanatory variables have d dimensions and there are two classes to categorize, we need to estimate <math>\ d+1</math> parameters in Logistic Regression (all elements of the d by 1 <math>\ \beta </math> vector plus the scalar <math>\ \beta_0 </math>), so the number of parameters grows linearly w.r.t. dimension, while we need to estimate <math>2d+\frac{d*(d+1)}{2}+2</math> parameters in LDA (two Gaussian mean vectors, the shared d by d symmetric covariance matrix, and the two class priors), so the number of parameters grows quadratically w.r.t. dimension.
 
 
Note that the number of parameters also corresponds to the minimum number of observations needed to compute the coefficients of each function. Techniques do exist though for handling high dimensional problems where the number of parameters exceeds the number of observations. Logistic Regression can be modified using shrinkage methods to deal with the problem of having fewer observations than parameters. When maximizing the log likelihood, we can add a <math>-\frac{\lambda}{2}\sum^{K}_{k=1}\|\beta_k\|_{2}^{2}</math> penalization term, where K is the number of classes. The resulting optimization problem is convex and can be solved using the Newton-Raphson method, as given in Zhu and Hastie (2004). LDA involves the inversion of a d x d covariance matrix. When d is bigger than n (where n is the number of observations) this matrix has rank at most n < d and thus is singular. When this is the case, we can either use the pseudo-inverse or perform regularized discriminant analysis (RDA), which solves this problem. In RDA, we define a new covariance matrix <math>\, \Sigma(\gamma) = \gamma\Sigma + (1 - \gamma)diag(\Sigma)</math> with <math>\gamma \in [0,1]</math>. Cross validation can be used to calculate the best <math>\, \gamma</math>. More details on RDA can be found in Guo et al. (2006).
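
As a small illustration of the RDA covariance, assuming the pooled sample covariance Sigma and the tuning parameter gamma are already available:

 % Sketch: regularized covariance used in RDA (Sigma and gamma assumed given).
 Sigma_rda = gamma*Sigma + (1 - gamma)*diag(diag(Sigma));
 % gamma is typically chosen by cross-validation over a grid of values in [0,1].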
 
 
Because the Logistic Regression model has the form <math>\log\frac{f_1(x)}{f_0(x)} = \beta^T{x}</math>, we can clearly see the role of each input variable in explaining the outcome. This is one advantage that Logistic Regression has over other classification methods and is why it is so popular in data analysis.
 
 
In terms of the performance speed, since LDA is non-iterative, unlike Logistic Regression which uses the iterative Newton-Raphson method, LDA can be expected to be faster than Logistic Regression.
 
===Example===
 
(Not discussed in class.)  One application of logistic regression that has recently been used is predicting the winner of NFL games. Previous predictors, like Yards Per Carry (YPC), were used to build probability models for games. Now, the Success Rate (SR), defined as the percentage of runs in which the a team’s point expectancy has improved, is shown to be a better predictor of a team's performance. SR is based on down, distance and yard line and is less susceptible to rare breakaway plays that can be considered outliers. More information can be found at [http://fifthdown.blogs.nytimes.com/2011/09/29/n-f-l-game-probabilities-are-back-with-one-adjustment/].
 
== Perceptron ==
 
[[Image:Perceptron1.png|right|thumb|300px|Simple perceptron]]
[[Image:Perceptron2.png|right|thumb|300px|Simple perceptron where <math>\beta_0</math> is defined as 1]]
 
The perceptron is a simple, yet effective, linear-separator classifier and the building block for neural networks. It was invented by Rosenblatt in 1957 at Cornell Labs, and first mentioned in the paper "The Perceptron - a perceiving and recognizing automaton". The perceptron is used on linearly separable data sets; it computes a linear combination of the input features and returns the sign of the result.
 
For a 2 class problem, and a set of inputs with ''d'' features, a perceptron will use a weighted sum and it will classify the information using the sign of the result (i.e. it uses a step function as its [http://en.wikipedia.org/wiki/Activation_function activation function] ). The figures on the right give an example of a perceptron. In these examples, <math>\ x^i</math> is the ''i''-th feature of a sample and <math>\ \beta_i</math> is the ''i''-th weight. <math>\beta_0</math> is defined as the bias. The bias alters the position of the decision boundary between the 2 classes. From a geometrical point of view, the perceptron assigns label "1" to elements on one side of the vector <math>\ \beta</math> and label "-1" to elements on the other side of <math>\ \beta</math>, where <math>\ \beta</math> is the vector of <math>\ \beta_i</math>s.
 
Perceptrons are generally trained using [http://en.wikipedia.org/wiki/Gradient_descent gradient descent]. This type of learning can have 2 side effects:
* If the data sets are well separated, the training of the perceptron can lead to multiple valid solutions.
* If the data sets are not linearly separable, the learning algorithm will never finish.
 
Perceptrons are the simplest kind of a feedforward neural network. A perceptron is the building block for other neural networks such as '''Multi-Layer Perceptron (MLP)''' which uses multiple layers of perceptrons with nonlinear activation functions so that it can classify data that is not linearly separable.
 
=== History of Perceptrons and Other Neural Models ===
One of the first perceptron-like models is the '''"McCulloch-Pitts Neuron"''' model developed by McCulloch and Pitts in the 1940's <ref> W. Pitts and W. S. McCulloch, "How we know universals: the perception of auditory and visual forms," ''Bulletin of Mathematical Biophysics'', 1947.</ref>. It uses a weighted sum of the inputs that is fed through an activation function, much like the perceptron. However, unlike the perceptron, the weights in the "McCulloch-Pitts Neuron" model are not adjustable, so the "McCulloch-Pitts Neuron" is unable to perform any learning based on the input data.
 
As stated in the introduction of the [[#Perceptron | perceptron]] section, the '''Perceptron''' was developed by Rosenblatt around 1960. Around the same time as the perceptron was introduced, the '''Adaptive Linear Neuron (ADALINE)''' was developed by Widrow <ref name="Widrow"> B. Widrow, "Generalization and information storage in networks of adaline 'neurons'," ''Self Organizing Systems'', 1959.</ref>. The ADALINE differs from the standard perceptron by using the weighted sum (the net) to adjust the weights in the learning phase. The standard perceptron uses the output to adjust its weights (i.e. the net after it passed through the activation function).
 
Since both the perceptron and ADALINE are only able to handle data that is linearly separable '''Multiple ADALINE (MADALINE)''' was introduced <ref name="Widrow"/>. MADALINE is a two layer network to process multiple inputs. Each layer contains a number of ADALINE units. The lack of an appropriate learning algorithm prevented more layers of units to be cascaded at the time and interest in "neural networks" receded until the 1980's when the backpropagation algorithm was applied to neural networks and it became possible to implement the '''Multi-Layer Perceptron (MLP)'''.
 
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, whose 1969 book had summed up a general feeling of frustration with neural networks among researchers and had been accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.<ref>
http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Historical background
</ref>
 
== Perceptron Learning Algorithm (Lecture: Oct. 13, 2011) ==
Like all of the learning methods we have seen, learning in a perceptron model is accomplished by minimizing a cost (or error) function, <math>\phi(\boldsymbol{\beta}, \beta_0)</math>. In the perceptron case, the most natural cost would compare the output (<math>sign(\sum_{i=0}^d \beta_i x^{(i)})</math>) with the target, but this is not differentiable; instead we define the cost function <math>\phi(\boldsymbol{\beta}, \beta_0)</math> as the summation of the distances between all misclassified points and the hyper-plane, or decision boundary. To minimize this cost function, we need to estimate <math>\boldsymbol{\beta}, \beta_0</math>.

<math>\min_{\beta,\beta_0} \phi(\boldsymbol{\beta}, \beta_0)</math> = {sum of the distances of all misclassified points to the decision boundary}
 
The logic is as follows:
 
[[File:hyperplane.png|thumb|250px|right| Distance between the point <math>\ x </math> and the decision boundary hyperplane <math>\ L </math> (black line). Note that the vector <math>\ \beta </math> is orthogonal to the decision boundary hyperplane and that points <math>\ x_0, x_1, x_2 </math> are arbitrary points on the decision boundary hyperplane. ]]
 
'''1)''' Because a hyper-plane <math>\,L</math> can be defined as
 
<math>\, L=\{x: f(x)=\beta^Tx+\beta_0=0\},</math>
 
 
For any two arbitrary points <math>\,x_1 </math> and <math>\,x_2 </math> on <math>\, L</math>, we have
 
<math>\,\beta^Tx_1+\beta_0=0</math>,
 
<math>\,\beta^Tx_2+\beta_0=0</math>,
 
such that 
 
<math>\,\beta^T(x_1-x_2)=0</math>.
 
Therefore, <math>\,\beta</math> is orthogonal to the hyper-plane and it is the normal vector.
 
 
'''2)''' For any point <math>\,x_0</math> in <math>\ L,</math> <math>\,\;\;\beta^Tx_0+\beta_0=0</math>, which means <math>\, \beta^Tx_0=-\beta_0</math>.
 
 
'''3)''' We set <math>\,\beta^*=\frac{\beta}{||\beta||}</math> as the unit normal vector of the hyper-plane<math>\, L</math>. For simplicity we call <math>\,\beta^*</math> norm vector. The distance of point <math>\,x</math> to <math>\ L</math> is given by
 
<math>\,\beta^{*T}(x-x_0)=\beta^{*T}x-\beta^{*T}x_0
                      =\frac{\beta^Tx}{||\beta||}+\frac{\beta_0}{||\beta||}
                      =\frac{(\beta^Tx+\beta_0)}{||\beta||}</math>
 
Where <math>\,x_0</math> is any point on <math>\ L</math>. Hence, <math>\,\beta^Tx+\beta_0</math> is proportional to the distance of the point <math>\,x</math> to the hyper-plane<math>\, L</math>.
 
 
'''4)''' The distance from a misclassified data point <math>\,x_i</math> to the hyper-plane <math>\, L </math> is
 
<math>\,d_i = -y_i(\boldsymbol{\beta}^Tx_i+\beta_0)</math>
 
where <math>\,y_i</math> is the true label of <math>\,x_i</math>; since <math>\,x_i</math> is misclassified, <math>\,y_i=1</math> when <math>\boldsymbol{\beta}^Tx_i+\beta_0<0</math> and <math>\,y_i=-1</math> when <math>\boldsymbol{\beta}^Tx_i+\beta_0>0</math>.

Because we want the distance from the hyperplane to a ''misclassified'' point, we add a negative sign in front: when a data point is misclassified, <math>\boldsymbol{\beta}^Tx_i+\beta_0</math> has the opposite sign of <math>\,y_i</math>, so multiplying by <math>\,-y_i</math> makes the distance <math>\,d_i</math> positive.
 
=== Perceptron Learning using Gradient Descent ===
 
Gradient descent is an optimization method that finds the minimum of an objective function by incrementally updating its parameters in the negative direction of the derivative of this function. That is, it finds the steepest slope in the D-dimensional space at a given point, and descends in the direction of the negative slope. Note that unless the error function is convex, it is possible to get stuck in a local minimum.
In our case, the objective function to be minimized is classification error and the parameters of this function are the weights associated with the inputs, <math>\beta</math> . The gradient descent algorithm updates the weights as follows:
 
<math>\beta^{\mathrm{new}} \leftarrow \beta^{\mathrm{old}} - \rho \frac{\partial Err}{\partial \beta}</math>
 
<math>\rho </math>  is called the ''learning rate''.<br />
The Learning Rate <math> \rho </math> is positively related to the step size of convergence of <math>\min \phi(\boldsymbol{\beta}, \beta_0) </math>. i.e. the larger <math> \rho </math> is, the larger the step size is. Typically, <math>\rho \in [0.1, 0.3]</math>.
 
The classification error is  defined as the distance of misclassified observations to the decision boundary:
 
 
To minimize the cost function <math>\phi(\boldsymbol{\beta}, \beta_0) = -\sum\limits_{i\in M} y_i(\boldsymbol{\beta}^Tx_i+\beta_0)</math> where <math>\ M=\{\text {all points that are misclassified}\}</math> <br>
<math>\cfrac{\partial \phi}{\partial \boldsymbol{\beta}} = - \sum\limits_{i\in M} y_i x_i </math> and <math> \cfrac{\partial \phi}{\partial \beta_0} = -\sum\limits_{i \in M} y_i</math>
 
Therefore, the gradient is
<math>\nabla D(\beta,\beta_0)
= \left( \begin{array}{c} -\displaystyle\sum_{i \in M}y_{i}x_i \\   
  -\displaystyle\sum_{i \in M}y_{i} \end{array} \right)</math>
 
 
Using the gradient descent algorithm to solve these two equations, we have
<math>\begin{pmatrix}
\boldsymbol{\beta}^{\mathrm{new}}\\
\beta_0^{\mathrm{new}}
\end{pmatrix}
=
\begin{pmatrix}
\boldsymbol{\beta}^{\mathrm{old}}\\
\beta_0^{\mathrm{old}}
\end{pmatrix}
+ \rho
\begin{pmatrix}
y_i x_i\\
y_i
\end{pmatrix}</math>
 
 
If the data is linearly separable, the solution is theoretically guaranteed to converge to a separating hyperplane in a finite number of iterations, starting from an arbitrary initial guess
<math>\begin{pmatrix}
\beta^0\\
\beta_0^0
\end{pmatrix}</math>.
In this situation the number of iterations depends on the learning rate and the margin. However, if the data is not linearly separable there is no guarantee that the algorithm converges.
 
Note that we consider the offset term <math>\,\beta_0</math> separately from <math>\ \beta</math> to distinguish this formulation from those in which the direction of the hyperplane (<math>\ \beta</math>) has been considered.
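
A minimal Matlab sketch of this training loop, under the assumption that X is an n-by-d data matrix, y is an n-by-1 vector of labels in {-1,+1}, and a fixed learning rate rho is used; here one misclassified point is used per update, and the loop stops once no point is misclassified or a maximum number of passes is reached.

 % Sketch: perceptron training with the gradient descent update above.
 % Assumed inputs: X (n-by-d), y (n-by-1 with entries in {-1,+1}), learning rate rho.
 beta = zeros(size(X,2), 1);  beta0 = 0;  rho = 0.2;
 for pass = 1:100
     mis = find(y.*(X*beta + beta0) <= 0);   % indices of misclassified points
     if isempty(mis), break; end             % no misclassification: converged
     i = mis(1);                             % update using one misclassified point
     beta  = beta  + rho*y(i)*X(i,:)';       % beta^new  = beta^old  + rho*y_i*x_i
     beta0 = beta0 + rho*y(i);               % beta0^new = beta0^old + rho*y_i
 end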
 
A major concern about gradient descent is that it may get trapped in local optimal solutions. Many works such as [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=00298667 this paper] by ''Cetin et al.'' and [http://indian.cp.eng.chula.ac.th/cpdb/pdf/research/fullpaper/847.pdf this paper] by ''Atakulreka et al.'' have been done to tackle this issue.
 
 
'''Features'''
* A Perceptron can only discriminate between two classes at a time.
* When data is (linearly) separable, there are an infinite number of solutions depending on the starting point.
* Even though convergence to a solution is guaranteed if the solution exists, the finite number of steps until convergence can be very large.
* The smaller the gap between the two classes, the longer the time of convergence.
* When the data is not separable, the algorithm will not converge (it should be stopped after N steps).
* A learning rate that is too high will make the perceptron periodically oscillate around the solution unless additional steps are taken.
* The perceptron computes a linear combination of the input features and returns the sign.
* This was called a perceptron in the engineering literature in the late 1950s.
* Learning rate affects the accuracy of the solution and the number of iterations directly.
 
 
'''Separability and convergence'''
 
The training set D is said to be linearly separable if there exists a positive constant <math>\,\gamma</math> and a weight vector <math>\,\beta</math> such that <math>\,(\beta^Tx_i+\beta_0)y_i>\gamma </math> for all <math>\,1 \leq i \leq n</math>. That is, if we say that <math>\,\beta</math> is the weight vector of the perceptron and <math>\,y_i</math> is the true label of <math>\,x_i</math>, then the signed distance of <math>\,x_i</math> from <math>\,\beta</math> is greater than a positive constant <math>\,\gamma</math> for every <math>\,(x_i, y_i)\in D</math>.
 
 
Novikoff (1962) proved that the perceptron algorithm converges after a finite number of iterations if the data set is linearly separable. The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negative dot product, and thus can be bounded above by <math>O(\sqrt{t})</math>, where <math>t</math> is the number of changes to the weight vector. But it can also be bounded below by <math>\, O(t)</math>, because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector. This can be used to show that the number <math>t</math> of updates to the weight vector is bounded by <math>  (\frac{2R}{\gamma} )^2</math>, where <math>R</math> is the maximum norm of an input vector.<ref>http://en.wikipedia.org/wiki/Perceptron</ref>
 
=== Choosing a Proper Learning Rate ===
[[File:Learning_rate.jpg|500px|thumb|centre|Choosing different learning rates affects the performance of the gradient descent optimization algorithm.]]
 
The choice of the learning rate value will affect the final result of the gradient descent algorithm. If the learning rate is too small, the algorithm takes too long to converge, which can be a problem when time is an important factor. If the learning rate is chosen to be too large, the algorithm may overshoot the optimal point and never converge. In fact, if the step size is too large, larger than twice the largest eigenvalue of the second derivative matrix (Hessian) of the cost function, then gradient steps will go upward instead of downward.
However, the step size is not the only factor that can cause these kinds of situations: even with the same learning rate, different initial values can lead the algorithm to different results. In general it can be said that having some prior knowledge helps in the choice of initial values and learning rate.
 
There are different methods of choosing the step size in a gradient descent optimization problem.  The most common method is choosing a fixed learning rate and finding a proper value for it by trial and error. This is certainly not the most sophisticated method, but it is the easiest.
The learning rate can also be adaptive; that is, the value of the learning rate can be different at each step of the algorithm. This can be an especially helpful approach when one is dealing with online training and non-stationary environments (i.e. when data characteristics vary over time). In such cases the learning rate has to be adapted at each step of the learning algorithm. Different approaches and algorithms for learning rate adaptation can be found in <ref>
V P Plagianakos, G D Magoulas, and M N Vrahatis, Advances in convex analysis and global optimization Pythagorion 2000 (2001), Volume: 54, Publisher: Kluwer Acad. Publ., Pages: 433-444.
</ref>.
 
The learning rate leading to a local error minimum in the error function in one learning step is optimal. <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
 
=== Application of Perceptron: Branch Predictor ===
 
The perceptron can be used for both online and batch learning. Online learning tasks take place in a sequence of trials. In each trial, the learner is given an instance and asked to use its current knowledge to predict a label for the point. The true label of the point is then revealed after the prediction is made, and the learner can use this feedback to improve its belief about the data for future trials.
 
Instruction pipelining is a technique to increase throughput in modern microprocessor architectures. A microprocessor instruction can be broken into several independent steps, so in a single CPU clock cycle several instructions at different stages can be executed at the same time. However, a problem arises with a branch, e.g. an if-else statement: it is not known whether the instructions inside the if or else branch will be executed until the condition has been evaluated, and this stalls the pipeline.
 
A branch predictor is used to address this problem. Using a predictor, the pipelined processor predicts the execution path and speculatively executes instructions in the branch. Neural networks are a good technique for prediction; however, they are expensive to implement in microprocessor hardware. One study investigated the use of the perceptron, which is less expensive and simpler to implement, as the branch predictor. The inputs are the history of binary outcomes of the executed branches, and the output of the predictor is whether a particular branch will be taken. Every time a branch is executed and its true outcome is known, it can be used to train the predictor. The experiments showed that with a 4 Kb hardware budget, a global perceptron predictor has a misprediction rate of 1.94%, a superior accuracy. <ref>Daniel A. Jimenez , Calvin Lin, "Neural Methods for Dynamic Branch Prediction", ACM Transactions on Computer Systems, 2002</ref>
 
== Feed-Forward Neural Networks ==
 
* The term 'neural networks' is used because historically, it was used to describe the processes of the brain (e.g. synapses).
 
* A neural network is a multistage regression model which is typically represented by a network diagram (see right).
[[Image:Feed-Forward_neural_network.png|right|thumb|300px|Feed Forward Neural Network]]
 
* The feedforward neural network was the first and arguably simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.<ref>http://en.wikipedia.org/wiki/Feedforward_neural_network</ref>
 
* For regression, typically k = 1 (the number of nodes in the last layer), there is only one output unit <math>y_1</math> at the end.
 
* For c-class classification, there are typically c units at the end, with the cth unit modelling the probability of class c; each <math>y_c</math> is coded as a 0-1 variable for the cth class.
 
* Neural networks are known as ''universal approximators'', where a two-layer feed-forward neural network can approximate any continuous function to an arbitrary accuracy (assuming sufficient hidden nodes exist and that the necessary parameters for the neural network can be found) <ref name="CMBishop">C. M. Bishop, ''Pattern Recognition and Machine Learning''. Springer, 2006</ref>. It should be noted that fitting training data to a very high accuracy may lead to ''overfitting'', which is discussed later in this course.
 
* Perceptrons are often used as building blocks in feed-forward neural networks: a feed-forward neural network can be viewed as a layered system of interconnected perceptron-like units. By stacking such units in hidden layers with nonlinear activation functions, problems with many classes and data that are not linearly separable can be handled.
 
=== Backpropagation (Finding Optimal Weights) ===
There are many algorithms for calculating the weights in a feed-forward neural network. One of the most used approaches is the backpropagation algorithm. The application of the backpropagation algorithm for neural networks was popularized in the 1980's by researchers like Rumelhart, Hinton and McClelland (even though the backpropagation algorithm had existed before then). <ref>S. Seung, "Multilayer perceptrons and backpropagation learning" class notes for 9.641J, Department of Brain & Cognitive Sciences, MIT, 2002. Available: [http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf] </ref>
 
As the learning part of the network (the first part being feed-forward), backpropagation consists of "presenting an input pattern and changing the network parameters to bring the actual outputs closer to the desired teaching or target values." It is one of the "simplest, most general methods for the supervised training of multilayer neural networks." (pp. 288-289) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
 
For the backpropagation algorithm, we consider three hidden layers of nodes
 
Refer to figure from October 18th lecture where <math>\ l</math> represents the column of nodes in the first column, <br>
<math>\ i</math> represents the column of nodes in the second column, and <br>
<math>\ k</math> represents the column of nodes in the third column. <br>
 
We want the output of the feed forward neural network <math>\hat{y}</math> to be as close to the known target value <math>\ y </math> as possible (i.e. we want to minimize the distance between <math>\ y </math> and <math>\hat{y}</math>). Mathematically, we would write it as:
Minimize <math>(\left| y- \hat{y}\right|)^2</math>
 
Instead of the sign function, which has no derivative, we use the so-called logistic function (a smoothed form of the sign function):
 
<math> \sigma(a)=\frac{1}{1+e^{-a}} </math>
 
 
<blockquote> "Notice that if σ is the identity function, then the entire model collapses to a linear model in the inputs. Hence a neural network can be thought of as a nonlinear generalization of the linear model, both for regression and classification." <ref>Friedman, J.,  Hastie, T. and Tibshirani, R. (2008) “The Elements of Statistical Learning”, 2nd ed, Springer.</ref> </blockquote>
 
 
''Logistic function''  is a common [http://en.wikipedia.org/wiki/Logistic_function sigmoid curve] .It can model the S-curve of growth of some population <math> \sigma</math>. The initial stage of growth is approximately exponential; then, as saturation begins, the growth slows, and at maturity, growth stops.
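
In Matlab the logistic activation and its derivative (used below in backpropagation, where <math>\sigma'(a)=\sigma(a)(1-\sigma(a))</math>) can be sketched as anonymous functions:

 % Sketch: logistic (sigmoid) activation and its derivative.
 sigma  = @(a) 1./(1 + exp(-a));
 dsigma = @(a) sigma(a).*(1 - sigma(a));   % sigma'(a) = sigma(a)*(1 - sigma(a))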
 
 
To solve the optimization problem, we take the derivative with respect to weight <math>u_{il}</math>: <br>
<math>\cfrac{\partial \left|y- \hat{y}\right|^2}{\partial u_{il}} = \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial a_i} \cdot \cfrac{\partial a_i}{\partial u_{il}}</math> by the chain rule, <br>
<math>\cfrac{\partial \left|y- \hat{y}\right|^2}{\partial u_{il}} = \delta_i \cdot z_l </math>

where <math> \delta_i = \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial a_i} </math>, which will be computed recursively, and

<math>\ a_i=\sum_{l}z_lu_{il}</math>

<math>\ z_i=\sigma(a_i)</math>

<math>\ a_j=\sum_{i}z_iu_{ji}</math><br>
 
== Backpropagation Continued (Lecture: Oct. 18, 2011) ==
[[File:Backprop.png|300px|thumb|right|Nodes from three hidden layers within the neural network are considered for the backpropagation algorithm. Each node has been divided into the weighted sum of the inputs <math>\ a </math> and the output of the activation function <math>\ z </math>. The weights between the nodes are denoted by <math>\ u </math>.]]
 
From the figure to the right it can be seen that the input (<math>\ a </math>'s) can be expressed in terms of the weighted sum of the outputs of the previous nodes and output (<math>\ z </math>'s) can be expressed as the input as follows:
 
<math>\ a_i = \sum_l z_l u_{il} </math>
 
<math>\ z_i = \sigma(a_i) </math>
 
 
The goal is to optimize the weights to reduce the L2-norm between the target output values <math>\ y </math> (i.e. the correct labels) and the actual output of the neural network <math>\ \hat{y} </math>:
 
<math>\left(y - \hat{y}\right)^2</math>
 
Since the L2-norm is differentiable, the optimization problem can be tackled by differentiating <math>\left(y - \hat{y}\right)^2</math> with respect to each weight in the hidden layers. By using the chain rule we get:
 
<math>
\cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}}
= \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_i}\cdot
\cfrac{\partial a_i}{\partial u_{il}} = \delta_{i}z_l
</math>
 
where <math>\ \delta_i = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_i} </math>
 
The above equation essentially shows the effect of changes in the input <math>\ a_i </math> on the overall output <math>\ \hat{y} </math> as well as the effect of changes in the weights <math>\ u_{il} </math> on the input <math>\ a_i </math>. In the above equation, <math>\ z_l </math> is a known value (i.e. it can be calculated directly), whereas <math>\ \delta_i </math> is unknown but can be expressed as a recursive definition in terms of <math>\ \delta_j</math>:
 
<math>\delta_i = \cfrac{\partial (y - \hat{y})^2}{\partial a_i} = \sum_{j} \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_j}\cdot \cfrac{\partial a_j}{\partial a_i} </math>
 
<math>\delta_i = \sum_{j}\delta_j\cdot\cfrac{\partial a_j}{\partial z_i}\cdot\cfrac{\partial z_i}{\partial a_i}</math>
 
<math>\delta_i = \sum_{j} \delta_j\cdot u_{ji} \cdot \sigma'(a_i)</math>
 
where <math> \delta_j = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_j}</math>
 
The above equation essentially shows the effect of changes in the input <math>\ a_j </math> on the overall output <math>\ \hat{y} </math> as well as the effect of changes in input <math>\ a_i </math> on the input <math>\ a_j </math>. Note that if <math>\sigma(x)</math> is the sigmoid function, then <math>\sigma'(x) = \sigma(x)(1-\sigma(x))</math>
 
The recursive definition of <math>\ \delta_i </math> can be considered as a cost function at layer <math>i</math> for achieving the original goal of optimizing the weights to minimize <math>\left(y - \hat{y}\right)^2</math>:
 
<math>\delta_i= \sigma'(a_i)\sum_{j}\delta_j \cdot u_{ji}</math>.
 
Now considering <math>\ \delta_k</math> for the output layer:
 
<math>\delta_k= \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_k}</math>.
 
where <math>\,a_k = \hat{y}</math> because an activation function is not applied in the output layer. So, our calculation becomes:
 
<math>\delta_k = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial \hat{y}} </math>
 
<math>\delta_k = -2(y - \hat{y})</math><br />
<math>u_{il} \leftarrow u_{il} - \rho \cfrac{\partial (y - \hat{y})
^2}{\partial u_{il}}</math>
 
Since <math>\ y </math> is known and <math>\ \hat{y} </math> can be computed for each data point (assuming small, random, initial values for the weights of the neural network), <math>\ \delta_k </math> can be calculated and "backpropagated" (i.e. the <math>\ \delta </math> values for the layer before the output layer can be computed using <math>\ \delta_k </math>, then the <math>\ \delta </math> values for the layer before that, and so on). Once all <math>\ \delta </math> values are known, the errors due to each of the weights <math>\ u </math> will be known and techniques like gradient descent can be used to optimize the weights. However, as the cost function for <math>\ \delta_i </math> shown above is not guaranteed to be convex, convergence to a global minimum is not guaranteed. This also means that changing the order in which the training points are fed into the network or changing the initial random values for the weights may lead to finding different results for the optimized weights (i.e. different local minima may be reached).
 
===Overview of Full Backpropagation Algorithm ===
The network weights are updated using the backpropagation algorithm when each training data point <math>\ x</math> is fed into the feed forward neural network (FFNN). This update procedure is done using the following steps (a small Matlab sketch of these steps is given after the list):
*First arbitrarily choose some random weights (preferably close to zero) for your network.
 
*Apply <math>\ x </math> to the FFNN's input layer, and calculate the outputs of all input neurons.
 
*Propagate the outputs of each hidden layer forward, one hidden layer at a time, and calculate the outputs of all hidden neurons.
 
*Once <math>\ x </math> reaches the output layer, calculate the output(s) of all output neuron(s) given the outputs of the previous hidden layer.
 
*At the output layer, compute <math>\,\delta_k = -2(y_k - \hat{y}_k)</math> for each output neuron(s).
 
*Compute each <math> \delta_i </math>, starting from <math>i=k-1</math> all the way to the first hidden layer, where <math>\delta_i= \sigma'(a_i)\sum_{j}\delta_j \cdot u_{ji}</math>.
 
*Compute <math>\cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}} = \delta_{i}z_l</math> for all weights <math>\,u_{il}</math>.
 
*Then update <math>u_{il}^{\mathrm{new}} \leftarrow u_{il}^{\mathrm{old}} - \rho \cdot \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}} </math> for all weights <math>\,u_{il}</math>.
 
*Continue for next data points and iterate on the training set until weights converge.
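
The following Matlab sketch puts these steps together for a network with a single hidden layer of logistic units, an identity output unit and squared error, matching the derivation above (the inputs play the role of the <math>z_l</math>, the hidden units of <math>a_i, z_i</math>, and the output of <math>a_k = \hat{y}</math>). The layer size, learning rate, number of epochs and variable names are illustrative assumptions, and bias terms are omitted to match the notation used in the lecture.

 % Sketch: one-hidden-layer feed-forward network trained by backpropagation.
 % Assumed inputs: X (n-by-d), y (n-by-1); h, rho and the epoch count are illustrative.
 [n, d] = size(X);
 h   = 5;                                % number of hidden units
 rho = 0.1;                              % learning rate
 U1  = 0.01*randn(h, d);                 % input-to-hidden weights (small random start)
 U2  = 0.01*randn(1, h);                 % hidden-to-output weights
 sigma = @(a) 1./(1 + exp(-a));
 for epoch = 1:200
     for i = randperm(n)                 % randomized order within each epoch
         x  = X(i,:)';                   % forward pass
         a1 = U1*x;   z1 = sigma(a1);
         yhat = U2*z1;                   % identity activation at the output layer
         dk = -2*(y(i) - yhat);          % delta at the output node
         d1 = (U2'*dk).*z1.*(1 - z1);    % backpropagated deltas, sigma' = z(1-z)
         U2 = U2 - rho*dk*z1';           % gradient step on hidden-to-output weights
         U1 = U1 - rho*d1*x';            % gradient step on input-to-hidden weights
     end
 end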
 
====Epochs====
It is common to cycle through all of the data points multiple times in order to reach convergence. An epoch represents one cycle in which you feed all of your data points through the neural network. It is good practice to randomize the order in which you feed the points to the neural network within each epoch; this can prevent your weights from changing in cycles. The number of epochs required for convergence depends greatly on the learning rate and convergence requirements used.
 
===Limitations===
*The convergence obtained from backpropagation learning is very slow.
 
*The convergence in backpropagation learning is not guaranteed.
 
*The result may converge to any local minimum on the error surface, since the error surface is generally not convex and stochastic gradient descent can settle in any of its local minima.
 
*Backpropagation learning requires input scaling or normalization. Inputs are usually scaled into the range of +0.1f to +0.9f for best performance.<ref>http://en.wikipedia.org/wiki/Backpropagation</ref>
 
*Numerical problems may be encountered when there are a large number of hidden layers, as the errors at each layer may become very small and vanish.
 
===Deep Neural Network===
 
Increasing the number of units within a hidden layer can increase the "flexibility" of the neural network, i.e. the network is able to fit to more complex functions. Increasing the number of hidden layers on the other hand can increase the "generalizability" of the neural network, i.e. the network is able to generalize well to new data points that it was not trained on. A deep neural network is a neural network with many hidden layers. Deep neural networks were introduced in recent years by the same researchers (Hinton et al. <ref name="HintonDeepNN"> G. E. Hinton, S. Osindero and Y. W. Teh, "A Fast Learning Algorithm for Deep Belief Nets", ''Neural Computation'', 2006. </ref>) that introduced the backpropagation algorithm to neural networks. The increased number of hidden layers in deep neural networks cannot be directly trained using backpropagation, because the errors at each layer will become very small and vanish as stated in the [[#Limitations | limitations]] section. To get around this problem, deep neural networks are trained a few layers at a time (i.e. two layers at a time). This process is still not straightforward as the target values for the hidden layers are not well defined (i.e. it is unknown what the correct target values are for the hidden layers given a data point and a label). ''Restricted Boltzmann Machines (RBM)'' and ''Greedy Learning Algorithms'' have been used to address this issue. For more information about how deep neural networks are trained, please refer to <ref name="HintonDeepNN"/>.  A comparison of various neural network layouts including deep neural networks on a database of handwritten digits can be found at [http://yann.lecun.com/exdb/mnist/ THE MNIST DATABASE].
 
One of the advantages of deep nets is that we can pre-train the network using unlabeled data (unsupervised learning) to obtain initial weights for the final
training step, which uses labeled data (fine-tuning). Since most of the available data are usually unlabeled, this method gives us a much better chance of finding good local optima than if we used only labeled data to train the parameters of the network (the weights). For more details on unsupervised pre-training and learning in deep nets see<ref>
http://jmlr.csail.mit.edu/proceedings/papers/v9/erhan10a/erhan10a.pdf
</ref> , <ref>
http://www.cs.toronto.edu/~hinton/absps/tics.pdf
</ref>
 
An interesting structure of the deep neural network is where the number of nodes in each hidden layer decreases towards the "center" of the network and then increases again. See figure below for an illustration.
 
[[File:DeepNNarchitecture.png|500px|thumb|center|A specific architecture for deep neural networks with a "bottleneck".]]
 
The central part, with the smallest number of nodes in its hidden layer, can be seen as a reduced-dimensional representation of the input data features. It would be interesting to compare the dimensionality reduction effect of this kind of deep neural network to a cascade of PCA.
 
It is known that training DNNs is hard <ref>http://ecs.victoria.ac.nz/twiki/pub/Courses/COMP421_2010T1/Readings/TrainingDeepNNs.pdf</ref> since randomly initializing the network weights and applying gradient descent can find poor local minima. To train DNNs more effectively, [http://ecs.victoria.ac.nz/twiki/pub/Courses/COMP421_2010T1/Readings/TrainingDeepNNs.pdf Exploring Strategies for Training Deep Neural Networks] looks at 3 principles:
# Pre-training one layer at a time in a greedy way,
# Using unsupervised learning at each layer,
# Fine-tuning the whole network with respect to the ultimate criterion.
Their experiments show that by providing hints at each layer for the representation, the weights can be initialized such that a more optimal minimum can be reached.
 
===Applications of Neural Networks===
* Sales forecasting
* Industrial process control
* Customer research
* Data validation
* Risk management
* Target marketing
<ref>
Reference:http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Applications of neural networks
</ref>
 
==Model Selection (Complexity Control)==
 
 
 
Selecting a proper statistical model for a given data set is a well-known problem in pattern recognition and machine learning. Systems with the optimal complexity have a good [http://www.csc.kth.se/~orre/snns-manual/UserManual/node16.html generalization] to yet unobserved data. In the complexity control problem, we are looking for an appropriate model order which gives us the best generalization capability for the unseen data points, while fitting the seen data well. Model complexity here can be defined in terms of over-fitting and under-fitting situations defined in the following section.
 
== Over-fitting and Under-fitting ==
[[File:overfitting-model.png|500px|thumb|right| Example of overfitting and underfitting situations. The blue line is a high-degree polynomial which goes through most of the training data points and gives a very low training error, however has a very poor generalization for the unseen data points. The red line, on the other hand, is underfitted to the training data samples.]]
There are two situations which should be avoided in classification and pattern recognition systems:
#[http://en.wikipedia.org/wiki/Overfitting Overfitting]
#Underfitting
 
In short, overfitting occurs when the model tries to capture every detail of the data, which can happen if the model has too many parameters relative to the number of observations; overfitted models have small training error but large testing error. Underfitting, on the other hand, occurs when the model is not complex enough to capture the structure of the data; underfitted models have a large training error. Underfitting can be common when there is missing data.
 
If there were no noise in the training data, over-fitting would not be a problem: every training data point would lie on the underlying function, and the only goal would be to build a model complex enough to pass through every training data point.
 
However, in the real-world, the training data are [http://en.wikipedia.org/wiki/Statistical_noise noisy], i.e. they tend to not lie exactly on the underlying function, instead they may be shifted to unpredictable locations by random noise. If the model is more complex than what it needs to be in order to accurately fit the underlying function, then it would end up fitting most or all of the training data. Consequently, it would be a poor approximation of the underlying function and have poor prediction ability on new, unseen data.
 
The danger of overfitting is that the model becomes susceptible to predicting values outside of the range of training data. It can cause wild predictions in multilayer perceptrons, even with noise-free data. To avoid Overfitting, techniques such as Cross Validation and Model Comparison might be necessary. The size of the training set is also important. The training set should have a sufficient number of data points which are sampled appropriately, so that it is representative of the whole data space.
 
In a Neural Network, if the number of hidden layers or nodes is too high, the network will have many degrees of freedom and will learn every characteristic of the training data set. That means it will fit the training set very precisely, but will not be able to generalize the commonality of the training set to predict the outcome of new cases.
 
Underfitting occurs when the model we picked to describe the data is not complex enough, and has a high error rate on the training set.
There is always a trade-off. If our model is too simple, underfitting could occur and if it is too complex, overfitting can occur.
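
A small Matlab sketch of this trade-off, fitting polynomials of increasing degree to a noisy sample of an underlying quadratic; the underlying function, noise level and degrees used are illustrative assumptions.

 % Sketch: under- vs over-fitting polynomials on noisy data from y = x^2.
 x_tr = linspace(-1, 1, 15)';   x_te = linspace(-1, 1, 100)';
 f    = @(x) x.^2;
 y_tr = f(x_tr) + 0.1*randn(size(x_tr));   % noisy training sample
 y_te = f(x_te) + 0.1*randn(size(x_te));   % independent test sample
 for deg = [1 2 10]                        % underfit, about right, overfit
     p = polyfit(x_tr, y_tr, deg);
     fprintf('degree %2d: train MSE %.4f, test MSE %.4f\n', deg, ...
         mean((polyval(p, x_tr) - y_tr).^2), mean((polyval(p, x_te) - y_te).^2));
 end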
 
=== Different Approaches for Complexity Control ===
 
We would like to have a classifier that minimizes the true error rate <math>\ L(h)</math>:
 
<math>\ L(h)=Pr\{h(x)\neq y\}</math>
 
<span id="prediction-error">[[File:Prediction_Error.jpg|240px|thumb|right| Model complexity]]</span>
 
Because the true error rate cannot be determined directly in practice, we can try using the empirical true error rate (i.e. training error rate):
 
<math>\ \hat L(h)= \frac{1}{n} \sum_{i=1}^{n} I(h(x_{i}) \neq y_{i})</math>
 
However, the empirical true error rate (i.e. training error rate) is biased downward. Minimizing this error rate does not find the best classifier model, but rather ends up overfitting to the training data. Thus, this error rate cannot be used.<br />
 
The complexity of a fitted model depends on the degree of the fitting function. In the graph, the region to the left of the critical point corresponds to under-fitting; the inaccuracy there results from the low complexity of the fit. The region to the right of the critical point corresponds to over-fitting, because the model does not generalize.<br />
 
As illustrated in the figure to the right, the training error rate is always less than the true error rate, i.e. "biased downward". Also, the training error will always decrease with an increase in the complexity of the model used to fit the data. This does not reflect the behavior of the true error rate. The true error rate will have a unique minimum as the model complexity changes.
 
So, if the training error rate is the only criteria used for picking a model, overfitting can occur. An overfitted model has low training error rate, but is not able to generalize well to new test data points. On the other hand, underfitting can occur when a model that is not complex enough is picked (e.g. using a first order model for data that follows a second order trend). Both training and test error rates will be high in that case. The best choice for the model complexity is where the true error rate reaches its minimum point. Thus, model selection involves ''controlling the complexity'' of the model. The true error rate can be approximated using the test error rate, i.e. the test error follows the same trend that the true error rate does when the model complexity is changed.
In this case, we assume there is a data set <math>\,x_1, . . . ,x_n</math> whose points follow some unknown distribution. In order to characterize this distribution, we can estimate some unknown quantities, such as <math>\,f</math>, the mean <math>\,E(x_i)</math>, the variance <math>\,var(x_i)</math>, and more.
 
To estimate <math>\,f</math>, we use an observation function as our estimator.
 
<math>\hat{f}(x_1,...,x_n)</math>.
 
<math>Bias (\hat{f}) = E(\hat{f}) - f</math>
 
<math>MSE (\hat{f}) = E[(\hat{f} - f)^2]=Variance (\hat f)+Bias^2(\hat f )</math>
 
<math>Variance (\hat{f}) = E[(\hat{f} - E(\hat{f}))^2]</math>
 
If the estimator is unbiased, i.e.

<math>Bias (\hat{f}) = E(\hat{f}) - f=0</math>,

then <math>MSE (\hat{f})=Variance (\hat{f})</math>, so minimizing the MSE amounts to minimizing the variance.

In general, <math> MSE (\hat{f})=Variance (\hat{f})+Bias ^2(\hat{f}) </math>.

Thus, for a given Mean Squared Error (MSE), a low bias forces a high variance and vice versa; this is the bias-variance trade-off.
 
 
 
 
In order to avoid overfitting, there are two main strategies:
 
# ''Estimate the error rate''
## Cross-validation
## Computing an error bound (probability inequality)
# ''Regularization''
## We make the function (model) smooth by limiting its complexity or by limiting the size of the weights.
 
===Cross Validation===
 
[[File:k-fold.png|350px|thumb|right|Graphical illustration of 4-fold cross-validation. V is the part used for validation and T is used for training.]]
 
Cross-validation is an approach for avoiding overfitting while modelling data that bases the choice of model parameters on a portion of the training set, while using the rest of the set for validation, i.e., some of the data is left out when fitting the model. One round of the process involves partitioning the data set into two complementary subsets, fitting the model to one subset (called the training set), and testing the model against the other subset (called the validation or testing subset). This is usually repeated several times using different partitions in order to reduce variability, and the validation results are then averaged over the rounds.
 
====LOO: Leave-one-out cross-validation ====
When the dataset is very small, leaving one tenth out depletes our data too much, but making the validation set too small makes the estimate of the true error unstable (noisy). One solution is to do a kind of round-robin validation: for each complexity setting, learn a classifier on all the training data minus one example and evaluate its error on the remaining example. Leave-one-out error is defined as:
                     
'''LOO error''': <math>\frac {1}{n} \sum_{i=1}^{n} I\left(h(x_i; D_{-i})\neq y_i\right)</math>
where <math>D_{-i}</math> is the dataset minus the i-th example and <math>h(x_i; D_{-i})</math> is the classifier learned on <math>D_{-i}</math>. LOO error is an unbiased estimate of the error of our learning algorithm (for a given complexity setting) when given <math>n-1</math> examples.
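
For concreteness, a minimal MATLAB sketch of computing the LOO error is given below. The inputs (<code>X</code> as an n-by-d matrix, <code>y</code> as an n-by-1 label vector) and the choice of a simple 1-nearest-neighbour rule as the classifier <math>h</math> are illustrative assumptions, not part of the notes above.

 % Illustrative sketch: X, y are assumed inputs; h is a 1-nearest-neighbour rule.
 n = size(X, 1);
 loo_err = 0;
 for i = 1:n
     idx = [1:i-1, i+1:n];                              % D_{-i}: leave example i out
     d2 = sum(bsxfun(@minus, X(idx,:), X(i,:)).^2, 2);  % squared distances to x_i
     [~, j] = min(d2);                                  % nearest neighbour in D_{-i}
     loo_err = loo_err + (y(idx(j)) ~= y(i));           % indicator of a misclassification
 end
 loo_err = loo_err / n                                  % LOO error estimate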
 
====K-Fold Cross Validation====
 
Instead of minimizing the training error, here we minimize the validation error.<br />
 
A common type of cross-validation that is used for relatively small data sets is K-fold cross-validation, the algorithm for which can be stated as follows:
 
Let h denote a classification model to be fitted to a given data set.
 
# Randomly partition the original data set into K subsets of approximately the same size. A common choice for K is K = 10.
# For k = 1 to K do the following
## Remove subset k from the data set
## Estimate the parameters of each different classification model based only on the remaining data points. Denote the resulting function by h(k)
## Use h(k) to predict the data points in subset k. Denote by <math>\begin{align}\hat L_k(h)\end{align}</math> the observed error rate.
# Compute the average error <math>\hat L(h) = \frac{1}{K} \sum_{k=1}^{K} \hat L_k(h)</math>
 
The best classifier is the model that results in the lowest average error rate.
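
As an illustration, here is a minimal MATLAB sketch of the algorithm above, used to choose a polynomial degree. The data (<code>x</code>, <code>y</code> as n-by-1 vectors), the candidate degrees, and the squared-error loss are illustrative assumptions.

 % Illustrative sketch: x, y are assumed inputs; candidate models are polynomials.
 n = numel(x);  K = 10;                        % a common choice is K = 10
 perm = randperm(n);                           % random partition of the data
 fold = mod(0:n-1, K) + 1;                     % fold label for each shuffled point
 degrees = 1:5;                                % candidate models h
 cv_err = zeros(size(degrees));
 for d = degrees
     errs = zeros(1, K);
     for k = 1:K
         test_idx  = perm(fold == k);          % subset k, removed from the data
         train_idx = perm(fold ~= k);          % remaining data points
         p = polyfit(x(train_idx), y(train_idx), d);  % estimate h(k)
         yhat = polyval(p, x(test_idx));               % predict the points in subset k
         errs(k) = mean((y(test_idx) - yhat).^2);      % observed error for fold k
     end
     cv_err(d) = mean(errs);                   % average error over the K folds
 end
 [~, best_degree] = min(cv_err)                % model with the lowest average error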
 
A common variation of k-fold cross-validation uses a single observation from the original sample as the validation data, and the remaining observations as the training data. This is then repeated such that each sample is used once for validation. It is the same as a K-fold cross-validation with K being equal to the number of points in the data set, and is referred to as leave-one-out cross-validation. <ref> stat.psu.edu/~jiali/course/stat597e/notes2/percept.pdf</ref>
 
====Alternatives to Cross Validation for model selection:====
# Akaike Information Criterion (AIC): This approach ranks models by their AIC values. The model with the minimum AIC is chosen. The formula for the AIC value is: <math>AIC = 2k - 2\log(L_{max})</math>, where <math>k</math> is the number of parameters and <math>L_{max}</math> is the maximum value of the likelihood function of the model. This selection method penalizes the number of parameters.<ref>http://en.wikipedia.org/wiki/Akaike_information_criterion</ref>
# Bayesian Information Criterion (BIC): It is similar to AIC but penalizes the number of parameters even more. The formula for the BIC value is: <math>BIC = k\log(n) - 2\log(L_{max})</math>, where <math>n</math> is the sample size.<ref>http://en.wikipedia.org/wiki/Bayesian_information_criterion</ref> A short computational sketch of both criteria follows the list.
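
The minimal MATLAB sketch below computes both criteria for a family of polynomial models. The data (<code>x</code>, <code>y</code>), the candidate degrees, and the Gaussian-noise assumption (which gives a closed-form maximized log-likelihood) are illustrative assumptions.

 % Illustrative sketch: x, y are assumed inputs; Gaussian noise is assumed so that
 % the maximized log-likelihood has a closed form.
 n = numel(y);
 degrees = 1:5;                                  % candidate models
 aic = zeros(size(degrees));  bic = zeros(size(degrees));
 for d = degrees
     p = polyfit(x, y, d);                       % least-squares fit of degree d
     res = y - polyval(p, x);                    % residuals
     sigma2 = mean(res.^2);                      % MLE of the noise variance
     logL = -n/2 * (log(2*pi*sigma2) + 1);       % maximized Gaussian log-likelihood
     k = d + 2;                                  % d+1 coefficients plus the noise variance
     aic(d) = 2*k - 2*logL;
     bic(d) = k*log(n) - 2*logL;
 end
 [~, best_by_aic] = min(aic)                     % model with the minimum AIC
 [~, best_by_bic] = min(bic)                     % model with the minimum BIC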
 
== Model Selection Continued (Lecture: Oct. 20, 2011) ==
 
=== Error Bound Computation ===
Apart from cross validation, another approach for estimating the error rates of different models is to find a bound to the error. This works well theoretically to compare different models, however, in practice the error bounds are not a good indication of which model to pick because the error bounds are not ''tight''. This means that the actual error observed in practice may be a lot better than what was indicated by the error bounds. This is because the error bounds indicate the worst case errors and by only comparing the error bounds of different models, the worst case performance of each model is compared, but not the overall performance under normal conditions.
 
=== Penalty Function ===
Another approach for model selection to avoid overfitting is to use ''regularization''. Regularization involves adding extra information or restrictions to the problem in order to prevent overfitting. This additional information can be in the form of a function penalizing high complexity (penalty function). So in regularization, instead of minimizing the squared error alone we attempt to minimize the squared error plus a penalty function. A common penalty function is the euclidean norm of the parameter vector multiplied by some scaling parameter. The scaling parameter allows for balancing the relative importance of the two terms. <br /> This means minimizing the following new objective function:<br />
<math> \left|y-\hat{y}\right|^2+f(\theta)</math><br />
where <math>\ \theta</math> is model complexity and <math>\ f(\theta)</math> is the penalty function.  The penalty function should increase as the model increases in complexity. This way it counteracts the downward bias of the training error rate. There is no single optimal choice for the penalty function, but any reasonable penalty should increase as the complexity and the size of the estimates increase.
 
There is no optimal choice for the penalty function but they all seek to solve the same problem. Suppose you have models of order 1,2,...,K such that the models of class k-1 are a subset of the models in class k. An example of this is linear regression where a model of order k is the model with the first k explanatory covariates. If you do not include a penalty term and minimize the squared error alone you will always choose the largest most complex model (K). But the problem with this is the gain from including more complexity might be incredibly small. The gain in accuracy may in fact be no better than you would expect from including a covariate drawn from a N(0,1) distribution. If this is the case then clearly we don't want to include such a covariate. And in general if the increase in accuracy is below a certain level then it is preferable to stay with the simpler model. By adding a penalty term, no matter how small it is, you know at least at some point these insignificant gains in accuracy will be outweighed by increase in penalty. By effectively choosing and scaling your penalty function you can have your objective function approximate the true error as opposed to the training error.
<br />
 
==== Example: Penalty Function in Neural Network Model Selection ====
 
In MLP neural networks, the activation function is of the form of a logistic function, where the function behaves almost linearly when the input is close to zero (i.e., the weights of the neural network are close to zero), while the function behaves non-linearly as the magnitude of the input increases (i.e., the weights of the neural network become larger). In order to penalize additional model complexity (i.e., unnecessary non-linearities in the model), large weights will be penalized by the penalty function.
 
The objective function to minimize with respect to the weights <math>\ u_{ji}</math> is:<br />
 
<math>\ Reg=\left|y-\hat{y}\right|^2 + \lambda\sum_{j,i}(u_{ji})^2</math>
If the weights start to grow, the penalty term <math>\lambda\sum_{j,i}(u_{ji})^2</math> becomes larger, so large weights are retained only if they produce a sufficiently large reduction in <math>\left|y-\hat{y}\right|^2</math>.
 
The derivative of the objective function with respect to the weights <math>\ u_{ji}</math> is:<br />
<math>\cfrac{\partial Reg}{\partial u_{ji}} = \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}}+2*\lambda*u_{ji}</math>
 
This objective function is used during [http://en.wikipedia.org/wiki/Gradient_descent gradient descent]. In practice, cross validation is used to determine the value of <math>\ \lambda</math> in the objective function.<br />
 
We can use cross-validation to choose <math>\lambda</math>. For any model, the least "complex" case is the linear model; as <math>\lambda</math> decreases, the model is allowed to become gradually more complex (more non-linear).

We want a non-linear model, but not one that is too curvy.
 
==== Penalty Functions in Practice ====
In practice, we only apply the penalty function to the parametrized terms. That is, the bias term is not regularized, since it is simply the DC component and is not associated with a feature. Although this makes little difference, the concept is clear that the bias term should not be considered when determining the relative weights of the features.
 
In particular, we update the weights as follows:
 
<math>
u_{ji} :=
\begin{cases}
u_{ji} - \alpha \, \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}} &\mbox{bias term}\\
u_{ji} - \alpha \left( \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}}+2\lambda u_{ji} \right) &\mbox{otherwise}
\end{cases}
</math>
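
To make the idea concrete in a simpler setting than a neural network, the sketch below applies the same kind of penalty to a linear model, regularizing the feature weights but not the bias (intercept) term. The data (<code>X</code>, <code>y</code>) and the value of <math>\lambda</math> are illustrative assumptions, and a closed-form solution stands in for gradient descent.

 % Illustrative sketch: a linear model is used in place of a neural network;
 % X (n-by-d), y (n-by-1) and lambda are assumed inputs.
 lambda = 0.1;                       % scaling parameter, normally chosen by cross-validation
 n = size(X, 1);
 Xa = [ones(n, 1), X];               % first column corresponds to the bias term
 D  = eye(size(Xa, 2));
 D(1, 1) = 0;                        % do not regularize the bias term
 % minimize |y - Xa*u|^2 + lambda * (sum of squared non-bias weights)
 u = (Xa' * Xa + lambda * D) \ (Xa' * y);
 yhat = Xa * u;                      % fitted values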
 
== Radial Basis Function Neural Network (RBF NN) ==
[http://en.wikipedia.org/wiki/Radial_basis_function_network Radial Basis Function Network] (RBF) NN is a type of neural network with only one hidden layer in addition to an input and output layer. Each node within the hidden layer uses a radial basis activation function, hence the name of the RBF NN. A radial basis function is a real-valued function whose value depends only on the distance from a centre. One of the most commonly used radial basis functions is the Gaussian. The weights from the input layer to the hidden layer are always "1" in a RBF NN, while the weights from the hidden layer to the output layer are adjusted during training. The output unit implements a weighted sum of hidden unit outputs. The mapping from the input to the hidden layer is non-linear, while the mapping from the hidden layer to the output is linear. Due to their nonlinear approximation properties, RBF NNs are able to model complex mappings, which perceptron-based neural networks can only model by means of multiple hidden layers. An RBF NN can be trained without back propagation since it has a closed-form solution. RBF NNs have been successfully applied to a large diversity of applications including interpolation, chaotic time series modeling, system identification, control engineering, electronic device parameter modeling, channel equalization, speech recognition, image restoration, shape-from-shading, 3-D object modeling, motion estimation and moving object segmentation, data fusion, etc. <ref>www-users.cs.york.ac.uk/adrian/Papers/Others/OSEE01.pdf</ref>
 
====The Network System====
 
1. Input: <br />n data points <math>\mathbf{x}_i\subset \mathbb{R}^d, \quad i=1,...,n</math><br />
2. Basis function ('''the single hidden layer'''): <br />
<math>\mathbf{\Phi}_{n \times m}</math>, where <math>m</math> is the number of neurons/basis functions that project the original data points into a new space.  <br />
There are many choices for the basis function.  A commonly used choice is the Gaussian radial basis function:<br />
<math>\phi_j(\mathbf{x}_i)=e^{-|\mathbf{x}_i-\mathbf{\mu}_j|^2}</math><br />
3. Weights associated with the last layer: <math>\mathbf{W}_{m \times k}</math>, where k is the number of classes in the output <math>\mathbf{Y}</math>.<br />
4. Output: <math>\mathbf{Y}</math>, where<br />
<math>y_k(x)=\sum_{j=1}^{m}(W_{jk}*\phi_j(x))</math><br />
Alternatively, the output <math>\mathbf{Y}</math> can be written as
<math>
Y=\Phi W
</math>
 
where
 
:<math>\hat{Y}_{n,k} = \left[ \begin{matrix}
\hat{y}_{1,1} & \hat{y}_{1,2} & \cdots & \hat{y}_{1,k} \\
\hat{y}_{2,1} & \hat{y}_{2,2} & \cdots & \hat{y}_{2,k} \\
\vdots &\vdots & \ddots & \vdots \\
\hat{y}_{n,1} & \hat{y}_{n,2} & \cdots & \hat{y}_{n,k}
\end{matrix}\right] </math> is the matrix of output variables.
 
:<math>\Phi_{n,m} = \left[ \begin{matrix}
\phi_{1}(\mathbf{x}_1) & \phi_{2}(\mathbf{x}_1) & \cdots & \phi_{m}(\mathbf{x}_1) \\
\phi_{1}(\mathbf{x}_2) & \phi_{2}(\mathbf{x}_2) & \cdots & \phi_{m}(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{1}(\mathbf{x}_n) & \phi_{2}(\mathbf{x}_n) & \cdots & \phi_{m}(\mathbf{x}_n)
\end{matrix}\right] </math> is the matrix of Radial Basis Functions.
 
:<math>W_{m,k} = \left[ \begin{matrix}
w_{1,1} & w_{1,2} & \cdots & w_{1,k} \\
w_{2,1} & w_{2,2} & \cdots & w_{2,k} \\
\vdots & \vdots & \ddots & \vdots \\
w_{m,1} & w_{m,2} & \cdots & w_{m,k}
\end{matrix}\right] </math> is the matrix of weights.
 
Here, <math>k</math> is the number of outputs, <math>n</math> is the number of data points, and <math>m</math> is the number of hidden units.  If <math>k = 1</math>, <math>\hat Y</math> and <math>W</math> are column vectors. If m = n, then <math>\mathbf{\mu}_i = \mathbf{x}_i</math>, so <math>\phi_{i}</math> checks to see how similar the two data points are.
 
<math>Y=\Phi W</math> where <math>Y</math> and <math>\Phi</math> are known while <math>W</math> is unknown.
The objective function is <math>\psi=|Y-\Phi W|^2  </math> and we want to <math> \underset{W}{\mbox{min}} |Y-\Phi W|^2  </math>. Therefore, to get the optimal weights, <math>W=(\Phi^T \Phi)^{-1}\Phi^TY</math>
 
==== Network Training====
To construct m basis functions, first cluster data points into m groups.  Then find the centre of each cluster <math>\mu_1</math> to <math>\mu_m</math>.<br />
 
'''Clustering: the K-means algorithm''' <ref>This section is taken from Wikicourse notes stat441/841 fall 2010.</ref><br />
K-means is a commonly applied technique in clustering observations into groups by minimizing the distance of individual observations from the center of the cluster it is in. The most common K-means algorithm used is referred to as [http://en.wikipedia.org/wiki/Lloyd%27s_algorithm Lloyd's algorithm]: <br />
 
# Select the number of clusters m
# Randomly select m observations from the n observations, to be used as m initial centers.
# (Alternative): Randomly assign all data points to clusters and use the means of those clusters as the initial centers.
# For each data point from the rest of observations, compute the distance to each of the initial centers and classify it into the cluster with the minimum distance.
# Obtain updated cluster centers by computing the mean of all the observations in the corresponding clusters.
# Repeat Step 4 and Step 5 until all of the differences between the old cluster centers and new cluster centers are acceptable.
 
Note: K means can be sensitive to the originally selected points, so it may be useful to run K-means repeatedly and use prior knowledge to select the best cluster.
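
A minimal base-MATLAB sketch of Lloyd's algorithm is given below. The data matrix <code>X</code> (n-by-d), the number of clusters <code>m</code>, and the stopping tolerance are illustrative assumptions.

 % Illustrative sketch: X (n-by-d) and m are assumed inputs.
 n = size(X, 1);
 p = randperm(n);
 mu = X(p(1:m), :);                              % m randomly chosen initial centres
 for iter = 1:100                                % cap on the number of passes
     % assignment step: put each point in the cluster with the nearest centre
     dist2 = zeros(n, m);
     for j = 1:m
         dist2(:, j) = sum(bsxfun(@minus, X, mu(j, :)).^2, 2);
     end
     [~, label] = min(dist2, [], 2);
     % update step: recompute each centre as the mean of its cluster
     mu_old = mu;
     for j = 1:m
         if any(label == j)
             mu(j, :) = mean(X(label == j, :), 1);
         end
     end
     % stop when the centres no longer move appreciably
     if max(abs(mu(:) - mu_old(:))) < 1e-8
         break;
     end
 end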
 
Having constructed the basis functions, next minimize the objective function with respect to <math>\mathbf{W}</math>:<br />
<math> \underset{W}{\mbox{min}} \;\left\| Y-\Phi W\right\|_2^{2}</math>

The solution to the problem is
<math>\ 
W=(\Phi^T \Phi)^{-1}\Phi^T Y
</math>
 
Matlab example:
 
 clear all;
 clc;
 % load the data set; the last column holds the class labels
 load ionosphere.mat;
 P=ionosphere(:,1:(end-1));   % inputs
 P=P';                        % one observation per column
 T=ionosphere(:,end);         % targets (labels)
 T=T';
 % create a feed-forward network with 4 hidden units and 1 output unit
 net=newff(minmax(P),[4,1],{'logsig','purelin'},'trainlm');
 % training parameters
 net.trainParam.show=100;
 net.trainParam.mc=0.9;
 net.trainParam.mu=0.05;
 net.trainParam.mu_dec=0.1;
 net.trainParam.mu_inc=5;
 net.trainParam.lr=0.5;
 net.trainParam.goal=0.01;
 net.trainParam.epochs=5000;
 net.trainParam.max_fail=10;
 net.trainParam.min_grad=1e-20;
 net.trainParam.mem_reduc=2;
 net.trainParam.alpha=0.1;
 net.trainParam.delt_inc=1;
 net.trainParam.delt_dec=0.1;
 net=init(net);               % initialize the network weights
 [net,tr]=train(net,P,T);     % train the network
 A = sim(net,P);              % network outputs on the training inputs
 E = T - A;                   % residuals
 disp('the training error:')
 MSE=mse(E)
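
For the closed-form RBF training described above (choose centres, build <math>\Phi</math>, solve for <math>W</math> by least squares), a minimal base-MATLAB sketch follows. The data (<code>X</code> as n-by-d, <code>Y</code> as an n-by-k indicator matrix of class labels) and the number of basis functions <code>m</code> are illustrative assumptions; for brevity the centres are taken to be randomly chosen training points rather than k-means centres.

 % Illustrative sketch: X, Y and m are assumed inputs; centres are random
 % training points rather than k-means centres.
 n = size(X, 1);
 p = randperm(n);
 mu = X(p(1:m), :);                              % centres of the m basis functions
 Phi = zeros(n, m);
 for j = 1:m
     Phi(:, j) = exp(-sum(bsxfun(@minus, X, mu(j, :)).^2, 2));   % phi_j(x_i)
 end
 W = (Phi' * Phi) \ (Phi' * Y);                  % least-squares weights, W = (Phi'Phi)^{-1}Phi'Y
 Yhat = Phi * W;                                 % network outputs
 [~, predicted_class] = max(Yhat, [], 2);        % predicted class for each training point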
 
===Single Basis Function vs. Multiple Basis Functions===
Suppose the data points belong to a mixture of Gaussian distributions.<br />
 
Under '''single basis''' function approach, every class in <math>\mathbf{Y}</math> is represented by a single basis function.  This approach is similar to the approach of linear discriminant analysis.  <br />
 
Compare <math>y_k(x)=\sum_{j=1}^{m}(W_{jk}*\phi_j(x))</math><br />
with <math>P(Y|X)=\frac{P(X|Y)*P(Y)}{P(X)}</math>. <br /> Here, the basis function <math>\mathbf{\phi}_{j}</math> can be thought of as equivalent to <math>\frac{P(X|Y)}{P(X)}</math>.<br />
 
Under '''multiple basis''' function approach, a layer of j basis functions are placed between <math>\mathbf{Y}</math> and <math>\mathbf{X}</math>.  The probability function of the joint distribution of <math>\mathbf{X}</math>, <math>\mathbf{J}</math> and <math>\mathbf{Y}</math> is
 
<math>\,P(X,J,Y)=P(Y)*P(J|Y)*P(X|J)</math>
 
Here, instead of using a single Gaussian to represent each class, we use a "mixture of Gaussians" to represent it.<br />
The probability function of <math>\mathbf{Y}</math> conditional on <math>\mathbf{X}</math> is
 
<math>P(Y|X)=\frac{P(X,Y)}{P(X)}=\frac{\sum_{j}{P(X,J,Y)}}{P(X)}</math>
 
Multiplying and dividing each term of the sum by <math>\ P(J) </math> yields
 
<math>\ P(Y|X)=\sum_{j}{P(J|X)*P(Y|J)}</math><br />
where <math>\ P(J|X)</math> is the probability that a data point belongs to Gaussian component <math>J</math> given the data <math>X</math>, and <math>\ P(Y|J)</math> is the probability that the point belongs to class <math>k</math> given that it comes from Gaussian component <math>J</math>.
 
 
since<br />
<math>\ P(J|X)=\frac{P(X|J)*P(J)}{P(X)}</math>
and <math>\ P(Y|J)=\frac{P(J|Y)*P(Y)}{P(J)}</math>
 
If the weights in the radial basis neural network have proper properties of probability function, then the basis function <math>\mathbf{\phi}_j</math> can be thought of as <math>\ P(J|X)</math>, representing the probability that <math>\mathbf{x}</math> is in Gaussian class j; and the weight function W can be thought of as <math>\ P(Y|J)</math>, representing the probability that a data point belongs to class k given that the point is from Gaussian class j.<br />
 
In conclusion, given a mixture of Gaussian distributions, multiple basis function approach is better than single basis function, since the former produces a non-linear boundary.
 
== RBF Network Complexity Control (Lecture: Oct. 25, 2011) ==
 
When performing model selection, overfitting is a common issue. As model complexity increases, there comes a point where the model becomes worse and worse at fitting new data even though it fits the training data better. It becomes too sensitive to small perturbations in the training data that should be treated as noise to allow flexibility in the general case. In this section we will show that training error (empirical error from the training data) is a poor estimator for true error and that minimizing training error will increase complexity and result in overfitting. We will show that test error (empirical error from the test data) is a better estimator of true error. This will be done by estimating a model <math> \hat f </math> given training data <math> T=\{(x_i,y_i)\}^n_{i=1}</math>.
 
 
First, some notation is defined.
 
The assumption for the training data set is that it consists of the true model values <math>\ f(x_i) </math> plus some additive Gaussian noise <math>\ \epsilon_i </math>:
 
<math>\ y_i = f(x_i)+\epsilon_i</math> where <math>\ \epsilon \sim N(0,\sigma^2)</math>
 
<math>\ y_i = true\,model + noise</math>
 
===Important Notation===
 
Let:
*<math>\displaystyle f(x)</math> denote the ''true model''.
*<math>\hat f(x)</math> denote the ''prediction/estimated model'', which is generated from a training data set <math>\displaystyle T = \{(x_i, y_i)\}^n_{i=1}</math>; the observations <math>y_i</math> are noisy.<br />
Remark: <math>\hat f(x_i) = \hat y_i</math>.<br />
*<math>\displaystyle err</math> denote the ''empirical error'' based on actual data points. This can be either test error or training error depending on the data points used. It is based on the squared differences <math>(y-\hat{y})^2 </math>.
*<math>\displaystyle Err </math> denote the ''true error'' or ''generalization error'', and is what we are trying to minimize. It is based on the squared differences <math>(f-\hat{f})^2 </math>.
*<math>\displaystyle MSE=E[(\hat f(x)-f(x))^2]</math> denote the ''mean squared error''.
 
We use the training data to estimate our model parameters.
 
<math>D=\{(x_i,y_i)\}_{i=1}^n</math>
 
 
For a given point <math>y_0</math>, the expectation of the empirical error is:
 
<math> \begin{align}
 
E[(\hat{y_0}- y_0)^2] &= E[(\hat{f_0}- f_0 -\epsilon_0)^2] \\
&=E[(\hat{f_0}-f_0)^2 + \epsilon_0^2 - 2 \epsilon_0 (\hat{f_0}-f_0)] \\
&=E[(\hat{f_0}-f_0)^2] + E[\epsilon_0^2] - 2 E [ \epsilon_0 (\hat{f_0}-f_0)] \\
&=E[(\hat{f_0}-f_0)^2] + \sigma^2 - 2 E [ \epsilon_0 (\hat{f_0}-f_0)]
\end{align}
</math>
 
This formula partitions the empirical error into the true error and other error terms. Our goal is to select the model that minimizes the true error, so we must understand the effects of these other error terms if we are to use the empirical error as an estimate of the true error.
 
The first term is essentially true error. The second term is a constant. The third term is problematic, since in general this expectation is not 0. We will break this into 2 cases to simplify the third term.
 
=====Case 1: Estimating Error using Data Points from Test Set=====
In Case 1, the empirical error is test error and the data points used to calculate test error are from the test set, not the training set.  That is, <math>y_0 \notin T </math>.
 
We can rewrite the third term in the following way. Since <math>\ \epsilon_0 = y_0 - f_0</math> has mean zero and <math>\ f_0</math> is a constant (not random), this term equals the covariance between <math>y_0</math> and <math>\hat{f_0}</math>:
 
<math> \begin{align}
E [ \epsilon_0 (\hat{f_0}-f_0)] &= E [ (y_0-f_0) (\hat{f_0}-f_0)] \\
& = cov{(y_0,\hat{f_0})}
\end{align}
</math>
 
(The covariance appears here because <math>\displaystyle y_0</math> is a new point, so <math>\hat f</math> and <math>\displaystyle y_0</math> are independent.)

Note that <math>\ f_0 </math> is the mean of <math>\ y_0 </math>.
 
Since <math>y_0</math> is not part of the training set, it is independent of the model <math>\hat{f_0}</math> generated by the training set. Therefore,
 
<math>y_0 \notin T \to y_0 \perp \hat{f} </math>
 
<math>\ cov{(y_0,\hat{f}_0)}=0</math>
 
 
The equation for the expectation of empirical error simplifies to the following:
 
<math>E[(y_0-\hat{y_0})^2] = E[(f_0-\hat{f_0})^2] + \sigma^2 </math>
 
 
This result applies to every output value in the test data set, so we can generalize this equation by summing over all m data points that have NOT been seen by the model:
 
<math>\begin{align}
\sum_{i=1}^m{(y_i-\hat{y_i})^2} &= \sum_{i=1}^m{(f_i-\hat{f_i})^2)} + m \sigma^2 \\
err &= Err + m \sigma^2 \\
& = Err + constant\\
\end{align}
</math>
 
Rearranging to solve for true error, we get
 
<math>\ Err = err - m \sigma^2</math>
 
We see that test error is a good estimator for true error up to an additive constant, since they only differ by a constant. Minimizing the test error is therefore equivalent to minimizing the true error. Moreover, the true error is less than the test error, and there is no additional term that depends on the complexity of the model. This is the justification for cross-validation.
 
To avoid over-fitting or under-fitting when using cross-validation, the validation data set is selected so that it is independent of the estimated model.
 
===Case 2: Estimating Error using Data Points from Training Set===
 
In Case 2, the data points used to calculate error are from the training set, so <math>\ y_0 \in T </math>, i.e. <math>\ (x_i, y_i)</math> is in the training set. We will show that this results in a worse estimator for true error.
 
Now <math>\ y_0</math> has been used to estimate <math>\ \hat{f}</math> so they are not independent.  We use [http://en.wikipedia.org/wiki/Stein's_lemma Stein's lemma] to simplify the term <math>\ E[\epsilon_0 (\hat{f_0} - f_0)]</math>.
 
Stein's Lemma states that if  <math>\ x \sim N(\theta,\sigma^2)</math> and <math>\ g(x)</math> is differentiable, then 
 
<math>E\left[g(x) (x - \theta)\right] = \sigma^2 E \left[ \frac{\partial g(x)}{\partial x} \right] </math>
 
Substitute <math>\ \epsilon_0</math> for <math>\ x</math> and <math>\ (\hat{f_0}-f_0)</math> for <math>\ g(x)</math>.  Note that <math>\ \hat{f_0}</math> is a function of the noise, since as noise changes, <math>\hat{f_0}</math> will change.  Using Stein's Lemma, we get:
 
<math>
\begin{align}
E[\epsilon_0 (\hat{f_0}-f_0)] &= \sigma^2 E \left[ \frac{\partial (\hat{f_0}-f_0)}{\partial \epsilon_0} \right]\\
&=\sigma^2 E\left[\frac{\partial \hat{f_0}}{\partial \epsilon_0}\right]\\
&=\sigma^2 E\left[\frac{\partial \hat{f_0}}{\partial y_0}\right]\\
&=\sigma^2 E\left[D_0\right]
\end{align}
</math>
 
 
Remark: since <math>\ \epsilon_0 = y_0 - f_0</math> with <math>\ f_0</math> fixed, we have <math> \frac{\partial \epsilon_0}{\partial y_0} = 1 </math>, so differentiating with respect to <math>\ \epsilon_0</math> is the same as differentiating with respect to <math>\ y_0</math>:
<math> \frac{\partial (\hat{f_0} - f_0)}{\partial \epsilon_0} = \frac{\partial (\hat{f_0} - f_0)}{\partial y_0} </math> <br />

The <math>\ f_0</math> term drops out of the derivative because <math>\ f_0</math> is a constant, i.e. <math> \frac{\partial f_0}{\partial \epsilon_0} = 0 </math>.
 
 
We take <math>\ D_0 = \frac{\partial \hat{f_0}}{\partial y_0}</math>, where <math>\ D_0</math> represents the derivative of the fitted model with respect to the observations. The equation for the expectation of empirical error becomes:
 
<math>E[(y_0-\hat{y_0})^2] = E[(f_0-\hat{f_0})^2] + \sigma^2 - 2 \sigma^2 E[D_0] </math>
 
Generalizing the equation for all n data points in the training set:
 
<math>
\sum_{i=1}^n{(y_i-\hat{y_i})^2} = \sum_{i=1}^n{(f_i-\hat{f_i})^2} + n \sigma^2 - 2 \sigma^2 \sum_{i=1}^n{D_i}
</math>
 
Based on the notation defined above, we then have:
 
<math>
err = Err + n \sigma^2 - 2 \sigma^2 \sum_{i=1}^n{D_i}
</math>
 
<math>Err = err - n \sigma^2 + 2 \sigma^2 \sum_{i=1}^n{D_i}</math>
 
This equation for the true error is called [http://www.reference.com/browse/Stein%27s+unbiased+risk+estimate Stein's unbiased risk estimator (SURE)]. It is an unbiased estimator of the mean-squared error of a given estimator, in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter and thus cannot be determined completely.
 
Note that <math>\ D_i</math> depends on complexity of the model.  It measures how sensitive the model is to small perturbations in a single <math>\ y_i</math> in the training set. As complexity increases, the model will try to chase every little change and will be more sensitive to such perturbations. Minimizing training error without accounting for the impact of this term will result in overfitting. Thus, we need to know how to find <math>\ D_i</math>. Below we show an example, applying SURE to RBFs, where computing <math>\ D_i</math> is straightforward.
 
=== SURE for RBF Network Complexity Control===
Problem:  Assuming we want to fit our data using a radial basis function network, how many radial basis functions should be used?  The network size has to balance the approximation quality, which usually improves as the network grows, against the training effort, which increases with the network size. Moreover, overly complex models can show poor generalization (overfitting), which calls for small networks. Furthermore, in terms of hardware or software realization, smaller networks occupy less area due to reduced memory needs. Hence, controlling the network size is one major task during training. For further information about RBF network complexity control check [http://www.dice.ucl.ac.be/Proceedings/esann/esannpdf/es2007-13.pdf]
 
We can use Stein's unbiased risk estimator (SURE) to give us an approximation for how many RBFs to use.
 
The SURE equation is
 
<math>\mbox{Err}=\mbox{err} - n\sigma^2 + 2\sigma^2\sum_{i=1}^n D_i</math>
 
where <math>\ Err </math> is the true error, <math>\ err </math> is the empirical error, <math>\ n</math> is the number of training samples, <math>\ \sigma^2</math> is the variance of the noise of the training samples and <math>\ D_i</math> is the derivative of the model output with respect to the observed output, as shown below
 
<math>D_i=\frac{\partial \hat{f_i}}{\partial y_i}</math>
 
'''Optimal Number of Basis Functions in an RBF Network'''

The number of basis functions should be chosen so as to minimize the estimated true error <math>\ Err </math>.
 
The formula for an RBF network is:
 
<math>\hat{f}=\Phi W</math>
 
where <math>\ \hat{f}</math> is a matrix of RBFN outputs for each training sample, <math>\ \Phi</math> is the matrix of neuron outputs for each training sample, and <math>\ W</math> is the weight vector between each neuron and the output. Suppose we have m + 1 neurons in the network, where one has a constant function.
 
Given the training labels <math>\ Y</math> we define the empirical error and minimize it
 
<math>\underset{W}{\mbox{min}} |Y-\Phi W|^2</math>
 
<math>\, W=(\Phi^T \Phi)^{-1} \Phi^T Y</math>
 
<math>\hat{f}=\Phi(\Phi^T \Phi)^{-1} \Phi^T Y</math>
 
 
For simplification let <math>\ H</math> be the ''hat matrix'' defined as
 
<math>\, H=\Phi(\Phi^T \Phi)^{-1} \Phi^T</math>
 
Our optimal output then becomes
 
<math>\hat{f}=H Y</math>
 
We calculate <math>D</math> from the SURE equation. We now consider applying SURE to Radial Basis Function networks specifically. Based on SURE, the optimum number of basis functions should be chosen so that the estimated true error <math>\displaystyle Err</math> is minimized. For the RBF Network, setting <math>\frac{\partial err}{\partial W}</math> equal to zero gives the least squares solution <math>\ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y</math>. The fitted values are then <math>\hat{Y} = \hat{f} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY</math>, where <math>\ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}</math> is the hat matrix for this model.
 
 
Consider only one node of the network. In this case we can write:
<math>\hat f_i=\,H_{i1}y_1+\,H_{i2}y_2+\cdots+\,H_{ii}y_i+\cdots+\,H_{in}y_n</math>.
 
Note here that <math>\,H</math> depends on the input vector <math>\displaystyle x_i</math> but not on the observation <math>\displaystyle y_i</math>.
 
Taking the derivative of <math>\ \hat f_i=\sum_{j}\,H_{ij}y_j</math> with respect to <math>\displaystyle y_i</math>, we readily obtain

<math>D_i= \frac{\partial \hat f_i}{\partial y_i}= H_{ii}</math>, and hence <math>\sum_{i=1}^n \frac {\partial \hat f_i}{\partial y_i}=\sum_{i=1}^n \,H_{ii}</math>
 
 
Here we recall that <math>\sum_{i=1}^n\,D_{i}= \sum_{i=1}^n \,H_{ii}= \,Trace(H)</math>, the sum of the diagonal elements of <math>\,H</math>. Using the permutation property of the trace function we can further simplify the expression as follows:
<math>\,Trace(H)= Trace(\Phi(\Phi^{T}\Phi)^{-1}\Phi^{T})= Trace(\Phi^{T}\Phi(\Phi^{T}\Phi)^{-1})=m</math>, by the trace cyclical permutation property, where <math>\displaystyle m</math> is the number of basis functions in the RBF network (and hence <math>\displaystyle \Phi</math> has dimension <math>\displaystyle n \times m</math>).<br>
 
====Sketch of Trace Cyclical Property Proof:====
For <math>\, A_{mn}, B_{nm}, Tr(AB) = \sum_{i=1}^{n}\sum_{j=1}^{m}A_{ij}B_{ji} = \sum_{j=1}^{m}\sum_{i=1}^{n}B_{ji}A_{ij} = Tr(BA)</math>.<br>
With that in mind, for <math>\, A_{nn}, B_{nn} = CD, Tr(AB) = Tr(ACD) = Tr(BA)</math> (from above) <math>\, = Tr(CDA)</math>.<br><br>
 
Note that <math>\displaystyle \Phi</math> is a projection of the input matrix <math>\,X</math> onto a set of <math>\,m</math> basis functions; sometimes an extra constant basis function <math>\displaystyle \Phi_0</math>, with no dependence on the input, is included to represent the intercept of the fitted model. If an intercept is included in this way, then <math>\,Trace(H)= m+1</math>.
 
 
The SURE equation then becomes
 
<math>\, \mbox{Err}=\mbox{err} - n\sigma^2 + 2\sigma^2(m+1)</math>
 
As the number of RBFs <math>\ m</math> increases the empirical error <math>\ err</math> decreases, but the right term of the SURE equation increases.  An optimal true error <math>\ Err </math> can be found by increasing <math>\ m</math> until <math>\ Err </math> begins to grow.  At that point the estimate to the minimum true error has been reached.
 
The value of m that gives the minimum true error estimate is the optimal number of basis functions to be implemented in the RBF network, and hence is also the optimal degree of complexity of the model.
 
One way to estimate the noise variance is
 
<math>\hat{\sigma}^2=\frac{\sum (y-\hat{y})^2}{n-1}</math>
 
This application of SURE is straightforward because minimizing Radial Basis Function error reduces to a simple least squares estimator problem with a linear solution. This makes computing <math>\ D_i</math> quite simple. In general, <math>\ D_i</math> can be much more difficult to solve for.
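
As a concrete illustration, the sketch below uses SURE to pick the number of Gaussian basis functions for a one-dimensional problem. The training data (<code>x</code>, <code>y</code> as n-by-1 vectors), the noise variance <code>sigma2</code> (known or estimated as above), the range of <code>m</code> values, and the choice of random training points as centres are all illustrative assumptions. No separate intercept basis function is used here, so <math>\,Trace(H)=m</math>.

 % Illustrative sketch: x, y, sigma2 are assumed inputs; centres are random
 % training points and no intercept basis function is used.
 n = numel(y);
 max_m = 20;
 Err = zeros(1, max_m);
 for m = 1:max_m
     p = randperm(n);
     mu = x(p(1:m));                             % centres for this value of m
     Phi = exp(-(bsxfun(@minus, x, mu')).^2);    % n-by-m matrix of basis function outputs
     H = Phi * ((Phi' * Phi) \ Phi');            % hat matrix; trace(H) = m
     yhat = H * y;                               % fitted values
     err = sum((y - yhat).^2);                   % empirical (training) error
     Err(m) = err - n*sigma2 + 2*sigma2*m;       % SURE estimate of the true error
 end
 [~, best_m] = min(Err)                          % m that minimizes the estimated true error

In practice one would average over several random choices of centres (or use k-means centres) before comparing values of <math>\ m</math>.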
 
=== RBF Network Complexity Control (Alternate Approach) ===
 
An alternate approach (not covered in class) to tackling RBF Network complexity control is controlling the complexity by similarity <ref name="Eickhoff">R. Eickhoff and U. Rueckert, "Controlling complexity of RBF networks by similarity," ''Proceedings of European Symposium on Artificial Neural Networks'', 2007</ref>. In <ref name="Eickhoff" />, the authors suggest looking at the similarity between the basis functions multiplied by their weight by determining the cross-correlations between the functions. The cross-correlation is calculated as follows:
 
<math>\ \rho_{ij} = \frac{E[g_i(x)g_j(x)]}{\sqrt{E[g^2_i(x)]E[g^2_j(x)]}} </math>
 
where <math>\ E[] </math> denotes the expectation and <math>\ g_i(x) </math> and <math>\ g_j(x) </math> would denote two of the basis functions multiplied by their respective weights.
 
If the cross-correlation between two functions is high, <ref name="Eickhoff" /> suggests that the two basis functions be replaced with one basis function that covers the same region of both basis functions and that the corresponding weight of this new basis function be the average of the weights of the two basis functions. For the case of Gaussian radial basis functions, the equations for finding the new weight (<math>\ w_{new} </math>), mean (<math>\ c_{new} </math>) and variance (<math>\ \sigma_{new} </math>) are as follows:
 
<math>\ w_{new} = \frac{w_i + w_j}{2} </math>
 
<math>\ c_{new} = \frac{1}{w_i \sigma^n_i + w_j \sigma^n_j}(w_i \sigma^n_i c_i + w_j \sigma^n_j c_j)</math>
 
<math>\ \sigma^2_{new} = \left(\frac{\sigma_i + \sigma_j}{2}+ \frac{min(||m-c_i||,||m-c_j||)}{2}\right)^2</math>
 
where <math>\ n </math> denotes the input dimension and <math>\ m </math> denotes the total number of radial basis functions.
 
This process is repeated until the cross-correlation between the basis functions falls below a certain threshold, which is a tunable parameter.
 
Note 1) Though not extensively discussed in <ref name="Eickhoff" />, this approach to RBF Network complexity control presumably requires a starting RBF Network with a large number of basis functions.
 
Note 2) This approach does not require the repeated implementation of differently sized RBF Networks to determine the empirical error, unlike the approach using SURE. However, the SURE approach is backed up by theory to find the number of radial basis functions that optimizes the true error and does not rely on some tunable threshold. It would be interesting to compare the results of both approaches (in terms of the resulting RBF Network obtained and the test error).
 
 
===Generalized SURE for Exponential Families===
The classical Stein's unbiased risk estimate (SURE) applies to the independent, identically distributed (i.i.d.) Gaussian model. In some recent work, however, researchers have derived a SURE counterpart for more general noise models from the exponential family and applied it to regularized estimation, which extends SURE to a much wider range of problems.
 
You may look at Yonina C. Eldar, Generalized SURE for Exponential Families: Applications to Regularization, IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 2, FEBRUARY 2009 for more information.
 
===Further Reading===
Fully Tuned Radial Basis Function Neural Networks for Flight Control
<ref>
http://www.springer.com/physics/complexity/book/978-0-7923-7518-0;jsessionid=985F21372AC7AE1B654F1EADD11B296F.node3
</ref>
 
Paper about RBF networks for multi-task learning <ref>http://books.nips.cc/papers/files/nips18/NIPS2005_0628.pdf</ref>
 
Radial Basis Function (RBF) Networks <ref>http://documents.wolfram.com/applications/neuralnetworks/index6.html</ref>
 
An Example of RBF Networks <ref>http://reference.wolfram.com/applications/neuralnetworks/ApplicationExamples/12.1.2.html</ref>
 
This paper suggests an objective approach in determining proper samples to find good RBF networks with respect to accuracy <ref>http://www.wseas.us/e-library/conferences/2009/hangzhou/MUSP/MUSP41.pdf</ref>.
 
== Support Vector Machines (Lecture: Oct. 27, 2011) ==
 
[[Image:SVM.png|right|thumb|A series of linear classifiers, H2 represents a SVM, where the SVM attempts to maximize the margin, the distance between the closest point in each data set and the linear classifier.]]
 
[http://en.wikipedia.org/wiki/Support_vector_machine  Support vector machines] (SVMs), also referred to as max-margin classifiers, are learning systems that use a hypothesis space of linear functions in a high dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. SVMs are kernel machines based on the principle of structural risk minimization, which are used in applications of regression and classification; however, they are mostly used as binary classifiers. Although the subject can be said to have started in the late seventies (Vapnik, 1979), it is receiving increasing attention recently by researchers. It is such a powerful method that in the few years since its introduction has outperformed most other systems in a wide variety of applications, especially in pattern recognition.
 
The current standard incarnation of SVM is known as "soft margin" and was proposed by Corinna Cortes and Vladimir Vapnik [http://en.wikipedia.org/wiki/Vladimir_Vapnik]. In practice the data is not usually linearly separable. Although theoretically we can make the data linearly separable by mapping it into higher dimensions, the issues of how to obtain the mapping and how to avoid overfitting are still of concern. A more practical approach to classifying non-linearly separable data is to add some error tolerance to the separating hyperplane between the two classes, meaning that a data point in class A can cross the separating hyperplane into class B by a certain specified distance. This more generalized version of SVM is the so-called "soft margin" support vector machine and is generally accepted as the standard form of SVM over the hard margin case in practice today. [http://en.wikipedia.org/wiki/Support_vector_machine#Soft_margin]
 
Support Vector Machines are motivated by the idea of training linear machines with margins. It involves preprocessing the data to represent patterns in a high dimension (generally much higher than the original feature space). Note that using a suitable non-linear mapping to a sufficiently high dimensional space, the data will always be separable. (p. 263) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
 
A suitable way to describe the interest in SVM can be seen in the following quote. "The problem which drove the initial development of SVMs occurs in several guises - the bias variance tradeoff (Geman, Bienenstock and Doursat, 1992), capacity control (Guyon et al., 1992), overfitting (Montgomery and Peck, 1992) - but the basic idea is the same. Roughly speaking, for a given learning task, with a given finite amount of training data, the best generalization performance will be achieved if the right balance is struck between the accuracy attained on that particular training set, and the “capacity” of the machine, that is, the ability of the machine to learn any training set without error. A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist’s lazy brother, who declares that if it’s green, it’s a tree. Neither can generalize well. The exploration and formalization of these concepts has resulted in one of the shining peaks of the theory of statistical learning (Vapnik, 1979). [http://research.microsoft.com/pubs/67119/svmtutorial.pdf A Tutorial on Support Vector Machines for Pattern Recognition]
 
===== Support Vector Method Solving Real-world Problems=====
 
No matter whether the training data are linearly-separable or not, the linear boundary produced by any of the versions of SVM is calculated using only a small fraction of the training data rather than using all of the training data points. This is much like the difference between the median and the mean.
 
SVM can also be considered a special case of [http://en.wikipedia.org/wiki/Tikhonov_regularization Tikhonov regularization]. A special property is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum margin classifiers. The key features of SVM are the use of kernels, the absence of local minima, the sparseness of the solution (i.e. few training data points are needed to construct the linear decision boundary) and the capacity control obtained by optimizing the margin.(Shawe-Taylor and Cristianini (2004)).
 
Another key feature of SVM, as discussed below, is the use of [http://en.wikipedia.org/wiki/Slack_variable slack variables] to control the amount of tolerable misclassification on the training data, which form the soft margin SVM. This key feature can serve to improve the generalization of SVM to new data. SVM has been used successfully in many real-world problems:
 
- Pattern Recognition, such as Face Detection , Face Verification, Object Recognition, Handwritten Character/Digit Recognition, Speaker/Speech Recognition, Image Retrieval , Prediction;
 
- Text and Hypertext categorization;
 
- Image classification;
 
- Bioinformatics, such as Protein classification, Cancer classification;
 
Please refer to [http://www.clopinet.com/isabelle/Projects/SVM/applist.html here] for more applications.
 
===== Structural Risk Minimization and VC Dimension =====
 
Linear learning machines are the fundamental formulations of SVMs. The objective of the linear learning machine is to find the linear function that minimizes the generalization error from a set of functions which can approximate the underlying mapping between the input and output data. Consider a learning machine that implements linear functions in the plane as decision rules
 
<math>f(\mathbf{x},\boldsymbol{\beta}, \beta_0)=sign (\boldsymbol{\beta}^T\mathbf{x}+\beta_0)</math>
 
 
With ''n'' given training data with input values <math>\mathbf{x}_i \in \mathbb{R}^d</math> and output values <math>y_i\in\{-1,+1\}</math>. The empirical error is defined as
 
<math>\Re_{emp} (\boldsymbol{\theta}) = \frac{1}{n}\sum_{i=1}^n |y_i-f(\mathbf{x}_i,\boldsymbol{\beta}, \beta_0)|= \frac{1}{n}\sum_{i=1}^n |y_i-sign (\boldsymbol{\beta}^T\mathbf{x}_i+\beta_0)|</math>
 
 
where <math>\boldsymbol{\theta}=(\boldsymbol{\beta}, \beta_0)</math> denotes the parameters of the decision function.
 
The generalization error can be expressed as
 
<math> \Re (\boldsymbol{\theta}) = \int|y-f(\mathbf{x},\boldsymbol{\theta})|p(\mathbf{x},y)dxdy</math>
 
which measures the error for all input/output patterns that are generated from the underlying generator of the data characterized by the probability distribution <math>p(\mathbf{x},y)</math> which is considered to be unknown.
According to statistical learning theory, the generalization (test) error can be upper bounded in terms of training error and a confidence term as shown in
 
<math>\Re (\boldsymbol{\theta})\leq \Re_{emp} (\boldsymbol{\theta}) +\sqrt{\frac{h(ln(2n/h)+1)-ln(\eta/4)}{n}}</math>
 
 
The term on the left side represents the generalization error. The first term on the right hand side is the empirical error calculated from the training data, and the second term is called the ''VC confidence'', which is associated with the ''VC dimension'' h of the learning machine. The [http://en.wikipedia.org/wiki/Vc_dimension VC dimension] is used to describe the complexity of the learning system. The relationship between these three terms is illustrated in the figure below:
 
 
[[File:risk.png|400px|thumb|centre| The relation between expected risk, empirical risk and VC confidence in SVMs.]]
 
 
Thus, even though we don’t know the underlying distribution based on which the data points are generated, it is possible to minimize the upper bound of the generalization error in place of minimizing the generalization error. That means one can minimize the expression in the right hand side of the inequality above.
 
Unlike the principle of Empirical Risk Minimization (ERM) applied in Neural Networks which aims to minimize the training error, SVMs implement Structural Risk Minimization (SRM) in their formulations. SRM principle takes both the training error and the complexity of the model into account and intends to find the minimum of the sum of these two terms as a trade-off solution (as shown in figure above) by searching a nested set of functions of increasing complexity.
 
=====Introduction=====
 
[http://en.wikipedia.org/wiki/Support_vector_machine Support Vector Machine] is a popular linear classifier. Suppose that we have a data set with two classes which could be separated using a hyper-plane. Support Vector Machine (SVM) is a method which will give us the "best" hyper-plane. There are other classifiers that find a hyper-plane that separates the data, notably the Perceptron. However, the output of the Perceptron and many other algorithms depends on the input parameters, so every run of the Perceptron can give a different output. In contrast, SVM finds the hyper-plane that separates the data and has the farthest distance from the closest points. This is also known as the Max-Margin hyper-plane.
 
 
<gallery>
Image:KwebsterIntroDiagram.png|Infinitely many Perceptron solutions
Image:CorrectChoice.png|Out of many how do we choose?
</gallery>
 
 
With the Perceptron, there can be infinitely many separating hyperplanes such that the training error will be zero. The question is: among all these possible solutions, which one is the best? Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. This makes sense because at test time, more points will be observed and they may be closer to the other class, so the safest choice for the hyper-plane is the one farthest from both classes.
 
One of the great things about SVM is that not only it has solid theoretical guarantees, but also it works very well in practice.
 
'''To summarize'''
 
[[Image:Margin.png|right|thumb|What we mean by margin is the distance between the hyperplane and the closest point in a class.]]
 
If the data is linearly separable, then there exist infinitely many separating hyperplanes. Among them, the best choice is the hyperplane which is furthest from both classes. Our goal is therefore to find, among all possible separating hyperplanes, the one with maximum margin. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier; or equivalently, the perceptron of optimal stability.
 
What we mean by margin is the distance between the hyperplane and the closest point in a class.
 
[[Image:NotMean.png|right|thumb|If the mean value were to be used instead of the closest point, then an outlier may pull the hyperplane into the data which would incorrectly classify the known data points. This is the reason why we use the closest point instead of the expected value.]]
 
===== Setting=====
 
[[Image:Thedis.png|right|thumb|What is <math> d_i </math>]]
 
* We assume that the data is linearly separable
* Our classifier will be of the form <math> \boldsymbol\beta^T\mathbf{x} + \beta_0 </math>
* We will assume that our labels are <math> y_i \in \{-1,1\} </math>
 
 
 
The goal is to classify the point <math> \mathbf{x_i} </math> based on the <math>sign \{d_i\}</math> where <math>d_i</math> is the signed distance between <math> \mathbf{x_i}</math> and the hyperplane.
 
 
Now we check how far this point is from the hyperplane: points on one side of the hyperplane have a negative signed distance, and points on the other side have a positive one. Points are classified by the sign of this distance, so <math>\mathbf{x_i}</math> is classified using the sign of <math>d_i</math>.
 
===Side Note: A memory from the past of Dr. Ali Ghodsi===
When the aforementioned professor was a small child in grade 2, he was often careless with the accuracy of certain curly brackets when writing what one can only assume were math proofs. One day, his teacher grew impatient and demanded that a page of perfect curly brackets be produced by the young Dr. (he may or may not have been a doctor at the time). And now, whenever Dr. Ghodsi writes a tidy curly bracket, he is reminded of this, and it always brings a smile to his face.
 
From memories of the past.
 
(the number 20 was involved in the story, either the number of pages or the number of lines)
 
===== Case 1: Linearly Separable (Hard Margin) =====
 
In this case, the classifier will be <math>\boldsymbol {\beta^T} \boldsymbol {x} + \beta_0  </math>  and <math>\ y \in \{-1, 1\} </math>.
The point <math>\boldsymbol {x_i}</math> to classify is based on the sign of <math>\ \{d_i\}</math>, where <math>\ d_i </math> is the signed distance between <math>\boldsymbol {x_i}</math> and the hyperplane.
 
===== Objective Function =====
[[Image:X1X2perpBeta.png|right|thumb|Look at it being perpendicular]]
'''Observation 1:''' <math>\boldsymbol\beta</math> is orthogonal to hyper-plane. Because, for any two arbitrary points <math>\mathbf{x_1, x_2}</math> on the plane we have:
 
<math> \boldsymbol\beta^T\mathbf{x_1} + \beta_0 = 0 </math>
 
<math> \boldsymbol\beta^T\mathbf{x_2} + \beta_0 = 0 </math>
 
So <math>\boldsymbol\beta^T (\boldsymbol{x_1}-\boldsymbol{x_2}) = 0</math>. Thus, <math> \boldsymbol\beta \perp (\boldsymbol{x_1} - \boldsymbol{x_2}) </math>, which implies that <math>\boldsymbol \beta</math> is a normal vector to the hyper-plane.
 
'''Observation 2:''' If <math>\boldsymbol x_0</math> is a point on the hyper-plane, then <math>\boldsymbol\beta^T\boldsymbol{x_0}+\beta_0 = 0</math>, so <math>\boldsymbol\beta^T\boldsymbol{x_0} = - \beta_0</math>. Together with observation 1, this implies that <math>\boldsymbol\beta^T\boldsymbol{x} = - \beta_0</math> for every <math> \boldsymbol{x} </math> on the hyperplane.
 
 
'''Observation 3:''' Let <math>\ d_i</math> be the signed distance of point <math>\boldsymbol{x_i}</math> from the plane. The <math>\ d_i</math> is the projection of <math>(\boldsymbol{x_i} - \boldsymbol{x_0})</math> on the direction of <math>\boldsymbol\beta</math>. In other words, <math> d_i \propto \boldsymbol\beta^T(\mathbf{x - x_0}) </math>.(normalize <math>\beta</math>)
 
<math>
\begin{align}
\displaystyle d_i &= \frac{\boldsymbol\beta^T(\boldsymbol{x_i} - \boldsymbol{x_0})}{\vert \boldsymbol\beta\vert}\\ 
& = \frac{\boldsymbol{\beta^Tx_i}- \boldsymbol{\beta^Tx_0}}{\vert \boldsymbol\beta\vert}\\
& = \frac{\boldsymbol{\beta^Tx_i}+ \beta_0}{\vert \boldsymbol\beta\vert}
\end{align}
</math>
 
 
'''Observation 4:''' Let margin be the distance between the hyper-plane and the closest point. Since <math> d_i </math> is the signed distance between the hyperplane and point <math>\boldsymbol{x_i} </math>, we can define the positive distance of point <math>\boldsymbol{x_i} </math> from the hyper-plane as <math>(y_id_i)</math>.
 
<math>
\begin{align}
\displaystyle \text{Margin} &= \min\{y_i d_i\}\\
&= \min\{ \frac{y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0)}{|\boldsymbol\beta|}  \}
\end{align}
</math>
 
Our goal is to maximize the margin. This is also known as a max/min problem in optimization. When defining the hyperplane, what is important is the direction of <math>\boldsymbol\beta</math>: the value of <math>\beta_0</math> does not change the direction of the hyper-plane, it only determines the hyperplane's distance from the origin. Note that if we assume that the points do not lie on the hyper-plane, then the margin is positive:
 
<math>
\begin{align}
\displaystyle &y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq 0 &&\\
&y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq C &&\mbox{ for some positive C } \\
&y_i(\frac{\boldsymbol\beta^T}{C}\mathbf{x_i} + \frac{\beta_0}{C}) \geq 1 &&\mbox{ Divide by C}\\
&y_i(\boldsymbol\beta^{*T}\mathbf{x_i} + \beta^*_0) \geq 1 && \mbox{ By setting }\boldsymbol\beta^* = \frac{\boldsymbol\beta}{C}, \boldsymbol\beta_0^* = \frac{\boldsymbol\beta_0}{C}\\
&y_i(\boldsymbol\beta^{T}\mathbf{x_i} + \beta_0) \geq 1 && \mbox{ By setting }\boldsymbol\beta\gets\boldsymbol\beta^*, \boldsymbol\beta_0\gets\boldsymbol\beta_0^*\\
\end{align}
</math>
 
 
So with a bit of abuse of notation we can assume that
 
<math> y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq 1 </math>
 
Therefore, the problem translates to:
: <math>\, \max\{\frac{1}{||\boldsymbol\beta||}\}</math>
 
So, it is possible to re-interpret the problem as:
 
: <math>\, \min \frac 12 \vert \boldsymbol\beta \vert^2 \quad</math>  s.t. <math>\quad \,y_i (\boldsymbol\beta^{T} \boldsymbol{x_i}+ \beta_0) \geq 1 </math>
 
<math>\, \vert \boldsymbol\beta \vert </math> could be any norm, but for simplicity we use L2 norm. We use <math>\frac 12 \vert \boldsymbol\beta \vert^2</math> instead of <math>|\boldsymbol\beta|</math> to make the function differentiable. To solve the above optimization problem we can use '''Lagrange multipliers''' as follows
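
(As an aside, before the Lagrangian treatment below, note that this primal problem is a standard quadratic program and can be handed directly to a generic QP solver. The following sketch is illustrative only; it assumes MATLAB's <code>quadprog</code> from the Optimization Toolbox, a data matrix <code>X</code> (n-by-d) whose classes are linearly separable, and labels <code>y</code> in <math>\{-1,+1\}</math> as an n-by-1 vector.)

 % Illustrative sketch: X, y are assumed inputs; quadprog (Optimization Toolbox)
 % solves min (1/2) z'Hz + f'z subject to A*z <= b, with z = [beta; beta_0].
 [n, d] = size(X);
 H = blkdiag(eye(d), 0);             % quadratic term: (1/2)|beta|^2, beta_0 unpenalized
 f = zeros(d + 1, 1);                % no linear term in the objective
 A = -[bsxfun(@times, y, X), y];     % y_i(beta'*x_i + beta_0) >= 1, rewritten as
 b = -ones(n, 1);                    %   A*[beta; beta_0] <= b
 sol = quadprog(H, f, A, b);
 beta   = sol(1:d);                  % normal vector of the maximum-margin hyperplane
 beta_0 = sol(end);                  % offset
 margin = 1 / norm(beta);            % distance from the hyperplane to the closest points

At the solution, the training points with <math>y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) = 1</math> are the support vectors discussed next.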
 
=====Support Vectors=====
 
Support vectors are the training points that determine the optimal separating hyperplane that we seek. Also, they are the most difficult points to classify and at the same time the most informative for classification.
 
=====Visualizing the Cost Function=====
Recall the cost function for a single example in the logistic regression model:
 
<math>-\left( y \log \frac{1}{1+e^{-\beta^T \boldsymbol{x}}} + (1-y)\log \frac{e^{-\beta^T\boldsymbol{x}}}{1+e^{-\beta^T \boldsymbol{x}}} \right)</math>
 
where <math>y \in \{0,1\}</math>. Looking at the plot of the cost term (for y=1), if <math>y=1</math> (i.e. the target class is 1), then we want our <math>\beta</math> to be such that <math>\beta^T \boldsymbol{x} \gg 0</math>. This will ensure very accurate classification.
 
[[Image:logreg_cost.jpg|450px]]
 
Now for SVM, consider the generic cost function as follows:
 
<math>-\left( y \cdot \text{cost}_1(\beta^T \boldsymbol{x}) + (1-y)\cdot \text{cost}_0(\beta^T \boldsymbol{x}) \right)</math>
 
We can visualize <math>\text{cost}_1</math> compared with the sigmoid cost term in logistic regression as follows:
 
[[Image:svm_cost.jpg|450px]]
 
What you should take away from this is that for <math>y=1</math> we want <math>\beta^T \boldsymbol{x}\ge 1</math>. In our notes we use <math>y \in \{-1, 1\}</math>, which is why the constraint is written as <math>y_i (\beta^T \boldsymbol{x_i} + \beta_0) \ge 1</math>.

The same rationale applies for <math>y=0</math>, using the cost term <math>(1-y)\log \frac{e^{-\beta^T\boldsymbol{x}}}{1+e^{-\beta^T \boldsymbol{x}}}</math>; in that case we want <math>\beta^T \boldsymbol{x} \le -1</math>.
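To make the comparison concrete, the short R sketch below computes the logistic cost for <math>y=1</math> alongside the hinge-type cost <math>\text{cost}_1(z) = \max(0, 1-z)</math> commonly used for SVM (the hinge form is a standard choice, assumed here for illustration):

<pre>
z <- seq(-3, 3, by = 0.01)                   # z = beta^T x
logistic_cost1 <- -log(1 / (1 + exp(-z)))    # logistic cost when y = 1
hinge_cost1    <- pmax(0, 1 - z)             # SVM-style cost_1: zero once z >= 1

plot(z, logistic_cost1, type = "l", xlab = "z", ylab = "cost (y = 1)")
lines(z, hinge_cost1, lty = 2)
legend("topright", c("logistic", "hinge"), lty = c(1, 2))
</pre>

The hinge cost is exactly zero once <math>z \ge 1</math>, which is what pushes the SVM toward satisfying <math>y_i(\beta^T \boldsymbol{x_i} + \beta_0) \ge 1</math>.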
 
=====Writing Lagrangian Form of Support Vector Machine =====
 
The Lagrangian form, using [http://en.wikipedia.org/wiki/Lagrange_multipliers Lagrange multipliers] and the constraints discussed below, is introduced to ensure that the optimization conditions are satisfied and to find an optimal solution (the optimal saddle point of the Lagrangian for the [http://en.wikipedia.org/wiki/Quadratic_programming classic quadratic optimization]). The problem will be solved in dual space by introducing the <math>\,\alpha_i</math> as dual variables, in contrast to solving it in primal space as a function of the betas.  A [http://www.cs.wisc.edu/dmi/lsvm/ simple algorithm] for iteratively solving the Lagrangian has been found to run well on very large data sets, making SVM more usable.  Note that this algorithm is intended to solve Support Vector Machines with some tolerance for errors - not all points are necessarily classified correctly.  Several papers by Mangasarian explore different algorithms for solving SVM.
 
The Lagrangian function of the above optimization problem is:
 
<math>
\begin{align}
\displaystyle L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha) &= \frac 12 \vert \boldsymbol\beta \vert^2 - \sum_{i=1}^n \alpha_i \left[ y_i (\boldsymbol{\beta^T x_i}+\beta_0) -1 \right]\\
&= \frac 12 \vert \boldsymbol\beta \vert^2 - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} - \sum_{i=1}^n \alpha_i y_i \beta_0 - \sum_{i=1}^n \alpha_i
\end{align}
</math>
 
where <math>\boldsymbol\alpha = (\alpha_1 ,\dots ,\alpha_n) </math> are the Lagrange multipliers, with <math> \alpha_{i} \ge 0, \; i=1,\dots,n </math>.
 
To find the optimal value, we set the derivatives equal to zero: <math>\,\frac{\partial L}{\partial \boldsymbol{\beta}} = 0</math> and <math>\,\frac{\partial L}{\partial \beta_0} = 0</math>.
 
<math>
\begin{align}
\displaystyle &\frac{\partial L}{\partial \boldsymbol{\beta}} = \boldsymbol\beta - \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} = 0 &\Longrightarrow& \boldsymbol\beta = \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i}\\
&\frac{\partial L}{\partial \beta_0} = - \sum_{i=1}^n \alpha_i y_i = 0 &\Longrightarrow& \sum_{i=1}^n \alpha_i y_i = 0 
\end{align}
</math>
 
To get the dual form of the optimization problem we replace the above two equations in definition of <math>L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha)</math>.
 
We have:
<math>
\begin{align}
\displaystyle L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha) &= \frac 12 \boldsymbol\beta^T\boldsymbol\beta - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} - \sum_{i=1}^n \alpha_i y_i \beta_0 - \sum_{i=1}^n \alpha_i\\
&= \frac 12 \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} - 0 + \sum_{i=1}^n \alpha_i\\
&= - \frac 12 \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} + \sum_{i=1}^n \alpha_i\\
&= - \frac 12 \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i}^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} + \sum_{i=1}^n \alpha_i\\
&= \sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_jy_iy_j\boldsymbol{x_i}^T\boldsymbol{x_j}
\end{align}
</math>
 
The above function is the dual objective, which we maximize with respect to <math>\boldsymbol\alpha</math>:
 
<math>
\begin{align}
\displaystyle \max_\alpha &\sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j}\\
s.t.\; & \alpha_i \geq 0\\
& \sum_{i=1}^n \alpha_i y_i = 0
\end{align}
</math>
 
The dual is a quadratic function of several variables subject to linear constraints. This type of optimization problem is called Quadratic Programming and is much easier to solve than the primal. It is also possible to write the dual form using matrices:
 
<math>
\begin{align}
\displaystyle \max_\alpha \,& \boldsymbol\alpha^T\boldsymbol{1} - \frac 12 \boldsymbol\alpha^T S \boldsymbol\alpha\\
s.t.\; & \boldsymbol\alpha \geq 0\\
&  \boldsymbol\alpha^Ty = 0\\
& S = ([y_1,\dots, y_n]\odot X)^T ([y_1,\dots, y_n]\odot X)
\end{align}
</math>
 
 
Since <math> S = ([y_1,\dots, y_n]\odot X)^T ([y_1,\dots, y_n]\odot X) </math>, S is a positive semi-definite matrix, so the quadratic term <math>\frac 12 \boldsymbol\alpha^T S \boldsymbol\alpha</math> is convex and the dual objective is concave [http://en.wikipedia.org/wiki/Convex_function]. Consequently any local optimum of the dual is a global optimum, so it is relatively easy to find the global solution.
 
This is a much simpler optimization problem and we can solve it by [http://en.wikipedia.org/wiki/Quadratic_programming Quadratic programming]. Quadratic programming (QP) is a special type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables.
The general form of such a problem is minimize with respect to <math>\,x</math>
: <math>f(x) = \frac{1}{2}x^TQx + c^Tx</math>
subject to one or more constraints of the form:
 
<math>\,Ax\le b</math>, <math>\,Ex=d</math>.
 
A good description of general QP problem formulation and solution can be found [http://www.me.utexas.edu/~jensen/ORMM/supplements/methods/nlpmethod/S2_quadratic.pdf link here].
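As a concrete sketch, the hard-margin dual can be handed to an off-the-shelf QP solver. The R code below uses the <code>quadprog</code> package on a toy separable data set (a tiny ridge is added because <code>solve.QP</code> requires a strictly positive definite matrix), then recovers <math>\boldsymbol\beta</math> and <math>\beta_0</math> from the support vectors; the data and tolerances are illustrative only:

<pre>
library(quadprog)

# Toy separable data: rows of X are points, y in {-1, +1}
X <- rbind(c(2, 2), c(3, 3), c(2, 3), c(0, 0), c(-1, 0), c(0, -1))
y <- c(1, 1, 1, -1, -1, -1)
n <- nrow(X)

S    <- (y %*% t(y)) * (X %*% t(X))     # S_ij = y_i y_j x_i' x_j
Dmat <- S + 1e-8 * diag(n)              # small ridge: solve.QP needs positive definiteness
dvec <- rep(1, n)
Amat <- cbind(y, diag(n))               # first column: equality y' alpha = 0; rest: alpha_i >= 0
bvec <- c(0, rep(0, n))
sol  <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
alpha <- sol$solution

beta  <- t(X) %*% (alpha * y)           # beta = sum_i alpha_i y_i x_i
sv    <- which(alpha > 1e-6)            # support vectors have alpha_i > 0
beta0 <- mean(y[sv] - X[sv, ] %*% beta) # from y_i (beta' x_i + beta0) = 1 on the margin
</pre>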
 
===== Discussion on the Dual of the Lagrangian  =====
As mentioned in the previous section, solving the dual form of the Lagrangian requires quadratic programming. Quadratic programming can be used to minimize a quadratic function subject to a set of constraints. In general, for a problem with N variables, the quadratic programming solution has a computational complexity of <math>\ O(N^3) </math>
<ref name="CMBishop" />. The original problem formulation only has (d+1) variables that need to be found (i.e. the values of <math>\ \beta </math> and <math>\ \beta_0 </math>), where d is the dimensionality of the data points. However, the dual form of the Lagrangian has n variables that need to be found (i.e. all the <math>\ \alpha </math> values), where n is the number of data points. It is likely that n is larger than (d+1) (i.e. the number of data points is larger than the dimensionality of the data plus 1), which makes the dual form of the Lagrangian seem computationally inefficient <ref name="CMBishop" />. However, the dual of the Lagrangian allows the inner product <math>\ x_i^T x_j </math> to be expressed using a kernel formulation which allows the data to be transformed into higher feature spaces and thus allowing seemingly non-linearly separable data points to be separated, which is a highly useful feature described in more detail in the next class <ref name="CMBishop" />.
 
===== Support Vector Method Packages=====
 
One of the popular Matlab toolboxes for SVM is [http://www.csie.ntu.edu.tw/~cjlin/libsvm/ LIBSVM], developed in the Department of Computer Science and Information Engineering, National Taiwan University, under the supervision of Chih-Chung Chang and Chih-Jen Lin. The page provides many different interfaces for LIBSVM (Matlab, C++, Python, Perl, and several other languages), each developed at different institutes by a variety of engineers and mathematicians. The page also contains a thorough introduction to the package and its various parameters.
 
A very helpful tool which you can find on the [http://www.csie.ntu.edu.tw/~cjlin/libsvm/ LIBSVM] page is a graphical interface for SVM; it is an applet by which we can draw points corresponding to each of the two classes of the classification problem and by adjusting the SVM parameters, observe the resulting solution.
 
If you found LIBSVM helpful and wanted to use it for your research, [http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f203 please cite the toolbox].
 
A fairly long list of other SVM packages, with a comparison in terms of language, execution platform, and multiclass and regression capabilities, can be found [http://www.cs.ubc.ca/~murphyk/Software/svm.htm here].
 
Three of the most widely used SVM packages are:
 
1. LIBSVM
 
2. SVMlight
 
3. SVMTorch
 
More information which introduces SVM software and their comparison can be found [http://www.svms.org/software.html here] and [http://www.support-vector-machines.org/SVM_soft.html here].
 
== Support Vector Machine Continued (Lecture: Nov. 1, 2011)  ==
 
In the previous lecture we considered the case when data is linearly separable. The goal of the Support Vector Machine classifier is to find the hyperplane that maximizes the margin distance from the hyperplane to each of the two classes. We derived the following optimization problem based on the SVM methodology:
 
<math>\, \min_{\beta} \frac{1}{2}{|\boldsymbol{\beta}|}^2</math>
 
Subject to the constraint:
 
<math>\,y_i(\boldsymbol{\beta}^T\mathbf{x}_i+\beta_0)\geq1, \quad y_i \in \{-1,1\} \quad \forall{i} =1, \ldots , n</math><br />
 
Notice that the basic SVM is a two-class classifier; extending it to more than two classes requires additional work (see the multiclass section below).
 
This is the primal form of the optimization problem. Then we derived the dual of this problem:
 
<math>\, \max_\alpha \quad \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T\mathbf{x}_j </math>
 
Subject to constraints:
 
<math>\,\alpha_i\geq 0  </math>
 
<math>\,\sum_i \alpha_i y_i =0</math>
 
 
This is a quadratic programming problem. QP problems have been thoroughly studied and can be solved efficiently. This particular problem has a concave objective (we are maximizing) and convex constraints, so any local optimum is a global optimum, even if we use local search algorithms (e.g. gradient methods). These properties are of significant importance for classifiers and are among the most important strengths of the SVM classifier.
 
For a simple implementation of SVM that solves the above quadratic optimization problem in R, see<ref>
http://cbio.ensmp.fr/~thocking/mines-course/2011-04-01-svm/svm-qp.pdf
</ref>
 
We are able to find <math>\,\beta</math> when <math>\,\alpha</math> is found:
 
<math>\, \boldsymbol{\beta} = \sum_i \alpha_i y_i \mathbf{x}_i </math>
 
But in order to find the hyper-plane uniquely we also need to find <math>\,\beta_0</math>.
 
When finding the dual objective function, there is a set of conditions called '''KKT''' that should be satisfied.
 
=== Examining KKT Conditions ===
KKT stands for [http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions Karush-Kuhn-Tucker] (initially named after Kuhn and Tucker's work in the 1950's, however, it was later discovered that Karush had stated the conditions back in the late 1930's) <ref name="CMBishop" />
 
The K.K.T. conditions are as follows: stationarity, primal feasibility, dual feasibility, and complementary slackness.
 
It gives us a closer look into the Lagrangian equation and the associated conditions.
 
Suppose we want to find <math>\, \min_x f(x)</math> subject to the constraint <math>\, g_i(x)\geq 0 ,  \forall{x} </math>. The Lagrangian is then computed as:
 
<math>\, \mathcal{L} (x,\alpha_i)=f(x)-\sum_i \alpha_i g_i(x) </math>
 
If <math> \, x^* </math> is an optimal point of this constrained problem, the necessary conditions for <math> \, x^* </math> to be a local minimum are:
 
1) '''Stationarity''': <math>\, \frac{\partial \mathcal{L}}{\partial x} (x^*) = 0 </math> that is <math>\, f'(x^*) - \Sigma_i{\alpha_ig'_i(x^*)}=0</math>
 
2) '''Dual Feasibility''': <math>\, \alpha_i\geq 0 , </math>
 
3) '''Complementary Slackness''': <math>\, \alpha_i g_i(x^*)=0 , </math>
 
4) '''Primal Feasibility''': <math>\, g_i(x^*)\geq 0 , </math>
 
 
If any of the above four conditions is violated, then <math> \, x^* </math> cannot be an optimal solution of the constrained problem.
 
=====Support Vectors=====
Support vectors are the training points that determine the optimal separating hyperplane that we seek i.e. the margin is calculated as the distance from the hyperplane to the support vectors. Also, they are the most difficult points to classify and at the same time the most informative for classification.
 
In our case, the <math>g_i({x})</math> function is:
:<math>\,g_i(x) = y_i(\beta^Tx_i+\beta_0)-1</math>
 
Substituting <math>\,g_i</math> into KKT condition 3, we get <math>\,\alpha_i[y_i(\beta^Tx_i+\beta_0)-1] = 0</math>. <br\>In order for this condition to be satisfied either <br/><math>\,\alpha_i= 0</math> or <br/><math>\,y_i(\beta^Tx_i+\beta_0)=1</math>
 
After the rescaling above, every point <math>\,x_i</math> satisfies <math>y_i(\beta^T \boldsymbol{x_i} + \beta_0) \ge 1</math>, i.e. its (scaled) signed distance from the hyperplane, measured in the direction of its own class, is at least 1.
 
'''Case 1: a point away from the margin'''
 
If <math>\,y_i(\beta^Tx_i+\beta_0) > 1 \Rightarrow \alpha_i = 0</math>.
 
In other words, if point <math>\, x_i</math> is not on the margin (i.e. <math>\boldsymbol{x_i}</math> is not a support vector), then the corresponding <math>\,\alpha_i=0</math>.
 
'''Case 2: a point on the margin'''
 
If  <math>\,y_i(\beta^Tx_i+\beta_0) = 1 \Rightarrow \alpha_i > 0 </math>.
<br\>If point <math>\, x_i</math> is on the margin (i.e. <math>\boldsymbol{x_i}</math> is a support vector), then the corresponding <math>\,\alpha_i>0</math>.
 
 
Points on the margin, with corresponding <math>\,\alpha_i > 0</math>, are called '''''support vectors'''''.
 
Since it is impossible to know a priori which of the training data points will end up as the support vectors, we must work with the entire training set to find the optimal hyperplane. Usually only a small number of points end up as support vectors, so the resulting classifier depends on only a few training points and is efficient to apply to new data.
 
 
To compute <math>\ \beta_0</math>, we choose any <math>\,i</math> with <math>\,\alpha_i > 0</math>; for such a point the constraint is active:
 
<math>\,y_i(\beta^Tx_i+\beta_0) = 1</math>.
 
We can compute <math>\,\beta = \sum_i \alpha_i y_i x_i </math>, substitute <math>\ \beta</math> in <math>\,y_i(\beta^Tx_i+\beta_0) = 1</math> and solve for <math>\ \beta_0</math>.
 
Everything we derived so far was based on the assumption that the data is linearly separable (termed '''Hard Margin SVM'''), but there are many cases in practical applications that the data is not linearly separable.
 
=== Kernel Trick ===
 
[[File:Kerneltrick.JPG|500px|thumb|right|An example of mapping 2D space into 3D such that the inseparable red o's and the blue +'s in 2D space can be separated when mapped into 3D space <ref>
Jordan (2004). ''The Kernel Trick.'' [Lecture]. Available: [http://www.cs.berkeley.edu/~jordan/courses/281B-spring04/lectures/lec3.pdf]</ref>]]
 
We talked about the curse of dimensionality at the beginning of this course. However, we now turn to the power of high dimensions in order to find a hyperplane between two classes of data points that can linearly separate the transformed (mapped) data in a space that has a higher dimension than the space in which the training data points reside.
 
To understand this, imagine a two dimensional prison where a two dimensional person is constrained. Suppose magically we give the person a third dimension, then he can escape from the prison. In other words, the prison and the person are linearly separable now with respect to the third dimension. The intuition behind the [http://www.cs.berkeley.edu/~jordan/courses/281B-spring04/lectures/lec3.pdf kernel trick] is basically to map data to a higher dimension in which the mapped data are linearly separable by a hyperplane, even if the original data are not linearly separable.
 
The original optimal hyperplane algorithm proposed by [http://en.wikipedia.org/wiki/Vladimir_Vapnik Vladimir Vapnik] in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The algorithm is very similar, except that every dot product is replaced by a non-linear kernel function as below. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. We have seen SVM as a linear classification problem that finds the maximum margin hyperplane in the given input space. However, for many real world problems a more complex decision boundary is required. The following simple method was devised in order to solve the same linear classification problem but in a higher dimensional space, a [http://en.wikipedia.org/wiki/Feature_space feature space], under which the maximum margin hyperplane is better suited.
 
In Machine Learning, the kernel trick is a way of mapping points into an inner product space, hoping that the new space is more suitable for classification.
<math>\phi</math> is a function that maps <math>m</math>-dimensional data into a higher-dimensional space, in the hope that data which are not linearly separable in the original space become linearly separable after the mapping.
Example:
 
<math> \left[\begin{matrix}
\,x \\
\,y \\
\end{matrix}\right]  \rightarrow\  \left[\begin{matrix}
\,x^2 \\
\,y^2 \\
\, \sqrt{2}xy \\
\end{matrix}\right]</math>
 
<math>k(x,y)=\phi^{T}(x)\phi(y)</math>
 
<math> \left[\begin{matrix}
\,x_1 \\
\,y_1 \\
\end{matrix}\right]  \rightarrow\  \left[\begin{matrix}
\,x_1^2 \\
\,y_1^2 \\
\, \sqrt{2}x_1y_1 \\
\end{matrix}\right]</math>
 
<math> \left[\begin{matrix}
\,x_2 \\
\,y_2 \\
\end{matrix}\right]  \rightarrow\  \left[\begin{matrix}
\,x_2^2 \\
\,y_2^2 \\
\, \sqrt{2}x_2y_2 \\
\end{matrix}\right]</math>
 
 
 
<math>  \left[\begin{matrix}
\,x_1^2 \\
\,y_1^2 \\
\, \sqrt{2}x_1y_1 \\
\end{matrix}\right]  ^{T} *  \left[\begin{matrix}
\,x_2^2 \\
\,y_2^2 \\
\, \sqrt{2}x_2y_2 \\
\end{matrix}\right] = K(\left[\begin{matrix}
\,x_1 \\
\,y_1 \\
\end{matrix}\right],\left[\begin{matrix}
\,x_2 \\
\,y_2 \\
\end{matrix}\right] ) </math>
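A quick numerical check of this identity in R: the inner product of the explicitly mapped 3-dimensional vectors equals the polynomial kernel <math>(\mathbf{u}^T\mathbf{v})^2</math> evaluated on the original 2-dimensional points (the specific points below are arbitrary):

<pre>
phi <- function(u) c(u[1]^2, u[2]^2, sqrt(2) * u[1] * u[2])  # explicit 2D -> 3D map
k   <- function(u, v) (sum(u * v))^2                         # equivalent kernel (u'v)^2

u <- c(1, 2); v <- c(3, -1)
sum(phi(u) * phi(v))   # inner product after mapping
k(u, v)                # same value, without ever computing phi
</pre>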
 
Recall our objective function: <math>\sum_i \alpha_i - \frac{1}{2} \sum_{ij} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T\mathbf{x}_j</math>
We can replace <math> \mathbf{x}_i^T\mathbf{x}_j </math> by <math> \mathbf{\phi^{T}(x_i)}\mathbf{\phi(x_j)}= k(x_i,x_j) </math>
 
 
<math> \left[\begin{matrix}
\,k(x_1, x_1)& \,k(x_1, x_2)& \cdots &\,k(x_1, x_n) \\
\vdots& \vdots& \vdots& \vdots\\
\,k(x_n, x_1)& \,k(x_n, x_2)& \cdots &\,k(x_n, x_n) \\
\end{matrix}\right] </math>
 
 
In most real world cases the data points are not linearly separable. How can the above methods be generalized to the case where the decision function is not a linear function of the data? Boser, Guyon and Vapnik, 1992, showed that a rather old trick (Aizerman, 1964) can be used to accomplish this in an astonishingly straightforward way. First notice that the only way in which the data appears in the dual-form optimization problem is in the form of dot products: <math>\mathbf{x}_i^T\mathbf{x}_j</math>. Now suppose we first use a non-linear operator <math> \Phi (\mathbf{x}) </math> to map the data points to some other higher dimensional space (possibly infinite dimensional) <math> \mathcal{H} </math> (called Hilbert space or feature space), where they can be classified linearly. The figure below illustrates this concept:
 
 
[[File:kernell trick.jpg|500px|thumb|centre|Mapping of not-linearly separable data points in a two-dimensional space to a three-dimensional space where they can be linearly separable by means of a kernel function.]]
 
 
In other words, a linear learning machine can be employed in the higher dimensional feature space to solve the original non-linear problem. Then of course the training algorithm would only depend on the data through dot products in <math> \mathcal{H} </math>, i.e. on functions of the form <math><\Phi (\mathbf{x}_i),\Phi (\mathbf{x}_j)> </math>. Note that the actual mapping <math> \Phi (\mathbf{x}) </math> does not need to be known; only the inner product of the mapping is needed to modify the support vector machine so that it can separate non-linearly separable data. Avoiding the explicit mapping to the higher dimensional space is preferable, because higher dimensional spaces may suffer from the ''curse of dimensionality''.
 
So the hypothesis in this case would be
 
<math>f(\mathbf{x}) = \boldsymbol{\beta}^T \Phi (\mathbf{x}) + \beta_0</math>
 
which is linear in terms of the new space that <math> \Phi (\mathbf{x}) </math> maps the data to, but non-linear in the original space. Now we can extend all the presented optimization problems for the linear case, for the transformed data in the feature space. If we define the kernel function as
 
<math> K (\mathbf{x}_i,\mathbf{x}_j) = <\Phi (\mathbf{x}_i),\Phi (\mathbf{x}_j)> = \Phi(\mathbf{x}_i)^T \Phi (\mathbf{x}_j)</math>
 
where <math>\ \Phi </math> is a mapping from input space to an (inner product) feature space. Then the corresponding dual form is
 
 
<math>L(\boldsymbol{\alpha}) =\sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_jy_iy_j K (\mathbf{x}_i,\mathbf{x}_j)</math>
 
subject to    <math>\sum_{i=1}^n \alpha_i y_i=0 \quad \quad \alpha_i \geq 0,\quad i=1, \cdots, n</math>
 
 
The cost function <math> L(\boldsymbol{\alpha}) </math> is convex and quadratic in terms of the unknown parameters. This problem is solved through quadratic programming. The [http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions KKT] conditions for this equation lead to the following final decision rule:
 
<math> L(\mathbf{x}, \boldsymbol{\alpha}^{\ast}, \beta_0) =\sum_{i=1}^{N_{sv}} y_i \alpha_i^{\ast} K (\mathbf{x}_i,\mathbf{x}) + \beta_0</math>
 
 
where <math>\ N_{sv} </math> and <math>\ \alpha_i</math> denote number of support vectors and the non-zero Lagrange multipliers corresponding to the support vectors respectively.
 
Several typical choices of kernels are linear, polynomial, Sigmoid or Multi-Layer Perceptron (MLP) and Gaussian or Radial Basis Function (RBF) kernel. Their expressions are as following:
 
Linear kernel: <math> K (\mathbf{x}_i,\mathbf{x}_j) =  \mathbf{x}_i^T\mathbf{x}_j</math>       
 
Polynomial kernel: <math> K (\mathbf{x}_i,\mathbf{x}_j) =  (1 + \mathbf{x}_i^T\mathbf{x}_j)^p</math>               
 
Sigmoid (MLP) kernel: <math> K (\mathbf{x}_i,\mathbf{x}_j) = \tanh (k_1\mathbf{x}_i^T\mathbf{x}_j +k_2)</math>   
 
Gaussian (RBF) kernel: <math>\ K(\mathbf{x}_i,\mathbf{x}_j) = \exp\left[\frac{-(\mathbf{x}_i - \mathbf{x}_j)^T (\mathbf{x}_i - \mathbf{x}_j)}{2\sigma^2 }\right]</math> 
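As a reference sketch, the four kernels above can be written directly as R functions (the parameter names <code>p</code>, <code>k1</code>, <code>k2</code> and <code>sigma</code> follow the formulas; the default values are placeholders), together with a helper that builds the Gram matrix used in the dual problem:

<pre>
linear_kernel  <- function(xi, xj) sum(xi * xj)
poly_kernel    <- function(xi, xj, p = 2) (1 + sum(xi * xj))^p
sigmoid_kernel <- function(xi, xj, k1 = 1, k2 = 0) tanh(k1 * sum(xi * xj) + k2)
rbf_kernel     <- function(xi, xj, sigma = 1) exp(-sum((xi - xj)^2) / (2 * sigma^2))

# Gram matrix K_ij = K(x_i, x_j) for a data matrix X whose rows are the points
gram <- function(X, kern, ...) {
  n <- nrow(X)
  K <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n) K[i, j] <- kern(X[i, ], X[j, ], ...)
  K
}
</pre>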
 
 
Kernel functions satisfying [http://en.wikipedia.org/wiki/Mercer%27s_condition Mercer's conditions] not only enable implicit mapping of the data from input space to feature space, but also ensure the convexity of the cost function, which leads to a unique optimum. Mercer's condition states that a continuous symmetric function <math> K(\mathbf{x},\mathbf{y}) </math> must be positive semi-definite in order to be a kernel function, i.e. one that can be written as an inner product between mapped data pairs. Note that we only need to use K in the training algorithm, and never need to know <math>\ \Phi </math> explicitly.
 
Furthermore, one can construct new kernels from previously defined kernels.[http://www.cc.gatech.edu/~ninamf/ML10/lect0309.pdf]  Given two kernels <math>K_1 (\mathbf{x}_i,\mathbf{x}_j)</math> and <math>K_2 (\mathbf{x}_i,\mathbf{x}_j)</math>, properties include:
 
1.  <math>K (\mathbf{x}_i,\mathbf{x}_j) = \alpha K_1 (\mathbf{x}_i,\mathbf{x}_j) + \beta K_2 (\mathbf{x}_i,\mathbf{x}_j) </math> for <math> \alpha , \beta \geq 0 </math>
 
2.  <math>K (\mathbf{x}_i,\mathbf{x}_j) = K_1 (\mathbf{x}_i,\mathbf{x}_j) K_2 (\mathbf{x}_i,\mathbf{x}_j) </math>
 
3.  <math>K (\mathbf{x}_i,\mathbf{x}_j) = K_1 (f ( \mathbf{x}_i ) ,f ( \mathbf{x}_j ) ) </math> where <math>\, f \colon X \rightarrow X </math>
 
4.  <math>K (\mathbf{x}_i,\mathbf{x}_j) = f ( K_1 ( \mathbf{x}_i  , \mathbf{x}_j  ) ) </math> where <math>\, f </math> is a polynomial with positive coefficients.
 
 
In the case of the Gaussian or RBF kernel, for example, <math> \mathcal{H} </math> is infinite dimensional, so it would not be very easy to work with <math> \Phi </math> explicitly. However, if one replaces <math> \mathbf{x}_i^T \mathbf{x}_j </math> by  <math> K (\mathbf{x}_i,\mathbf{x}_j) </math> everywhere in the training algorithm, the algorithm will happily produce a support vector machine that lives in an infinite dimensional space, and furthermore do so in roughly the same amount of time it would take to train on the un-mapped data. All the considerations of the previous sections hold, since we are still doing a linear separation, but in a different space.
 
 
The choice of which kernel is best for a particular application has to be determined through trial and error. The Gaussian (RBF) kernel is often a good default choice for classification tasks with SVM.
 
 
The video below shows a graphical illustration of how a polynomial kernel works to a get better sense of kernel concept:
 
[http://www.youtube.com/watch?v=3liCbRZPrZA Mapping data points to a higher dimensional space using a polynomial kernel]
 
====Kernel Properties====
Kernel functions must be continuous, symmetric, and most preferably should have a positive (semi-)definite Gram matrix. The Gram matrix is the matrix whose elements are <math>\ g_{ij} = K(x_i,x_j) </math>. Kernels which satisfy Mercer's theorem are positive semi-definite, meaning their kernel matrices have no negative eigenvalues. The use of a positive definite kernel ensures that the optimization problem will be convex and the solution will be unique. <ref> Reference:http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html#kernel_properties</ref>
 
 
Furthermore, kernels can be categorized into classes based on their properties <ref name="Genton"> M. G. Genton, "Classes of Kernels for Machine Learning: A Statistics Perspective," ''Journal of Machine Learning Research 2'', 2001</ref>:
* ''Nonstationary kernels'' are explicitly dependent on both inputs (e.g., the polynomial kernel).
* ''Stationary kernels'' are invariant to translation (e.g., the Gaussian kernel which only looks at the distance between the inputs).
* ''Reducible kernels'' are nonstationary kernels that can be reduced to stationary kernels via a bijective deformation (for more detailed information see <ref name = "Genton" />).
 
====Further Information of Kernel Functions====
 
In class we have studied 3 kernel functions, linear, polynomial and gaussian kernel. The following are some properties for each:
# '''Linear Kernel''' is the simplest kernel. Algorithms using this kernel are often equivalent to non-kernel algorithms such as standard PCA
# '''Polynomial Kernel''' is a non-stationary kernel, well suited when training data is normalized.
# '''Gaussian Kernel''' is an example of radial basis function kernel.
 
When choosing a kernel we need to take into account the data we are trying to model. For example, data that clusters in circles (or hyperspheres) is better classified by Gaussian Kernel.
 
Beyond the kernel functions we discussed in class, such as Linear Kernel, Polynomial Kernel and Gaussian Kernel functions, many more kernel functions can be used in the application of kernel methods for machine learning.
 
Some examples are: Exponential Kernel, Laplacian Kernel, ANOVA Kernel, Hyperbolic Tangent (Sigmoid) Kernel, Rational Quadratic Kernel, Multiquadric Kernel, Inverse Multiquadric Kernel, Circular Kernel, Spherical Kernel, Wave Kernel, Power Kernel, Log Kernel, Spline Kernel, B-Spline Kernel, Bessel Kernel, Cauchy Kernel, Chi-Square Kernel, Histogram Intersection Kernel, Generalized Histogram Intersection Kernel, Generalized T-Student Kernel, Bayesian Kernel, Wavelet Kernel, etc.
 
You may visit http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html#kernel_functions for more information.
 
=== Case 2: Linearly Non-Separable Data (Soft Margin) ===
 
The original SVM was designed specifically for separable data. Since this is a very strong requirement, Vladimir Vapnik and Corinna Cortes later proposed removing it; the resulting method is called the Soft Margin Support Vector Machine. One of the advantages of SVM is that it is relatively easy to generalize to the case where the data is not linearly separable.
 
When the two classes are not linearly separable, no hyperplane can completely separate them. The idea is then to allow some points to cross the margin or be misclassified, while minimizing the total amount by which they violate the constraint:
 
<math>\, y_i(\beta^T x_i + \beta_0) \geq 1</math>
 
Hence we allow some of the points to cross the margin (or equivalently violate our constraint) but on the other hand we penalize our objective function (so that the violations of the original constraint remains low):
 
<math>\, \min \left(\frac{1}{2} |\beta|^2 +\gamma \sum_i \zeta_i\right) </math>
 
And now our constraint is as follows:
 
<math>\, y_i(\beta^T x_i + \beta_0) \geq 1-\zeta_i</math>
 
<math>\, \zeta_i \geq 0</math>
 
We have to check that all '''KKT''' conditions are satisfied:
 
<math>\, \mathcal{L}(\beta,\beta_0,\zeta_i,\alpha_i,\lambda_i)=\frac{1}{2}|\beta|^2+\gamma \sum_i \zeta_i -\sum_i \alpha_i[y_i(\beta^T x_i +\beta_0)-(1-\zeta_i)] - \sum_i \lambda_i \zeta_i</math>
 
<math>\, 1) \frac{\partial\mathcal{L}}{\partial \beta}=\beta-\sum_i \alpha_i y_i x_i \rightarrow \beta=\sum_i \alpha_i y_i x_i</math>
 
<math>\, 2) \frac{\partial\mathcal{L}}{\partial \beta_0}=\sum_i \alpha_i y_i =0</math>
 
 
<math>\, 3) \frac{\partial\mathcal{L}}{\partial \zeta_i}=\gamma - \alpha_i - \lambda_i = 0 \rightarrow \lambda_i = \gamma - \alpha_i</math>
 
Substituting these conditions back into the Lagrangian yields the dual form, which we derive in the next lecture.
 
== Support Vector Machine Continued (Lecture: Nov. 3, 2011)  ==
 
=== Case 2: Linearly Non-Separable Data (Soft Margin [http://fourier.eng.hmc.edu/e161/lectures/svm/node5.html]) Continued ===
 
Recall from last time that soft margins are used instead of hard margins when we are using SVM to classify data points that are '''not''' linearly separable. 
 
===== Soft Margin SVM Derivation of Dual =====
 
The soft-margin SVM optimization problem is defined as:
 
<math>\min \{\frac{1}{2}|\boldsymbol{\beta}|^2 + \gamma\sum_i \zeta_i\}</math>
 
subject to the constraints
<math>y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad ,\quad \zeta_i \ge 0</math>,
 
where <math>\gamma \sum_i \zeta_i</math> is the penalty term on the slack variables. If all <math>\zeta_i=0</math> we recover the Hard Margin SVM classifier, while <math>\zeta_i > 0 </math> means that point <math>\boldsymbol{x_i}</math> is allowed to cross the margin.
 
In other words, we have relaxed the constraint for each <math>\boldsymbol{x_i}</math> so that it can violate the margin by an amount <math>\zeta_i</math>.
As such, we want to make sure that all <math>\zeta_i</math> values are as small as possible. So, we penalize them in the objective function by a factor of some chosen <math>\gamma</math>.
 
=====Forming the Lagrangian=====
 
In this case we have two sets of constraints in the Lagrangian primal form (the margin constraints and <math>\zeta_i \ge 0</math>), and therefore we optimize with respect to two sets of dual variables <math>\, \alpha</math> and <math>\,\lambda</math>,
 
<math>
L(\boldsymbol{\beta},\beta_0,\zeta_i,\alpha_i,\lambda_i) = \frac{1}{2} |\boldsymbol{\beta}|^2 + \gamma \sum_i \zeta_i - \sum_i \alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] - \sum_i \lambda_i \zeta_i
</math>
 
Note the following simplification:
 
<math>- \sum_i \alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] = -\boldsymbol{\beta}^T\sum_i\alpha_i y_i x_i-\beta_0\sum_i\alpha_iy_i+\sum_i\alpha_i-\sum_i\alpha_i\zeta_i</math>
 
=====Apply KKT conditions=====
 
<math>
\begin{align}
1) &\frac{\partial \mathcal{L}}{\partial \boldsymbol{\beta}} = \boldsymbol{\beta}-\sum_i \alpha_i y_i \boldsymbol{x_i} = 0  \\
&  \rightarrow \boldsymbol{\beta} = \sum_i \alpha_i y_i \boldsymbol{x_i}  \\
&\frac{\partial \mathcal{L}}{\partial \beta_0} = \sum_i \alpha_i y_i = 0  \\
&\frac{\partial \mathcal{L}}{\partial \zeta_i} = \gamma - \alpha_i - \lambda_i = 0  \\
&  \rightarrow \boldsymbol{\gamma} = \alpha_i + \lambda_i \\
2) &\text{dual feasibility: } \alpha_i \ge 0, \lambda_i \ge 0 \\
3) &\alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] = 0, \text{ and } \lambda_i \zeta_i = 0 \\
4) &y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad,\quad \zeta_i \ge 0  \\
\end{align}
</math>
 
=====Objective Function=====
Simplifying the Lagrangian the same way we did with the hard margin case, we get the following:
 
<math>
\begin{align}
L &= \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \gamma \sum_i \zeta_i - \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} - \beta_0 \sum_i \alpha_i y_i + \sum_i \alpha_i - \sum_i \alpha_i \zeta_i - \sum_i \lambda_i \zeta_i  \\
&= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i - 0 + (\sum_i \gamma \zeta_i - \sum_i \alpha_i \zeta_i - \sum_i \lambda_i \zeta_i)  \\
&= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i + \sum_i (\gamma - \alpha_i - \lambda_i) \zeta_i  \\
&= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i
\end{align}
</math>
 
subject to the constraints:
 
<math>
\begin{align}
\alpha_i &\ge 0 \\
\sum_i \alpha_i y_i &= 0 \\
\lambda_i &\ge 0
\end{align}
</math>
 
Notice that the simplified Lagrangian is exactly the same as in the hard margin case. The only difference in the soft margin case is the additional constraint <math>\lambda_i \ge 0</math>. Although <math>\gamma</math> does not appear directly in the objective function, from <math>\gamma = \alpha_i + \lambda_i</math> we can discern the following:
 
<math>\lambda_i = 0 \implies \alpha_i = \gamma</math>
 
<math>\lambda_i > 0 \implies \alpha_i < \gamma</math>
 
Thus, we can derive that the only difference with the soft margin case is the constraint <math>0 \le \alpha_i \le \gamma</math>. This problem can be solved with quadratic programming.
 
===== Soft Margin SVM Formulation Summary =====
 
In summary, the primal form of the soft-margin SVM is given by:
 
<math>
\begin{align}
\min_{\boldsymbol{\beta}, \boldsymbol{\zeta}} \quad &  \frac{1}{2}|\boldsymbol{\beta}|^2 + \gamma\sum_i \zeta_i \\
\text{s.t. }  & y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad, \quad \zeta_i \ge 0  \qquad i=1,...,M
\end{align}
</math>
 
 
The corresponding dual form which we derived above is:
 
<math>
\begin{align}
\max_{\boldsymbol{\alpha}} \quad & \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} \\
\text{s.t. }  &  \sum_i \alpha_i y_i = 0  \\
& 0 \le \alpha_i \le \gamma,  \qquad i=1,...,M
\end{align}
</math>
 
Note, the soft-margin dual objective is identical to the hard margin dual objective!  The only difference is that now the <math>\,\alpha_i</math> variables cannot be unbounded and are restricted to be at most <math>\,\gamma</math>.  This restriction allows the optimization problem to remain feasible when the data is non-separable.  In the hard-margin case, when <math>\,\alpha_i</math> is unbounded there may be no finite maximum for the objective and we would not be able to converge to a solution.
 
Also note, <math>\,\gamma</math> is a model parameter and must be fixed to a chosen constant.  It controls the trade-off between the size of the margin and the amount of violation.  In a data set with a lot of noise (or non-separability) you may want to choose a smaller <math>\,\gamma</math> to ensure a large margin.  In practice, <math>\,\gamma</math> is chosen by cross-validation, which tests the model on a held-out sample to determine which <math>\,\gamma</math> gives the best result.  However, it may be troublesome to work with <math>\,\gamma</math> since <math>\,\gamma \in (0, \infty)</math>.  So often a variant formulation, known as <math>\,\nu</math>-SVM, is used, which employs a better-scaled parameter <math>\,\nu \in (0,1)</math> instead of <math>\,\gamma</math> to balance margin versus separability.
 
Finally note that as <math>\,\gamma \rightarrow \infty</math>, the soft-margin SVM converges to hard-margin, as we do not allow any violation.
 
=====Soft Margin SVM Problem Interpretation =====
 
Like in the case of hard-margin the dual formulation for soft-margin given above allows us to interpret the role of certain points as support vectors. 
 
We consider three cases:
 
'''Case 1:'''  <math>\,\alpha_i=\gamma</math>
 
From KKT condition 1 (third part), <math>\,\gamma - \alpha_i - \lambda_i = 0</math> implies <math>\,\lambda_i = 0</math>.
 
From KKT condition 3 (second part) <math>\,\lambda_i \zeta_i = 0</math> this now suggests <math>\,\zeta_i > 0</math>. 
 
Thus this is a point that violates the margin, and we say <math>\,x_i</math> is inside the margin.
 
'''Case 2:'''  <math>\,\alpha_i=0</math>
 
From KKT condition 1 (third part), <math>\,\gamma - \alpha_i - \lambda_i = 0</math> implies <math>\,\lambda_i > 0</math>.
 
From KKT condition 3 (second part) <math>\,\lambda_i \zeta_i = 0</math> this now implies <math>\,\zeta_i = 0</math>. 
 
Finally, from KKT condition 3 (first part), <math>y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) > 1-\zeta_i</math>, and since <math>\,\zeta_i = 0</math>, the point is classified correctly and we say <math>\,x_i</math> is outside the margin.  In particular, <math>\,x_i</math> does not play a role in determining the classifier and if we ignored it, we would get the same result.
 
'''Case 3:'''  <math>\,0 < \alpha_i < \gamma</math>
 
From KKT condition 1 (third part), <math>\,\gamma - \alpha_i - \lambda_i = 0</math> implies <math>\,\lambda_i > 0</math>.
 
From KKT condition 3 (second part) <math>\,\lambda_i \zeta_i = 0</math> this now implies <math>\,\zeta_i = 0</math>. 
 
Finally, from KKT condition 3 (first part), <math>y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) = 1-\zeta_i</math>, and since <math>\,\zeta_i = 0</math>, the point is on the margin and we call it a support vector.
 
'''Note (converse of Case 1):''' if <math>\, \zeta_i > 0</math>, then <math>\,\lambda_i \zeta_i = 0</math> forces <math>\,\lambda_i=0</math>, which in turn implies <math>\,\alpha_i=\gamma </math>. Such a point satisfies <math>y_i(\boldsymbol{\beta}^T \boldsymbol{x_i}+\beta_0) = 1-\zeta_i < 1 </math>, so it is closer to the decision boundary than the margin, i.e. <math>\,x_i</math> is inside the margin, consistent with Case 1.
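Given a solved soft-margin dual, these cases can be read off directly from the <math>\,\alpha_i</math> values. A small R sketch (assuming <code>alpha</code> and <code>gamma</code> come from a solver, with a numerical tolerance):

<pre>
# alpha: solved dual variables, gamma: penalty parameter, tol: numerical tolerance
classify_points <- function(alpha, gamma, tol = 1e-6) {
  ifelse(alpha < tol,         "outside margin (alpha = 0)",
  ifelse(alpha > gamma - tol, "inside margin (alpha = gamma)",
                              "on margin: support vector (0 < alpha < gamma)"))
}
</pre>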
 
=====Soft Margin SVM with Kernel =====
 
Like hard-margin SVM, we can use the kernel trick to find a non-linear classifier using the dual formulation.
 
In particular, we define a non-linear mapping for <math> \boldsymbol{x_i} </math> as <math> \Phi(\boldsymbol{x_i}) </math>, then in dual objective we compute <math> \Phi^T(\boldsymbol{x_i}) \Phi(\boldsymbol{x_j}) </math> instead of <math> \boldsymbol{x_i}^T \boldsymbol{x_j} </math>.    Using a kernel function <math> K(\boldsymbol{x_i}, \boldsymbol{x_j}) = \Phi^T(\boldsymbol{x_i}) \Phi(\boldsymbol{x_j}) </math> from the list provided in the previous lecture notes, we then do not need to explicitly map <math> \Phi(\boldsymbol{x_i}) </math>.
 
The dual problem we solve is:
 
<math>
\begin{align}
\max_{\boldsymbol{\alpha}} \quad & \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(\boldsymbol{x_i}, \boldsymbol{x_j}) \\
\text{s.t. }  &  \sum_i \alpha_i y_i = 0  \\
& 0 \le \alpha_i \le \gamma,  \qquad i=1,...,M
\end{align}
</math>
 
where <math>\, K(\boldsymbol{x_i}, \boldsymbol{x_j}) </math> is an appropriate kernel function.
 
To make it clear why we do not need to explicitly map <math> \Phi(\boldsymbol{x_i}) </math>: If we use the kernel trick, both hard- and soft-margin SVMs find the following value for the optimum <math> \boldsymbol{\beta} </math>:
 
<math> \boldsymbol{\beta} = \sum_i \alpha_i y_i \Phi(\boldsymbol{x_i}) </math>
 
From the definition of the classifier, the class labels for points are given by:
 
<math> \boldsymbol{\beta}^T \Phi(\boldsymbol{x}) + \beta_0 </math>
 
Plugging the formula for <math> \boldsymbol{\beta} </math> in the expression above we get:
 
<math> \sum_i \alpha_i y_i \Phi^T(\boldsymbol{x_i}) \Phi(\boldsymbol{x}) + \beta_0 </math>
 
which, from the properties of kernel functions, is equal to:
 
<math> \sum_i \alpha_i y_i K(\boldsymbol{x_i}, \boldsymbol{x}) + \beta_0 </math>
 
Thus, we do not need to explicitly map <math> \boldsymbol{x_i} </math> to a higher dimension.
 
=====Soft Margin SVM Implementation =====
 
The SVM optimization problem is a quadratic program and we can use any quadratic solver to accomplish this.  For example, matlab's optimization toolbox provides <code>quadprog</code>.  Alternatively, CVX (by Stephen Boyd) is an excellent optimization toolbox that integrates with matlab and allows one to enter convex optimization problems as though they are written on paper (and it is free). 
 
We prefer to solve the dual since it is an easier problem (and also allows us to use a kernel).  Using CVX this could be coded as
 
<pre>
% Assumes: X is the M-by-d data matrix, y is the M-by-1 vector of labels (+1/-1),
% and gamma is the chosen penalty parameter.
M = size(X, 1);
K = X*X';                      % Linear kernel (Gram matrix)
H = (y*y') .* K;
cvx_begin
    variable alpha(M,1);
    maximize( sum(alpha) - 0.5*quad_form(alpha, H) )   % dual objective
    subject to
        y'*alpha == 0;
        alpha >= 0;
        alpha <= gamma;
cvx_end
</pre>
 
which provides us with optimal <math>\,\boldsymbol{\alpha}</math>. 
 
Now we can obtain <math>\,\beta_0</math> by using any point on the margin (i.e. <math>\,0 < \alpha_i < \gamma</math>), and solving
 
<math>
y_i \left(\sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x_i}) + \beta_0 \right) = 1
</math>
 
Note that taking <math>\,K(\boldsymbol{x_i}, \boldsymbol{x_j}) = \boldsymbol{x_i}^T \boldsymbol{x_j}</math> (the linear kernel) recovers the original, non-kernelized formulation.
 
Finally, we can classify a new data point <math>\,\boldsymbol{x}</math>, according to
 
<math>h(\boldsymbol{x})  = 
\begin{cases}
+1,  \ \ \text{if } \sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x}) + \beta_0 > 0\\
-1,  \ \  \text{if } \sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x}) + \beta_0 < 0
\end{cases}
</math>
 
Alternatively, using plain Matlab and <code>quadprog</code>, the following code finds <code>b</code> (the weight vector <math>\,\boldsymbol{\beta}</math>) and <code>b0</code> (the offset <math>\,\beta_0</math>).
 
<pre>
% Assumes: X is the ell-by-d data matrix, y is the ell-by-1 label vector (+1/-1),
% and gamma is the penalty parameter.
ell = size(X, 1);
K = X * X';                                    % linear kernel (Gram matrix)
H = (y * y') .* (K + (1/gamma) * eye(ell));    % ridge term: a 2-norm soft-margin variant
f = -ones(ell, 1);
LB = zeros(ell, 1);
UB = gamma * ones(ell, 1);
alpha = quadprog(H, f, [], [], y', 0, LB, UB); % minimizes 0.5*a'*H*a + f'*a
b = X' * (alpha .* y);                         % weight vector beta = sum_i alpha_i y_i x_i
% Pick a support vector from the positive class to solve for the offset b0
i = min(find((alpha > 0.1) & (y == 1)));
b0 = 1 - K(i, :) * (alpha .* y);
</pre>
 
===== Intuitive Connection to Hard Margin Case =====
The forms of the dual in the Hard Margin and Soft Margin cases are exceedingly similar; the only difference is the further restriction (<math>\ \alpha_i \le \gamma</math>) on the dual variables. You could even use the soft margin formulation on a problem where the hard margin problem is feasible. This is not typically done, but doing so can give considerable insight into how the soft margin problem reacts to changes in <math>\ \gamma </math>. If we let <math>\ \gamma \to +\infty</math> we see that the soft margin problem approaches the hard margin problem. If we examine the primal problem this matches our intuitive expectation: as <math>\ \gamma \to +\infty</math> the penalty for being inside the margin increases to infinity, and thus the optimal solution places paramount importance on having a hard margin.
 
When choosing <math>\ \gamma </math> one needs to be careful and understand the implications. Values of <math>\ \gamma </math> that are too large result in a slavish dedication to getting as close to a hard margin as possible, which can produce poor decisions, especially if there are outliers involved. Values of <math>\ \gamma </math> that are too small do not adequately penalize misclassified points. It is important both to test different values of <math>\ \gamma </math> and to exercise discretion when selecting which values of <math>\ \gamma </math> to test. It is also important to examine the impact of outliers, as they can be extremely destructive to the usefulness of the SVM classifier.
 
 
===Multiclass Support Vector Machines===
 
Support vector machines were originally designed for binary classification; therefore we need a methodology to adapt binary SVMs to a multi-class problem. How to effectively extend SVMs for multi-class classification is still an ongoing research issue. Currently the most popular approach for multi-category SVM is to construct and combine several binary classifiers. Different coding and decoding strategies can be used for this purpose, among which one-against-all and one-against-one (pairwise) are the most popular <ref name="CMBishop" />.
 
====One-Against-All method====
Assume that we have <math>\ k </math> discrete classes. For a one-against-all SVM, we determine <math>\ k </math> decision functions that separate one class from the remaining classes. Let the <math>\ i^{th} </math> decision function, with the maximum margin, that separates class <math>\ i </math> from the remaining classes be:
 
 
<math>D_i(\mathbf{x})=\mathbf{w}_i^Tf(\mathbf{x})+b_i</math>
 
 
The hyperplane<math>\ D_i(\mathbf{x})=0 </math> forms the optimal separating hyperplane and if the classification problem is separable, the training data <math>\mathbf{x}</math> belonging to class <math>\ i</math> satisfy
 
<math>\begin{cases}
D_i(\mathbf{x})\geq1 &,\mathbf{x}\text{ belongs to class }i\\
D_i(\mathbf{x})\leq-1 &,\mathbf{x}\text{ belongs to the remaining classes}\\
\end{cases}
</math>
 
In other words, the decision rule uses the sign of <math>\ D_i(\mathbf{x})</math> and is therefore a discrete function. If the above condition is satisfied for more than one <math>\ i </math>, or for no <math>\ i </math> at all, then <math>\mathbf{x}</math> is unclassifiable. The figure below demonstrates the one-against-all multi-class scheme, where the pink area is the unclassifiable region.
 
[[File:one-vs-all multiclass.jpg|400px|thumb|centre|one-against-all multi-class scheme]]
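As an illustrative sketch (note that the <code>e1071</code> package's <code>svm()</code> itself uses one-against-one internally), the R code below builds a one-against-all classifier from <math>\ k </math> binary SVMs on the iris data; the linear kernel and the use of training accuracy are arbitrary choices:

<pre>
library(e1071)
data(iris)

classes <- levels(iris$Species)
X <- iris[, 1:4]

# One binary SVM per class: class i ("pos") versus the rest ("neg")
models <- lapply(classes, function(cl) {
  yi <- factor(ifelse(iris$Species == cl, "pos", "neg"), levels = c("pos", "neg"))
  svm(X, yi, kernel = "linear")
})

# Decision value D_i(x) for every class; classify by arg max_i D_i(x)
D <- sapply(models, function(m) {
  dv <- attr(predict(m, X, decision.values = TRUE), "decision.values")
  if (colnames(dv)[1] == "pos/neg") dv[, 1] else -dv[, 1]   # keep sign: positive means "pos"
})
pred <- classes[max.col(D)]
mean(pred == iris$Species)   # training accuracy of the one-against-all scheme
</pre>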
 
====One-Against-One (Pairwise) method====
 
In this method we construct a binary classifier for each possible pair of classes and therefore for <math>\ k </math>  classes we will have <math>\frac{(k)(k-1)}{2} </math>  decision functions. The decision function for the pair of classes <math>i</math>  and <math>j</math>  is given by
 
<math>D_{ij}=\mathbf{w}_{ij}^Tf(\mathbf{x})+b_{ij}</math>
 
 
where <math>D_{ji}(\mathbf{x})=-D_{ij}(\mathbf{x})</math>.
 
 
The final decision is achieved by maximum voting scheme. That is for the datum <math>\mathbf{x}</math> we calculate
 
 
<math>D_i(\mathbf{x})=\sum_{j\neq i,\,j=1}^{k}\operatorname{sign}(D_{ij}(\mathbf{x}))</math>
 
 
And <math>\mathbf{x}</math> is classified into the class:  <math>arg\quad \max_i\quad D_i({\mathbf{x}})</math>
 
 
Figure below demonstrates the one-vs-one multi-class scheme where the pink area is the unclassifiable region.
 
 
 
[[File:one-vs-one multiclass.jpg|400px|thumb|centre|one-vs-one multi-class scheme]]
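A corresponding R sketch of the one-against-one scheme, making the pairwise training and the voting explicit (again on the iris data, with illustrative settings):

<pre>
library(e1071)
data(iris)

classes <- levels(iris$Species)
pairs   <- combn(classes, 2, simplify = FALSE)   # all k(k-1)/2 pairs of classes
X <- iris[, 1:4]

# One binary SVM per pair of classes, trained only on the two classes involved
models <- lapply(pairs, function(p) {
  idx <- iris$Species %in% p
  svm(X[idx, ], droplevels(iris$Species[idx]), kernel = "linear")
})

# Each pairwise classifier votes; the class with the most votes wins
votes <- sapply(models, function(m) as.character(predict(m, X)))
pred  <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(pred == iris$Species)   # training accuracy of the one-against-one scheme
</pre>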
 
===Advantages of Support Vector Machines===
 
* SVMs provide good out-of-sample generalization. This means that, by choosing an appropriate regularization (penalty) parameter, SVMs can be robust even when the training sample has some bias. This is mainly due to the selection of the optimal hyperplane.
* SVMs deliver a unique solution, since the optimality problem is convex. This is an advantage compared to neural networks, which have multiple solutions associated with local minima and for this reason may not be robust over different samples.
* State-of-the-art accuracy on many problems.
* SVMs can handle many different data types by changing the kernel.
 
===Disadvantages of Support Vector Machines===
 
*Difficulties in the choice of the kernel (which we will study further later).

*Limitations in speed and size, both in training and testing.
 
*Discrete data presents another problem, although with suitable rescaling excellent results have nevertheless been obtained.
 
*The optimal design for multiclass SVM classifiers is a further area for research.
 
*A problem with SVMs is the high algorithmic complexity and extensive memory requirements of the required quadratic programming in large-scale tasks.
 
===Comparison with Neural Networks <ref>www.cs.toronto.edu/~ruiyan/csc411/Tutorial11.ppt</ref>===
 
#Neural Networks:
##Hidden Layers map to lower dimensional spaces
##Search space has multiple local minima
##Training is expensive
##Classification extremely efficient
##Requires number of hidden units and layers
##Very good accuracy in typical domains
#SVMs
##Kernel maps to a very-high dimensional space
##Search space has a unique minimum
##Training is extremely efficient
##Classification extremely efficient
##Kernel and cost the two parameters to select
##Very good accuracy in typical domains
##Extremely robust
 
=== The Naive Bayes Classifier  ===
 
The naive Bayes classifier is a very simple (and often effective) classifier based on Bayes rule.
For further reading check [http://www.saylor.org/site/wp-content/uploads/2011/02/Wikipedia-Naive-Bayes-Classifier.pdf]
 
The naive Bayes assumption is that all the features are conditionally independent given the class label. Even though this is usually false (features are often dependent), the resulting model is easy to fit and works surprisingly well.
 
Each feature <math>\,x_{ij}</math>, <math>\,j = 1, ..., d</math>, is assumed to be conditionally independent of the other features given the class label, where <math>\, \mathbf{x}_i \in \mathbb{R}^d</math>.
 
Thus the Bayes classifier is
<math> h(\mathbf{x}) = \arg\max_k \quad \pi_k f_k(\mathbf{x})</math>
 
where <math>\hat{f}_k(\mathbf{x}) = \hat{f}_k(x_1 x_2 ... x_d)= \prod_{j=1}^d \hat{f}_{kj}(x_j)</math>.
 
We can see this a direct application of Bayes rule
<math> P(Y=k|X=\mathbf{x})  =\frac{P(X=\mathbf{x}|Y=k) P(Y=k)} {P(X=\mathbf{x})} =  \frac{f_k(\mathbf{x}) \pi_k} {\sum_{l} f_l(\mathbf{x}) \pi_l}</math>,
 
with <math>\, f_k(\mathbf{x})=f_{k1}(x_1)f_{k2}(x_2)\cdots f_{kd}(x_d)</math> and <math>\ \mathbf{x} \in \mathbb{R}^d</math>.
 
Note that earlier we assumed class-conditional densities that were multivariate normal with a dense covariance matrix. Here we are forcing the covariance matrix to be diagonal. This simplification, while often unrealistic, can provide a more robust model.
 
As another example, consider the 'iris' dataset in R. We would like to use known data (sepal length, sepal width, petal length, and petal width) to predict species of iris. As is typically done, we will use the maximum a posteriori (MAP) rule to decide the class to which each observation belongs. The code for using a built-in function in R to classify is:
 
<pre style="align:left; width: 75%; padding: 2% 2%">
#If you were to use a built-in function for Naive Bayes Classification,
#this is how it would work:
 
library(lattice) #these are the libraries from which packages are needed
library(class)
library(e1071)
 
count = 0 #This will keep track of properly classified objects
attach(iris)
model <- (Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width)
m <- naiveBayes(model, data = iris)
p <- predict(m, iris) #You could also use a table here
for(i in 1:length(Species)) {
if (p[i] == Species[i]) {
count = count + 1
}}
misclass = (length(Species)-count)/length(Species)
misclass
#So we get that 4% of the points are misclassified.
</pre>
 
In this particular dataset, we would not expect naïve Bayes to be the best approach for classification, since the assumption of independent predictor variables is violated (sepal length and sepal width are related, for example). However, misclassification rate is low, which indicates that naïve Bayes does a good job of classifying these data.
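For comparison, here is a minimal from-scratch sketch of the same Gaussian naive Bayes rule (diagonal covariance, MAP decision) on the iris data; it should give essentially the same misclassification rate as the built-in function above:

<pre>
# Gaussian naive Bayes by hand: per-class, per-feature means and variances (diagonal covariance)
data(iris)
X <- as.matrix(iris[, 1:4]); y <- iris$Species

fit <- lapply(levels(y), function(k) list(
  prior = mean(y == k),
  mu    = colMeans(X[y == k, ]),
  var   = apply(X[y == k, ], 2, var)))
names(fit) <- levels(y)

# log pi_k + sum_j log f_kj(x_j), then arg max over classes (MAP rule)
log_post <- sapply(fit, function(p)
  log(p$prior) + colSums(dnorm(t(X), mean = p$mu, sd = sqrt(p$var), log = TRUE)))
pred <- levels(y)[max.col(log_post)]
mean(pred != y)   # misclassification rate; compare with the built-in naiveBayes result above
</pre>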
 
=== [http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm K-Nearest-Neighbors(k-NN)] ===
 
[[File:KNN.jpg|250px|thumb|right|Classifying x by assigning it the label most frequently represented among k nearest samples and use a voting scheme.]]
 
Given a data point x, find the k nearest data points to x and classify x using the majority vote of these k neighbors (k is a positive
integer, typically small.) If k=1, then the object is simply assigned to the class of its nearest neighbor.
 
 
# Ties can be broken randomly.
# k can be chosen by cross-validation
# k-nearest neighbor algorithm is sensitive to the local structure of the data<ref>
http://www.saylor.org/site/wp-content/uploads/2011/02/Wikipedia-k-Nearest-Neighbor-Algorithm.pdf</ref>.
# Nearest neighbor rules in effect compute the decision boundary in an implicit manner.
 
=====Requirements of k-NN:=====
<ref>http://courses.cs.tamu.edu/rgutier/cs790_w02/l8.pdf</ref>
# An integer k
# A set of labeled examples (training data)
# A metric to measure “closeness”
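Putting these pieces together (an integer <math>k</math>, a labelled training set, and Euclidean distance as the closeness measure), a minimal R example using the <code>class</code> package follows; the train/test split and <math>k=5</math> are illustrative choices:

<pre>
library(class)
data(iris)

set.seed(1)
idx   <- sample(nrow(iris), 100)             # illustrative train/test split
train <- iris[idx, 1:4];  test <- iris[-idx, 1:4]
cl    <- iris$Species[idx]

pred <- knn(train, test, cl, k = 5)          # majority vote among the 5 nearest neighbours
mean(pred == iris$Species[-idx])             # test accuracy
</pre>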
 
=====Advantages:=====
# With a large training sample, its error rate approaches (within a small factor of) the optimal Bayes error rate.
# Simple implementation
# There are some noise reduction techniques that work only for k-NN to improve the efficiency and accuracy of the classifier.
 
=====Disadvantages:=====
# If the training set is too large, it may have poor run-time performance.
# k-NN is very sensitive to irrelevant features since all features contribute to the similarity and thus to classification.<ref>
http://www.google.ca/url?sa=t&rct=j&q=k%20nearest%20neighbors%20disadvantages&source=web&cd=1&ved=0CCIQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.100.1131%26rep%3Drep1%26type%3Dpdf&ei=3feyToHMG8Xj0QGOoMDKBA&usg=AFQjCNFF1XsYgZy1W2YLQMNTq_7s07mfqg&sig2=qflY4MffEHwP9n-WpnWMdg</ref>
# Small training sets can lead to a high misclassification rate.
# kNN suffers from the curse of dimensionality.  As the number of dimensions of the feature space increases, points become further apart from each other, making it harder to classify new points. In 10 dimensions, each point needs to cover an area of approximately 80% the value of each coordinate to capture 10% of the data. (See textbook page 23). Algorithms to solve this problem include approximate nearest neighbour. <ref>P. Indyk and R. Motwani, Approximate nearest neighbors: towards removing the curse of dimensionality. STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing. pg 604-613.</ref>
 
=====Extensions and Applications=====
 
In order to improve the obtained results, we can do following:
# Preprocessing: smoothing the training data (remove any outliers and isolated points)
# Adapt metric to data
 
Besides classification, k-nearest-neighbours is useful for other tasks as well. For example, k-NN has been used in regression and in product recommendation systems<ref>
http://www.cs.ucc.ie/~dgb/courses/tai/notes/handout4.pdf</ref>.
 
In 1996 Support Vector Regression <ref>"Support Vector Regression Machines". Advances in Neural Information Processing Systems 9, NIPS 1996, 155–161, MIT Press.</ref> was proposed. SVR depends only on a subset of the training data, since the cost function ignores training points whose predictions lie within a threshold of their targets.
 
SVM is commonly used in Bioinformatics. Common uses include classification of DNA sequences, promoter recognition, and identifying disease-related microRNAs. Promoters are short sequences of DNA that act as a signal for gene expression. In one paper, Robertas Damaševičius tries using a power series kernel function and 11 classification rules for data projection to classify these sequences, to aid active gene location.<ref>Damaševičius, Robertas. "Analysis of Binary Feature Mapping Rules for Promoter Recognition in Imbalanced DNA Sequence Datasets using Support Vector Machine". Proceedings from 4th International IEEE Conference "Intelligent Systems". 2008.</ref> MicroRNAs are non-coding RNAs that target mRNAs for cleavage in protein synthesis. There is growing evidence suggesting that microRNAs "play important roles in human disease development, progression, prognosis, diagnosis and evaluation of treatment response". Therefore, there is increasing research into the role of microRNAs underlying human diseases. SVM has been proposed as a method of classifying positive microRNA-disease associations from negative ones.<ref>Jiang, Qinghua; Wang, Guohua; Zhang, Tianjiao; Wang, Yadong. "Predicting Human microRNA-disease Associations Based on Support Vector Machine". Proceedings from IEEE International Conference on Bioinformatics and Biomedicine. 2010.</ref>
 
=====Selecting k=====
Generally speaking, a larger k reduces the effect of noise on the classification, but as k increases so does the computational cost. To determine an optimal k, cross-validation can be used.<ref>http://chem-eng.utoronto.ca/~datamining/dmc/k_nearest_neighbors_reg.htm</ref> Traditionally, k is fixed for each test example. Another approach, the adaptive k-nearest neighbour algorithm, was proposed to improve the selection of k. In this algorithm k is not a fixed number but depends on the nearest neighbour of the data point. In the training phase, the algorithm calculates the optimal k for each training data point, which is the minimum number of neighbours required to get the correct class label. In the testing phase, it finds the nearest neighbour of the testing data point and its corresponding optimal k. Then it performs the k-NN algorithm using that k to classify the data point. <ref>Shiliang Sun, Rongqing Huang, "An adaptive k-nearest neighbor algorithm",  2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), 2010.</ref>
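As a concrete illustration of choosing k by cross-validation, here is a minimal R sketch using the knn function from the class package; the 5-fold split, the candidate values of k and the use of the iris data are arbitrary illustrative choices, not part of the lecture.

<pre style = "align:left; width:100%; padding: 2% 2%">
library(class)   # provides the knn() classifier

set.seed(1)
X <- iris[, 1:4]                                   # features
y <- iris[, 5]                                     # class labels
folds <- sample(rep(1:5, length.out = nrow(X)))    # 5-fold assignment

cv.error <- sapply(1:15, function(k) {             # candidate values of k
  mean(sapply(1:5, function(f) {
    pred <- knn(train = X[folds != f, ], test = X[folds == f, ],
                cl = y[folds != f], k = k)
    mean(pred != y[folds == f])                    # misclassification rate on the held-out fold
  }))
})

best.k <- which.min(cv.error)                      # k with the lowest cross-validated error
</pre>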
 
=====Further Readings=====
1- SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1641014 here]
 
2- SVM application list[http://www.clopinet.com/isabelle/Projects/SVM/applist.html here]
 
3- The kernel trick for distances [http://74.125.155.132/scholar?q=cache:AfKdFY6a1cMJ:scholar.google.com/&hl=en&as_sdt=2000 here]
 
4- Exploiting the kernel trick to correlate fragment ions for peptide identification via tandem mass spectrometry [http://bioinformatics.oxfordjournals.org/content/20/12/1948.short here]
 
5- General overview of SVM and Kernel Methods. Easy to understand presentation. [http://www.support-vector.net/icml-tutorial.pdf here]
 
== Supervised Principal Component Analysis (Lecture: Nov. 8, 2011) ==
 
Recall that '''PCA''' finds the direction of maximum variation of <math>d</math>-dimensional data, and may be used as a dimensionality reduction pre-processing operation for classification. '''FDA''' is a form of supervised dimensionality reduction or feature extraction that finds the best direction to project the data in order for the data points to be easily separated into their respective classes by considering inter- and intra-class distances (i.e. minimize intra-class distance and variance, maximize inter-class distance and variance). PCA differs from FDA in that PCA is unsupervised, whereas FDA is supervised. Thus, FDA is better at finding directions that separate the data points for classification in a supervised problem.
 
'''Supervised PCA (SPCA)''' is a generalization of PCA. SPCA can use label information for classification tasks and it has some advantages over FDA. For example, FDA will only project onto <math>\ k-1 </math> dimensional space regardless of the dimensionality of the data where <math>\ k </math> is the number of classes. This is not always desirable for dimensionality reduction.
 
SPCA estimates the sequence of principal components having the maximum dependency on the response variable. It can be solved in closed form, has a dual formulation that reduces the computational complexity when the dimension of the data is significantly greater than the number of data points, and it can be kernelized. <ref>Elnaz Barshan, Ali Ghodsi, Zohreh Azimifar, and Mansoor Zolghadri. Supervised Principal Component Analysis: Visualization, Classification and Regression on Subspaces and Submanifolds , Journal of  Pattern Recognition, to appear 2011</ref>
 
===SPCA Problem Statement===
Suppose we are given a set of data <math>\ \{x_i, y_i\}_{i=1}^n , x_i \in R^{p}, y_i \in R^{l}</math>. Note that <math>\ y_i</math> is not restricted to binary classes. So the assumption of having only discrete values for labels is relaxed here, which means this model can be used for regression as well. Target values (<math>\ y </math>) don't have to be in a one dimensional space. Just as for PCA, we are looking for a lower dimensional subspace <math>\ S = U^T X </math>, where <math>\ U </math> is an orthogonal projection. However, instead of finding the direction of maximum variation (as is the case in regular PCA), we are looking for the subspace that contains as much predictive information about <math>\ Y </math> as the original covariate <math>\ X </math>, i.e. we are trying to determine a projection matrix <math>\ U</math> such that <math>\ P(Y|X)=P(Y|U^TX) </math>. We know that the predictive information must exist between the original covariate <math>\ X </math> and <math>\ Y </math>, which are assumed to be drawn iid from the distribution <math>\ \{x_i, y_i\}_{i=1}^n </math>, because if they are completely independent there is no way of doing classification or regression.
 
===Warning===
If we project our data into a high enough dimension, we can fit any data - even noise. In his book "The God gene: how faith is hardwired into our genes", Dean H. Hamer discusses how factor analysis (a model which "uses regression modelling techniques to test hypotheses producing error terms" <ref>use regression modelling techniques to test hypotheses producing error terms</ref>) was used to find a correlation between a gene (VMAT2) and a person's belief in God. The full book is available at: <ref>http://books.google.ca/books?id=TmR6uAAHEssC&pg=PA33&lpg=PA33&dq=god+gene+statistics&source=bl&ots=8q-jSwKZ8O&sig=O8OBe2YaPbE0vMp9A6PxEC9DwL0&hl=en&ei=lWO8Tp_nN4H40gGA2uXjBA&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCEQ6AEwAQ#v=onepage&q&f=false </ref>.
 
It appears as though finding a correlation between seemingly uncorrelated data is sometimes statistically trivial. One study found correlations between people's shopping habits and their genetics. Family members were shown to have far more similar consumer habits than those who did not share DNA. This was then used to explain "fondness for specific products such as chocolate, science-fiction movies, jazz, hybrid cars and mustard." <ref>http://www.businessnewsdaily.com/genetics-incluence-shopping-habits-0593/</ref>.
 
The main idea is that when we are in a high-dimensional space <math>\ \mathbb{R}^d</math>, if we do not have enough data (i.e. <math>n \approx d</math>), then it is easy to find a classifier that separates the data across its many dimensions.
 
===Different Techniques for Dimensionality Reduction===
* Classical '''Fisher's Discriminant Analysis (FDA)'''
 
The goal of FDA is to reduce the dimensionality of data in <math>\ \mathbb{R}^d</math> in order to have separable data points in a new space of lower dimension (at most <math>\ k-1</math>, where <math>\ k</math> is the number of classes, as noted above).
 
* '''Metric Learning (ML)'''
 
This is a large family of methods.
 
* '''Sufficient Dimensionality Reduction (SDR)'''
 
This is also a family of methods. In recent years SDR has been used to denote a body of new ideas and methods for dimension reduction. Like Fisher's classical notion of a sufficient statistic, SDR strives for reduction without loss of information. But unlike sufficient statistics, sufficient reductions may contain unknown parameters and thus need to be estimated.
 
* '''Supervised Principal Components (BSPC)'''
 
A method proposed by Bair et al. This is a different method from the SPCA method discussed in class despite having a similar name.
 
===Metric Learning ===
First define a new metric as:
 
<math>\ d_A(\mathbf{x}_i, \mathbf{x}_j)=||\mathbf{x}_i -\mathbf{x}_j||_A = \sqrt{(\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j)}</math>
 
This metric will only satisfy the requisite properties of a metric if <math>\ A </math> is a positive definite matrix.
This restriction is often relaxed to positive semi-definite. Relaxing this condition may be required if we wish to disregard uninformative covariates.
 
''Note 1:'' <math>\ A </math> being positive semi-definite ensures that this metric respects non-negativity and the triangle inequality, but allows <math>\ d_A(\mathbf{x}_i,\mathbf{x}_j)=0</math> to not imply <math>\ \mathbf{x}_i=\mathbf{x}_j</math> <ref name="Xing">Xing, EP. Distance metric learning with application to clustering with side-information. [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.7952&rep=rep1&type=pdf]</ref>.
 
''Common choices for A''
 
1)<math>\ A=I</math> This represents Euclidean distance.
 
2)<math>\ A=D</math> where <math>\ D</math> is a diagonal matrix. The diagonal values can be thought of as reweighting the importance of each covariate, and these weights can be learned from training data.
 
3)<math>\ A=D</math> where <math>\ D</math> is a diagonal matrix with <math>\ D_{ii} = Var(i^{th} covariate)^{-1} </math>. This represents scaling down each covariate so that they all have equal variance and thus equal impact on the distance. This metric is consistent with, and works very well for, covariates that are independent and normally distributed.
 
4)<math>\ A=\Sigma^{-1} </math> where <math>\ \Sigma </math> is the covariance matrix of the covariates. This metric is consistent with, and works very well for, covariates that are normally distributed. The corresponding metric is called the Mahalanobis distance.
 
When dealing with data that are on different measurement scales, using choice 3 or 4 is vastly preferable to Euclidean distance, as it prevents covariates with large measurement scales from dominating the metric.
 
 
For metric learning, construct the Mahalanobis distance over the input space and use it instead of the Euclidean distance. This is equivalent to transforming the data points using a linear transformation and then computing the Euclidean distance in the new transformed space. To see that this is true, suppose we project each data point onto a subspace <math>\ S </math> using <math>\ \mathbf{x}' = U^T\mathbf{x}</math> and calculate the Euclidean distance:
 
<math>\ ||\mathbf{x}_i' - \mathbf{x}_j'||_2^2= (U^T\mathbf{x}_i -U^T\mathbf{x}_j)^T(U^T\mathbf{x}_i -U^T\mathbf{x}_j) = (\mathbf{x}_i -\mathbf{x}_j)^TUU^T(\mathbf{x}_i -\mathbf{x}_j)</math>
 
This is the same as Mahalanobis distance in the new space for <math>\ A=UU^T</math>.
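The equivalence above is easy to verify numerically. The short R sketch below (an illustration, not part of the lecture) checks that the distance induced by <math>\ A=UU^T</math> equals the Euclidean distance between the projected points; the random projection <math>\ U</math> and the two points are arbitrary.

<pre style = "align:left; width:100%; padding: 2% 2%">
set.seed(2)
d <- 5; p <- 2
U  <- matrix(rnorm(d * p), d, p)        # an arbitrary projection matrix
xi <- rnorm(d); xj <- rnorm(d)          # two arbitrary points

A     <- U %*% t(U)                                        # induced metric A = U U^T
dA    <- sqrt(as.numeric(t(xi - xj) %*% A %*% (xi - xj)))  # d_A(x_i, x_j)
dProj <- sqrt(sum((t(U) %*% xi - t(U) %*% xj)^2))          # Euclidean distance after projecting with U^T

all.equal(dA, dProj)                    # TRUE up to floating-point error
</pre>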
 
One way to find <math>\ A</math> is to consider the set of similar pairs <math>\ (\mathbf{x}_i,\mathbf{x}_j) \in S</math> and the set of dissimilar pairs <math>\ (\mathbf{x}_i,\mathbf{x}_j) \in D</math>. Then we can solve the convex optimization problem below <ref name="Xing" />.
 
<math> min_A \sum_{(\mathbf{x}_i,\mathbf{x}_j)\in S} (\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j) </math>
 
s.t. <math>  \sum_{(\mathbf{x}_i,\mathbf{x}_j)\in D} (\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j)\ge 1 </math> and <math>\ A</math> positive semi-definite.
 
 
Overall, the metric learning technique will attempt to minimize the squared induced distance between similar points while maximizing the squared induced distance between dissimilar points and search for a metric which allows points from the same class to be near one another and points from different classes to be far from one another.
 
===Sufficient Dimensionality Reduction (SDR)===
 
The goal of dimensionality reduction is to find a function <math>\ S(\mathbf{x}) </math> that maps <math>\ \mathbf{x} </math> from <math>\ \mathbb{R}^n </math> to a proper subspace, which means that the dimension of <math>\ \mathbf{x} </math> is being reduced. An example of <math>\ S(\mathbf{x}) </math> would be a function that uses several linear combinations of <math>\ \mathbf{x} </math>.
 
For a dimensionality reduction to be sufficient the following condition must hold:
 
::<math>\ P_{Y|X}(y|x) = P_{Y|S(X)}(y|S(x)) </math>
 
This is equivalent to saying that the distribution of <math>\ y|S(\mathbf{x})</math> is the same as that of <math>\ y |\mathbf{x} </math> [http://rsta.royalsocietypublishing.org/content/367/1906/4385.full]
 
This method aims to find a linear subspace <math>\ R </math> such that the projection onto this subspace preserves <math>\ P_{Y|X}(y|x) </math>.
 
Suppose that <math>\ S(\mathbf{x}) = U^T\mathbf{x} </math> is a sufficient dimensional reduction, then
 
<math>\ P_{Y|X}(y|x) = P_{Y|U^TX}(y|U^T x) </math>
 
for all <math>\ x \in X </math>, and <math>\ y \in Y </math>, where <math>\ U^T X </math> is the orthogonal projection of <math>\ X </math> onto <math>\ R </math>.
 
====Graphical Motivation====
In a regression setting, it is often useful to summarize the distribution of <math>y|\textbf{x}</math> graphically. For instance, one may consider a scatter plot of <math>y</math> versus one or more of the predictors. A scatter plot that contains all available regression information is called a sufficient summary plot.
 
When <math>\textbf{x}</math> is high-dimensional, particularly when the number of features of <math>\ X </math> exceeds 3, it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatter plots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction <math>R(\textbf{x})</math> with small enough dimension, a sufficient summary plot of <math>y</math> versus <math>R(\textbf{x})</math> may be constructed and visually interpreted with relative ease.
 
Hence sufficient dimension reduction allows for graphical intuition about the distribution of <math>y|\textbf{x}</math>, which might not have otherwise been available for high-dimensional data.
 
Most graphical methodology focuses primarily on dimension reduction involving linear combinations of <math>\textbf{x}</math>. The rest of this article deals only with such reductions.[http://en.wikipedia.org/wiki/Sufficient_dimension_reduction#Graphical_motivation]
 
====Other Methods for Reduction====
Two very common examples of SDR are Sliced Inverse Regression (SIR) and Sliced Average Variance Estimation (SAVE). More information on SIR can be found here [http://en.wikipedia.org/wiki/Sliced_inverse_regression]. In addition [http://mars.wiwi.hu-berlin.de/mediawiki/teachwiki/index.php/Sliced_Inverse_Regression] also provides some examples for SIR.
 
===Supervised Principal Components (BSPC)===
 
BSPC algorithm:
 
1. Compute (univariate) standard regression coefficients for each feature j using the following formula:
 
<math>\ s_j=\frac{{X_j}^TY}{\sqrt{X_j^T X_j}} </math>
 
2. Form the reduced data matrix <math>\ X_o </math> consisting of only those columns (features) <math>\ j</math> for which <math>\ |s_j|>\theta</math>. Find <math>\ \theta</math> by cross-validation.
 
3. Compute the first principal component of the reduced data matrix <math>\ X_o </math>
 
4. Use the principal component calculated in step (3) in a regression model or a classification algorithm to produce the outcome
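The steps above are straightforward to prototype. Below is a minimal R sketch on simulated data (an illustration, not part of the lecture); for simplicity the threshold <math>\ \theta</math> is fixed rather than chosen by cross-validation, and an ordinary linear regression is used in the final step. All constants are arbitrary.

<pre style = "align:left; width:100%; padding: 2% 2%">
set.seed(3)
n <- 100; p <- 50
X <- matrix(rnorm(n * p), n, p)        # rows = observations, columns = features
y <- X[, 1] + X[, 2] + rnorm(n)        # only the first two features are relevant

# 1. univariate regression coefficient s_j for each feature
s <- apply(X, 2, function(xj) sum(xj * y) / sqrt(sum(xj * xj)))

# 2. keep only the features with |s_j| > theta (fixed here; normally chosen by cross-validation)
theta <- 5
Xo <- X[, abs(s) > theta, drop = FALSE]

# 3. first principal component of the reduced data matrix
pc1 <- prcomp(Xo, center = TRUE)$x[, 1]

# 4. use the component in a regression model for the outcome
fit <- lm(y ~ pc1)
</pre>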
 
 
Bair's SPCA is consistent. In ordinary PCA, as the number of data points increases the estimated component directions can keep changing; however, the direction of the first component of Bair's SPCA remains consistent as the number of points increases <ref>Bair E., Prediction by supervised principal components. [http://stat.stanford.edu/~tibs/ftp/spca.pdf]</ref>.
 
===Hilbert-Schmidt Independence Criterion (HSIC)===
"Hilbert-Schmidt Norm of the Cross-Covariance operator" is proposed as an independence criterion in reproducing kernel Hilbert spaces (RKHSs).
 
The measure is referred to as the '''Hilbert-Schmidt Independence Criterion (HSIC)'''.
 
Let <math>\ z=\{(x_1,y_1),\ldots,(x_n,y_n)\} \subset \mathcal{X} \times \mathcal{Y}</math> be a series of <math>\ n</math> independent observations drawn from <math>\ P_{(X,Y)}(x,y)</math>. An estimator of HSIC is given by
 
<math>HSIC=\frac{1}{(n-1)^2}Tr(KHBH)</math>
 
where <math>\ H, K, B \in\mathbb{R}^{n \times n}</math>,
 
<math>K_{ij} =k(x_i,x_j),B_{ij}=b(y_i,y_j), H=I-\frac{1}{n}\boldsymbol{e} \boldsymbol{e}^{T}  </math>, where <math>\ k</math> and <math>\ b</math> are positive semidefinite kernel functions, and <math>\ \boldsymbol{e} = [1 1 \ldots 1]^T</math>.
 
<math>\ XH</math> is the centered version of <math>\ X</math> (the mean of each row is subtracted):
 
<math>XH=X(I- \frac{1}{n}\boldsymbol{e} \boldsymbol{e}^T)=X -\frac{1}{n}X\boldsymbol{e} \boldsymbol{e}^T</math>, where each entry in row <math>\ i</math> of <math>\frac{1}{n}X\boldsymbol{e}\boldsymbol{e}^T</math> is the mean of the <math>\ i^{th}</math> row of <math>\ X</math>.
 
<math>HBH</math> is the doubly centered version of <math>\ B</math> (both the row means and the column means are subtracted).
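To make the estimator concrete, here is a small R sketch (not from the lecture) that computes <math>\ \tfrac{1}{(n-1)^2}Tr(KHBH)</math> on simulated data, using a linear kernel for <math>\ X</math> and a delta kernel for the labels <math>\ Y</math>; both kernel choices and the sample size are arbitrary.

<pre style = "align:left; width:100%; padding: 2% 2%">
set.seed(4)
n <- 60
X <- matrix(rnorm(n * 3), n, 3)                       # n observations of a 3-dimensional X
y <- sample(c(1, 2), n, replace = TRUE)               # binary labels

K <- X %*% t(X)                                       # linear kernel over X: K_ij = <x_i, x_j>
B <- outer(y, y, function(a, b) as.numeric(a == b))   # delta kernel over Y
H <- diag(n) - matrix(1, n, n) / n                    # centering matrix H = I - (1/n) e e^T

HSIC <- sum(diag(K %*% H %*% B %*% H)) / (n - 1)^2    # empirical HSIC
</pre>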
 
We have introduced a way of measuring the dependence between two random variables. The key idea is that good features should maximize this dependence. Feature selection for various supervised learning problems can be unified under HSIC, and the solutions can be approximated using a backward-elimination algorithm. To explain this, we start by asking how to tell whether two distributions are the same. If two distributions have different means, we can say right away that they are different. However, if they share the same mean, we need to compare the second moments (variances), then higher moments, and so on. Hence we need to map the data into a higher-dimensional space to tell whether two distributions are equal.
 
It can be mathematically shown (although not done in class) that if we define a mapping <math>\ \phi </math> of the random variable X, which maps X to a higher-dimensional space, then there is a unique correspondence between <math>\ \mu_x</math>, the mean of <math>\ \phi(X)</math> in the higher-dimensional space, and the distribution of X. This suggests that <math>\ \mu_x</math> characterizes the distribution of X.
 
Hence, to figure out whether two random variables X and Y have the same distribution, we can take the difference between <math>\ E[\phi(X)]</math> and <math>\ E[\phi(Y)]</math> and examine its norm:

<math>\ || E[\phi(X)] - E[\phi(Y)] ||^2</math>

If this value is equal to 0, then we know that they have the same distribution.
 
Now, to test the independence of <math>\ P_X</math> and <math>\ P_Y</math>, we can apply the previous formula to <math>\ P_{XY}</math> and <math>\ P_X P_Y</math>: if it equals 0, then <math>\ X</math> and <math>\ Y</math> are independent. The larger the difference, the more dependent <math>\ X</math> and <math>\ Y</math> are.
 
Utilizing this, we can find the <math>\ U^TX </math> in <math>\ P(Y|X)=P(Y|U^TX)  </math> that maximizes the HSIC between <math>\ U^TX </math> and <math>\ Y</math>, i.e. the projection with maximum dependence on the labels.
 
 
This leads to an index called HSIC:

<math>\ Tr(KHBH) </math>

where <math>\ X, Y</math> are random variables, <math>\ K</math> is a kernel matrix over <math>\ X</math>, and <math>\ B</math> is a kernel matrix over <math>\ Y</math>.
 
==='''Kernel Function'''===
A positive definite kernel can always be written as inner products of a feature mapping.<br />
To prove that a kernel function is valid:<br />
1. define a feature mapping <math> \phi(x) </math> into some vector space.<br />
2. define a dot product in a strictly positive definite form<br />
3. show that <math>\ k(x, x') = \langle\phi(x),\phi(x')\rangle</math><br />
[http://www.public.asu.edu/~ltang9/presentation/kernel.pdf]
 
The kernel function will be used when calculating <math>\ || E\phi(x) - E\phi(y) ||^2</math>.
The possible kernel functions we can choose are:
 
* Linear kernel: <math>\,k(x,y)=x \cdot y</math>
* Polynomial kernel: <math>\,k(x,y)=(x \cdot y)^d</math>
* Gaussian kernel: <math>\,k(x,y)=e^{-\frac{\|x-y\|^2}{2\sigma^2}}</math>
* Delta Kernel: <math>\,k(x_i,x_j) =
    \begin{cases}
    1 & \text{if }x_i=x_j \\ 0 & \text{if }x_i\ne x_j
    \end{cases}
    </math>
 
<math>\ H</math> is a constant matrix of the form <math>\ H = I - \frac{1}{n}ee^T </math>, where <math>\ e = (1, 1, \ldots, 1)^T</math>.

Multiplying by <math>\ H</math> centers any matrix: <math>\ XH</math> subtracts the mean of each row of <math>\ X</math>, and <math>\ HBH</math> makes <math>\ B</math> doubly centered.
 
 
We wanted the transformation <math>\ U^TX </math> that has the maximum dependence on Y. So we use the HSIC index to measure the dependence between <math>\ U^TX</math> and <math>\ Y</math> and maximize it.
 
'''H''' centers <math>\ X</math>: <math>\ XH</math> plays the role of <math>\ X-\mu</math>. The larger the resulting HSIC value, the more dependent the two variables are.
 
So basically we want to maximize <math>\ Tr(KHBH)</math>:

<math>\ \max_U \, Tr(KHBH)</math>

<math>\ = \max_U \, Tr(X^TUU^TXHBH)</math>

<math>\ = \max_U \, Tr(U^TXHBHX^TU)</math>

We add a constraint to make the problem well posed:

<math>\ U^TU=I</math>

This is identical to PCA if <math>\ B=I</math>.
 
===SPCA: Supervised Principal Component Analysis===
 
We need to find <math>\ U </math> to maximize <math>\ Tr(HKHB) </math>
where K is a Kernel of <math>\ U^T X </math> (eg: <math>\ X^T UU^T X </math>) and <math>\ B </math> is a Kernel of <math>\ Y </math>(eg: <math>\ Y^T Y </math>):
 
      {| class="wikitable" cellpadding="5"
|- align="center"
! <math>\ X </math>
!    <math>\ Y </math>
|- align="center"
| <math>\ U^T X </math>
|    <math>\ Y </math>
|-
| <math>\ (U^T X)^T (U^T X) = X^T UU^T X </math>
| <math>\ B </math>
|}
 
    <math>\max \; Tr(HKHB) </math>
    <math>\ \; \; = \; \max Tr(HX^T UU^T XHB) </math>
    <math>\ \; \; = \; \max Tr(U^T XHBHX^T U) </math>
    <math>\ subject \; to \; U^T U = I </math>
 
===Supervised Principal Component Analysis and Conventional PCA===
 
[[File:012DR-PCA.jpg|300px|thumb|right|Dimensionality Reduction of the 0-1-2 Data, Using PCA]]
[[File:012DR-SPCA.jpg|300px|thumb|right|Dimensionality Reduction of the 0-1-2 Data, Using Supervised PCA]]
 
 
This is identical to PCA if <math>\ B = I</math>:

    <math>XHBHX^T = XHX^T = \sum_{i} (\mathbf{x}_i-\mu)(\mathbf{x}_i-\mu)^T</math>, which is (up to a constant factor) the sample covariance of <math>\ X</math>.
 
===SPCA===
Algorithm 1  <br />
- Recover basis: Calculate <math>Q=XHBHX^T</math> and let <math>U</math> = the eigenvectors of <math>Q</math> corresponding to the top d eigenvalues.<br />
- Encode training data: <math>Z=U^TXH</math>, where <math>Z</math> is the <math>d \times n</math> matrix of encoded data (written <math>Z</math> here to avoid confusion with the labels <math>Y</math>) <br />
- Reconstruct training data:  <math>\hat{X}=UZ=UU^TXH</math>  <br />
- Encode test example: <math>z=U^T(x-\mu)</math>, where <math>z</math> is a d-dimensional encoding of <math>x</math>.  <br />
- Reconstruct test example:  <math>\hat{x}=Uz=UU^T(x-\mu)</math>  <br />
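As a concrete illustration of Algorithm 1, here is a minimal R sketch (not from the lecture). It assumes the data matrix has observations in its columns, uses a linear kernel <math>\ B=Y^TY</math> over a one-dimensional response, and fixes the reduced dimension to k = 2; all of these are arbitrary illustrative choices.

<pre style = "align:left; width:100%; padding: 2% 2%">
set.seed(5)
d <- 10; n <- 80; k <- 2
X <- matrix(rnorm(d * n), d, n)       # data matrix with observations in the columns
Y <- matrix(rnorm(n), 1, n)           # a one-dimensional response, one value per observation

H <- diag(n) - matrix(1, n, n) / n    # centering matrix H = I - (1/n) e e^T
B <- t(Y) %*% Y                       # linear kernel over the response, B = Y^T Y

# Recover basis: top-k eigenvectors of Q = X H B H X^T
Q <- X %*% H %*% B %*% H %*% t(X)
U <- eigen(Q, symmetric = TRUE)$vectors[, 1:k]

Z    <- t(U) %*% X %*% H              # encode training data (k x n)
Xhat <- U %*% Z                       # reconstruct training data

xnew <- rnorm(d)                      # encode a new test point
mu   <- rowMeans(X)
znew <- t(U) %*% (xnew - mu)
</pre>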
 
Find <math>U</math> that maximizes <math>Tr(HKHB)</math>, where K is a kernel of <math>U^TX</math> (e.g. <math>K=X^TUU^TX</math>) and B is a kernel of Y (e.g. <math>B=Y^TY</math>):

<math>
\max_U Tr(KHBH)
= \max_U Tr(X^TUU^TXHBH)
= \max_U Tr(U^TXHBHX^TU) </math> since we can cyclically permute matrices inside a trace
 
===Dual Supervised Principal Component Analysis===
 
 
Let <math>Q = XHBHX^T</math>. Since <math>Q</math> and <math>B</math> are both PSD, they can be factored as
 
      <math>Q = \psi\psi^T</math>
      <math>B = \Delta^T\Delta</math>
      <math>\psi = XH\Delta^T</math>
 
The solution for U can be expressed as singular value decomposition (SVD) of <math>\psi</math>:
 
      <math>\psi = U \Sigma V^T</math>
  <math>\rightarrow \psi V = U \Sigma</math>
  <math>\rightarrow \psi V \Sigma^{-1} = U</math>
  <math>\rightarrow U^T XH = \Sigma^{-1} V^T \psi^T XH</math>, since <math>\Sigma^{-1} V^T \psi^T = \Sigma^{-1} V^T V \Sigma U^T = U^T</math>
 
This gives a relationship between V and U. You can substitute it into the algorithm above and define everything in terms of V instead of U. By doing this you do not need to find the eigenvectors of Q, which can be very high-dimensional.
 
 
Algorithm 2  <br />
Recover basis: calculate <math>\psi^T \psi</math> and let V = the eigenvectors of <math>\psi^T \psi</math> corresponding to the top d eigenvalues. Let <math>\Sigma</math> = the diagonal matrix of square roots of the top d eigenvalues. <br />
 
Reconstruct training data:
<math>\hat{X}=UZ=XH\Delta^T V \Sigma^{-2}V^T\Delta H(X^T X)H </math>  <br />
 
Encode test examples: <math>y=U^T(x-\mu)=\Sigma^{-1}V^T \Delta H[X^T(x-\mu)] </math> where y is a d dimensional encoding of x.
 
===Towards a Unified Network===
 
{| class="wikitable"
|-
!
! B
! Constraint
! Component
|-
| PCA
| I
| <math>\omega^T \omega = I</math>
|
|-
| FDA<math>^{(1)}</math>
| <math>B_0</math>
| <math>\omega^T S_\omega \omega = I</math>
| <math>S_\omega = X B_s X^T</math>
|-
| CFML I<math>^{(2)}</math>
| <math>B_0 - B_s</math>
| <math>\omega^T \omega = I</math>
|
|-
| CFML II<math>^{(2)}</math>
| <math>B_0</math>
| <math>\omega^T S_\omega \omega = I</math>
| <math>S_\omega = X B_s X^T</math>
|}
(1)<math>B_s=F(F^{T}F)^{-1}F^T</math>, (2) <math>B_s=\tfrac{1}{n}FF^{T}</math> ,<math>B_D=H-B_s</math>, <math>n</math> # of data points,
<math>F</math> indicator matrix of cluster, <math>H</math> the centering matrix
 
===Dual Supervised PCA===
{| class="wikitable"
|-
!
! B
! Constraint
! Component
|-
| KPCA
| I
| <math>UU^T = I</math>
| Arbitrary
|-
| K-means
| I
| <math>UU^T = I, U\ge 0</math>
| Linear
|}
 
== Boosting (Lecture: Nov. 10, 2011) ==
 
Boosting is a meta-algorithm that starts with a simple classifier and improves it by refitting the data, giving higher weight to misclassified samples.
 
 
Suppose that <math>\mathcal{H}</math> is a collection of classifiers. Assume that
<math>\ y_i \in \{-1, 1\} </math>  and that each <math>\ h(x)\in \{-1, 1\} </math>. Start with <math>\ h_1(x) </math>. Based on how well <math>\ h_1 (x) </math> classifies points, adjust the weights of each input and reclassify. Misclassified points are given higher weight to ensure the classifier "pays more attention" to them, to fit better in the next iteration. The idea behind boosting is to obtain a classification rule from each classifer <math> h_i(x)\in\mathcal{H}</math>, regardless of how well it classifies the data on its own (with the proviso that its performance be better than chance), and combine all of these rules to obtain a final classifier that performs well.
 
[[File:boosting1.jpg]]
 
 
An intuitive way to look at boosting and the concept of weight is to think about extreme weightings. Suppose you are doing classification on a set with some points being misclassified. Suppose that all points that have been classified correctly are removed from the data, so that the next weak classifier is trained only on the remaining (misclassified) points. This resampling is how early versions of boosting worked, instead of re-weighting.
 
=== AdaBoost ===
'''Adaptive Boosting (AdaBoost)''' was formulated by Yoav Freund and Robert Schapire. AdaBoost is defined as an algorithm for constructing a “strong” classifier as linear combination <math>f(\mathbf{x}) = \sum_{t=1}^T \alpha_t h_t(\mathbf{x}) </math> of simple “weak” classifiers <math>\ h_t(\mathbf{x})</math>. It is very popular and widely known as the first algorithm that could adapt to weak learners <ref>http://www.cs.ubbcluj.ro/~csatol/mach_learn/bemutato/BenkKelemen_Boosting.pdf </ref>.
 
It has the following properties:
 
* It is a linear classifier with all its desirable properties
* It has good generalization properties
* It is a feature selector with a principled strategy (minimisation of upper bound on empirical error)
* It is close to sequential decision making
 
====Algorithm Version 1====
The AdaBoost algorithm presented in the lecture is as follows (for more info see [http://www.site.uottawa.ca/~stan/csi5387/boost-tut-ppr.pdf]):
 
1 Set the weights <math>\ w_i=\frac{1}{n},  i = 1,...,n. </math> <br />
 
2 For <math>\ j =1,...,J </math>, do the following steps:
 
:a) Find the classifier <math>\ h_j: \mathbf{x} \rightarrow \{-1,1\} </math> that minimizes the weighted error <math>\ L_j </math>:
 
:<math>\ h_j= arg \underset{h_j\in \mathcal{H}}{\mbox{min}} L_j</math>
 
:where <math>\ L_j = \frac{\sum_{i=1}^{n}w_iI[y_i\ne h_j(x_i)]}{\sum_{i=1}^{n} w_i}</math>
 
:<math>\mathcal{H}</math> is the set of candidate weak classifiers and <math>\ I</math> is
::<math>\, I= \left\{\begin{matrix}
1 &  for \quad  y_i\neq h_j(\mathbf{x}_i) \\
0 &  for \quad y_i = h_j(\mathbf{x}_i)  \end{matrix}\right.</math><br />
 
:b) Let <math>\alpha_j= log(\frac{1-L_j}{L_j})</math>
 
::Note that <math>\ \alpha</math> indicates the "goodness" of the classifier, where a larger <math>\ \alpha</math> value indicates a better classifier. Also, <math>\ \alpha</math> is always 0 or positive as long as the classification accuracy is 0.5 or higher. For example, if working with coin flips, then <math>\ L_j=0.5 </math> and <math>\ \alpha=0</math>.
 
:c) Update the weights:
 
::<math>\ w_i \leftarrow w_i e^{\alpha_j I[y_i\ne h_j(\mathbf{x}_i)]}</math>
::Note that only the weights of misclassified points are increased (provided <math>\ \alpha_j > 0</math>, i.e. the classifier is better than chance), so the next iteration pays more attention to them.<br />
 
3 The final classifier is: <math>\ h(\mathbf{x}) =  sign (\sum_{j=1}^{J}\alpha_j h_j(\mathbf{x}))</math>.
 
:Note that this is basically an aggregation of all the classifiers found and the classification outcomes of better classifiers are weighted more using <math>\ \alpha</math>.
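The following is a compact R sketch of Algorithm Version 1 above, using one-dimensional decision stumps as the weak classifiers in <math>\mathcal{H}</math>. The brute-force stump search, the small epsilon guard on the error, and the toy data at the end are illustrative choices, not part of the lecture.

<pre style = "align:left; width:100%; padding: 2% 2%">
adaboost.stumps <- function(X, y, J = 20) {
  # X: n x p numeric matrix, y: labels in {-1, +1}
  n <- nrow(X); w <- rep(1 / n, n)
  stumps <- vector("list", J); alpha <- numeric(J); eps <- 1e-10
  for (j in 1:J) {
    best <- list(err = Inf)
    # (a) weak learner: search features, thresholds and polarities for the lowest weighted error
    for (p in 1:ncol(X)) for (t in unique(X[, p])) for (s in c(-1, 1)) {
      pred <- ifelse(s * (X[, p] - t) > 0, 1, -1)
      err  <- sum(w * (pred != y)) / sum(w)          # weighted error L_j
      if (err < best$err) best <- list(p = p, t = t, s = s, err = err, pred = pred)
    }
    # (b) classifier weight alpha_j = log((1 - L_j) / L_j)
    alpha[j] <- log((1 - best$err + eps) / (best$err + eps))
    # (c) increase the weights of the misclassified points
    w <- w * exp(alpha[j] * (best$pred != y))
    stumps[[j]] <- best
  }
  list(stumps = stumps, alpha = alpha)
}

adaboost.predict <- function(model, X) {
  # final classifier: sign of the alpha-weighted vote of the stumps
  f <- rowSums(sapply(seq_along(model$alpha), function(j) {
    s <- model$stumps[[j]]
    model$alpha[j] * ifelse(s$s * (X[, s$p] - s$t) > 0, 1, -1)
  }))
  sign(f)
}

# toy usage on two Gaussian clouds
set.seed(6)
X <- rbind(matrix(rnorm(100, mean = 0), 50, 2), matrix(rnorm(100, mean = 2), 50, 2))
y <- rep(c(-1, 1), each = 50)
model <- adaboost.stumps(X, y, J = 10)
mean(adaboost.predict(model, X) == y)    # training accuracy
</pre>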
 
====Algorithm Version 2 <ref>http://www.cs.ubbcluj.ro/~csatol/mach_learn/bemutato/BenkKelemen_Boosting.pdf</ref>====
One of the main ideas of this algorithm is to maintain a distribution or set of weights over the training set. Initially, all weights are set equally, but on each round, the weights of incorrectly classified examples are increased so that the weak learner is forced to focus on the hard examples in the training set.
 
* Given <math>\left(\mathbf{x}_1,y_1\right),\dots,\left(\mathbf{x}_m,y_m\right)</math> where <math>{\mathbf{x}_i \in X}</math>, <math>{y_i \in \{-1,+1\}}</math>.
* Initialize weights <math>D_1(i) = \frac{1}{m}</math>
* Iterate <math>t=1,\dots, T</math>
** Train weak learner using distribution <math>\ D_t</math>
** Get weak classifier: <math>h_t:X\rightarrow R</math>
** Choose <math>{\alpha_t \in R}</math>
** Update the weights: <math>D_{t+1}(i) = \frac {D_i e^{-\alpha_t y_i h_t(\mathbf{x}_i)}} {Z_t}</math>
:: where <math>\ Z_t</math> is a normalization factor (chosen so that <math>\ D_{t+1}</math> will be a distribution)
* The final classifier is:
:: <math>H(\mathbf{x})=\mbox{sign}\left(\sum_{t=1}^T \alpha_t h_t(\mathbf{x})\right)</math>
 
====Example====
 
In R, we can run boosting on a simple classification problem. Suppose we are working with the built-in R dataset "iris". These data consist of petal length, sepal length, petal width, and sepal width of three different species of iris. The following is an adaptive boosting algorithm (the ada package) as applied to these data.
<pre style = "align:left; width:100%; padding: 2% 2%">
> library(rpart) #provides rpart.control(), used by ada below
> library(ada)   #provides the ada() boosting function
> crop1 <- iris[1:100,1] #the function "ada" will only handle two classes
> crop2 <- iris[1:100,2] #and the iris dataset has 3. So crop the third off.
> crop3 <- iris[1:100,3]
> crop4 <- iris[1:100,4]
> crop5 <- iris[1:100,5] #This is the response variable, indicating species of iris
> x <- cbind(crop1, crop2, crop3, crop4, crop5) #combine all the columns
> fr1 <- as.data.frame(x, row.names=NULL) #and coerce into a data frame
>
> a = 2 #number of iterations
> AdaBoostDiscrete <- ada(crop5~., data=fr1, iter=a, loss="e", type = "discrete", control = rpart.control())
> AdaBoostDiscrete
Call:
ada(crop5 ~ ., data = fr1, iter = a, loss = "e", type = "discrete",
    control = rpart.control())
 
Loss: exponential Method: discrete  Iteration: 2
 
Final Confusion Matrix for Data:
          Final Prediction
True value  1  2
        1 50  0
        2  0 50
 
Train Error: 0
 
Out-Of-Bag Error:  0  iteration= 1
 
Additional Estimates of number of iterations:
 
train.err1 train.kap1
        1          1
 
> #Since this yields "perfect" results, we may not need boosting here after all.
> #This was just an illustration of the ada function in R.
</pre>
 
====Advantages and Disadvantages====
The advantages and disadvantages of AdaBoost are listed below.
 
Advantages :
* Very simple to implement
* Fairly good generalization
* The prior error need not be known ahead of time
 
Disadvantages:
* Suboptimal solution
* Can overfit in the presence of noise
 
===Other boosters===
There are many other more recent boosters such as LPBoost, TotalBoost, BrownBoost, MadaBoost, LogitBoost, stochastic boosting, etc. The main difference between many of them is the way they weigh the points in the training data set at each iteration. Some of these boosters, such as AdaBoost, MadaBoost and LogitBoost, can be interpreted as performing gradient descent to minimize a convex cost function (they fit into the AnyBoost framework). However, a recent research study showed that this class of boosters is vulnerable to random classification noise, thereby questioning their applicability to real-world noisy classification problems. <ref>Philip M. Long, Rocco A. Servedio, "Random Classification Noise Defeats All Convex Potential Boosters", 2000</ref>
 
=== Relation to SVM ===
SVM and Boosting are very similar except for the way they measure the margin and the way they optimize their weight vector. SVMs use the <math>l_2</math> norm for both the instance vector and the weight vector, while boosting uses the <math>l_1</math> norm for the weight vector; i.e. SVMs need the <math>l_2</math> norm to implicitly compute scalar products in feature space with the help of the kernel trick, since no other norm can be expressed in terms of scalar products.
 
Although SVM and AdaBoost share some similarities, there are several important differences:
* Different norms can result in very different margins: In boosting and in SVM the dimension is usually very high, which can make the difference between the <math>l_1</math>-norm margin and the <math>l_2</math>-norm margin significant.
 
e.g. suppose the weak hypotheses all have range {-1,1} and that the label y on all examples can be computed by a majority vote of k of the weak hypotheses. In this case, it can be shown that if the number of relevant weak hypotheses is a small fraction of the total number of weak hypotheses, then the margin associated with AdaBoost will be much larger than the one associated with support vector machines.
 
* The computation requirements are different: SVM corresponds to quadratic programming, while AdaBoost corresponds only to linear programming.
 
* A different approach is used to search efficiently in high-dimensional space: SVM deals with the overfitting problem through kernels, which allow algorithms to perform low-dimensional calculations that are mathematically equivalent to inner products in a high-dimensional “virtual” space, while boosting often employs a greedy search method.<ref>http://www.iuma.ulpgc.es/camellia/components/com_docman/dl2.php?archive=0&file=c3ZtX2FuZF9ib29zdGluZ19vbmUucGRm</ref>
 
== Bagging ==
 
[[File: Bagging.jpg|250px|thumb|When bagging, we split up the data, train separate classifiers and then recreate a final classifier]]
 
'''Bagging (Bootstrap aggregating)''' was proposed by Leo Breiman in 1994. Bagging is another meta-algorithm for improving classification results by combining the classification of randomly generated training sets. [http://www.wikicoursenote.com/wiki/Stat841f10.htm#Bagging][http://en.wikipedia.org/wiki/Bootstrap_aggregating]
 
 
 
The idea behind bagging is very similar to that behind boosting. However, instead of using multiple classifiers on essentially the same dataset (but with adaptive weights), we sample from the original dataset containing m items B times with replacement, obtaining B samples each with m items. This is called bootstrapping. Then, we train the classifier on each of the bootstrapped samples. Taking a majority vote of a combination of all the classifiers, we arrive at a final classifier for the original dataset. [http://www.cs.princeton.edu/courses/archive/spr07/cos424/assignments/boostbag/index.html]
 
Bagging is a computationally intensive procedure that can improve unstable classifiers. It is most useful for highly nonlinear classifiers, such as trees.
 
As noted above, the idea of boosting is to incorporate unequal weights when learning <math>\ h</math>, giving higher weight to misclassified points. Bagging, in contrast, is a method for reducing the variability of a classifier. The idea is to train classifiers <math>\ h_{1}(x)</math> to <math>\ h_{B}(x)</math> using B bootstrap samples from the data set.  The final classification is obtained using an average or 'plurality vote' of the B classifiers as follows (a short R sketch is given after the formula):
 
 
:<math>\, h(x)= \left\{\begin{matrix}
1 &  \frac{1}{B} \sum_{i=1}^{B} h_{b}(x) \geq \frac{1}{2}  \\
0 &  \mathrm{otherwise}  \end{matrix}\right.</math>
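Here is a short R sketch (not from the lecture) of the procedure just described: draw B bootstrap samples, fit a tree to each with the rpart package, and combine the B classifiers by majority vote. The value of B, the use of rpart, and the two-class subset of the iris data are arbitrary illustrative choices.

<pre style = "align:left; width:100%; padding: 2% 2%">
library(rpart)   # classification trees as the base learner

set.seed(7)
train <- iris[1:100, ]                  # two classes: setosa and versicolor
train$Species <- droplevels(train$Species)

B <- 25                                  # number of bootstrap samples
trees <- lapply(1:B, function(b) {
  boot <- train[sample(nrow(train), replace = TRUE), ]   # bootstrap sample of the data
  rpart(Species ~ ., data = boot, method = "class")      # fit a tree to the bootstrap sample
})

# majority ("plurality") vote of the B trees
votes  <- sapply(trees, function(fit) as.character(predict(fit, train, type = "class")))
bagged <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(bagged == as.character(train$Species))              # training accuracy of the bagged classifier
</pre>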
 
=== Boosting vs. Bagging ===
 
• Boosting can still help with stable models, but bagging may not work for stable models.
 
• bagging is easier to parallelize and more helpful in practice.
 
• Many classifiers, such as trees, already have underlying functions that estimate the class probabilities at x. An alternative strategy is to average these class probabilities instead of the final classifiers. This approach can produce bagged estimates with lower variance and usually better performance.
 
• Bagging doesn’t work so well with stable models. Boosting might still help.
 
• Boosting might hurt performance on noisy datasets. Bagging doesn’t have this problem.
 
• In practice bagging almost always helps.
 
• On average, boosting usually helps more than bagging, but it is also more common for boosting to hurt performance.
 
• In boosting, the weights grow exponentially.
 
• Bagging is easier to parallelize.
 
== Decision Trees ==
 
 
[[File: simple_decision_tree.jpg|right|frame|A basic example of a decision tree, iteratively ask questions to navigate the tree until we reach a decision node.]]
 
'''Decision tree learning''' is a method commonly used in statistics, data mining and machine learning. The goal is to create a model that predicts the value of a target variable based on several input variables. It is a very flexible classifier, can classify non-linear data and it can be used for classification, regression, or both. A tree is usually used as a visual and analytical decision support tool, where the expected values of competing alternatives are calculated.
 
 
It uses the principle of divide and conquer for classification. Trees have traditionally been created manually. Trees map features of a decision problem onto a conclusion, or label. We fit a tree model by minimizing some measure of impurity. For a single covariate <math>\ X_1 </math> we choose a point t on the real line that splits it into two sets <math>\ R_1 = (-\infty, t] , R_2 = ( t, \infty) </math> in a way that minimizes impurity.<br />
 
[[File: p.jpg|right|frame|Node impurity for two-class classification, as a function of the proportion p in class 2. Cross-entropy has been scaled to pass through (0.5,0.5).]]
 
Let <math>\hat{p_s}(j) </math> be the proportion of observations in <math>\boldsymbol R_s </math>  such that <math>\ Y_i = j</math> <br />
 
<math>\hat{p}_s(j) = \frac {\sum_{i=1}^n I(Y_i = j, X_i \in \boldsymbol R_s)}{\sum_{i=1}^n I(X_i \in \boldsymbol R_s)}</math><br />
 
 
Node impurity measures (see figure to the right):
 
:Misclassification error: <math>\ 1 - \max_j \hat{p}_s(j) </math><br />
:Gini index: <math>\sum_{i \neq j} \hat{p}_s(i)\hat{p}_s(j)</math><br />
:Cross-entropy: <math>\ -\sum_{j} \hat{p}_s(j) \log \hat{p}_s(j)</math>
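For a quick numerical feel for these measures, the small R helper below (not from the lecture) computes the misclassification error, Gini index and cross-entropy of a node from its vector of class proportions; the proportions in the two example calls are made up.

<pre style = "align:left; width:100%; padding: 2% 2%">
node.impurity <- function(p) {
  # p: vector of class proportions in a node (entries sum to 1)
  c(misclassification = 1 - max(p),
    gini              = sum(p * (1 - p)),      # equals the sum over i != j of p_i p_j
    cross.entropy     = -sum(ifelse(p > 0, p * log(p), 0)))
}

node.impurity(c(0.5, 0.5))   # the most impure two-class node
node.impurity(c(0.9, 0.1))   # a nearly pure node
</pre>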
 
'''Limitations of Decision Trees'''
 
1. Overfitting problem:
Decision Trees are extremely flexible models; this flexibility means that they can easily match any training set perfectly. This makes overfitting a prime consideration when training a decision tree. There is no robust way to avoid fitting noise in the data, but two common approaches include:
 
* do not grow the full tree; stop splitting before the fit to the training set reaches perfection
* fully grow the tree and then prune the resulting tree.  Pruning algorithms include cost complexity pruning, minimum description length pruning and pessimistic pruning. This results in a tree with less branches, which can generalize better.  <ref>J. R. Quinlan, Decision Trees and Decision Making, IEEE Transactions on Systems, Man and Cybernetics,  vol 20, no 2, March/April 1990, pg 339-346.</ref>
 
 
2. Time-consuming and complex:
Compared to other decision-making models, a decision tree is a relatively easy tool to use; however, if the tree contains a large number of branches, it becomes complex and takes time to solve the problem.
Moreover, decision trees only examine a single field at a time, which leads to rectangular classification boxes, and the complexity adds training costs for the people who must have the extensive knowledge required to complete the decision tree analysis. <ref>
http://www.brighthub.com/office/project-management/articles/106005.aspx
</ref>
 
 
Some specific decision-tree algorithms:
* ID3 algorithm [http://en.wikipedia.org/wiki/ID3_algorithm]
* C4.5 algorithm [http://en.wikipedia.org/wiki/C4.5_algorithm]
* C5 algorithm
 
A comparison of bagging and boosting methods using the decision trees classifiers: [http://www.doiserbia.nb.rs/img/doi/1820-0214/2006/1820-02140602057M.pdf]
 
=== CART (Classification and Regression Tree)===
 
The '''Classification and Regression Tree (CART)''' is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively. (Wikipedia) CART handles outliers well: it isolates them in a separate node.
 
Advantages<ref>http://www.statsoft.com/textbook/classification-and-regression-trees/</ref>:
* '''Simplicity of results'''. In most cases the results are summarized in a very simple tree. This is important for fast classification and for creating a simple model for explaining the observations.
* '''Tree methods are nonparametric and nonlinear'''. There is no implicit assumption that the underlying relationships between the predictor variables and the dependent variable are linear or monotonic. Thus tree methods are well suited to data mining tasks where there is little a priori knowledge of any related variables.
 
===Advantages and Disadvantages===
 
Decision Tree Advantages
 
1. Easy to understand
 
2. Map nicely to a set of business rules
 
3. Applied to real problems
 
4. Make no prior assumptions about the data
 
5. Able to process both numerical and categorical data
 
Decision Tree Disadvantages
 
1. Output attribute must be categorical
 
2. Limited to one output attribute
 
3. Decision tree algorithms are unstable
 
4. Trees created from numeric datasets can be complex
 
Read more: http://wiki.answers.com/Q/List_the_advantages_and_disadvantages_for_both_decision_table_and_decision_tree#ixzz1dNGFaOpi
 
===Ranking Features===
In the implementation of a tree model it is important how the features are ranked (i.e. in what order the features appear in the tree). The general approach is to choose the feature with the highest dependence on Y as the first feature in the tree, and to use features with lower dependence further down the tree.
 
'''Feature ranking strategies'''
 
1. Fisher score (F-score)
* simple in nature
* efficient in measuring the discrimination between a feature and the label.
* independent of the classifier.
 
2. Linear SVM Weight
 
The following is an algorithm based on linear SVM weights:
 
* input the training sets: <math>(x_i, y_i), i = 1, \dots l</math> 
* obtain the sorted feature ranking list as output:
** Using grid search to find the best parameter C.
** Training an L2-loss linear SVM model using the best C found.
** Then features can be sorted according to the absolute values of weights.
 
3. Change of AUC with/without Removing Each Feature
 
4. Change of Accuracy with/without Removing Each Feature
 
5. Normalized [http://en.wikipedia.org/wiki/Information_gain Information Gain] (difference in entropy)
 
note: for details, please read <ref>
http://jmlr.csail.mit.edu/proceedings/papers/v3/chang08a/chang08a.pdf
</ref>
 
===Random Forest===
Decision trees are unstable. An application of bagging is to combine trees into a random forest. A random forest is a classifier consisting of a collection of tree-structured classifiers  <math>\left \lbrace \ h(x, \Theta_k ), k = 1, . . . \right \rbrace</math> where the <math>{\Theta_k } </math> are independent identically distributed random vectors and each tree casts a unit vote for the most popular class at input <math>x</math> <ref>Breiman L., Random Forests. ''Machine Learning'' [http://www.springerlink.com/content/u0p06167n6173512/fulltext.pdf]</ref>.
 
In a random forest, the trees are grown quite similarly to the standard classification tree. However, no pruning is done in the random forest technique.
 
Compared with other methods, random forests have some positive characteristics:
 
* runs faster than bagging or boosting
* has similar accuracy as Adaboost, and sometimes even better than Adaboost
* relatively robust to noise
* delivers useful estimates of error, correlation
 
For larger data sets, more accuracy can be obtained by combining random features with boosting.
 
'''This is how a single tree is grown:'''
 
First, suppose the number of elements in the training set is K. We then sample K elements with replacement.
Second, if there are a total of N inputs to the tree, choose an integer n << N such that at each node of the tree, n variables are randomly selected from the N; the best split on these n variables is used to allow the node to make a decision (hence a "decision tree").
Third, grow the tree as large as possible.
 
Each tree contributes one classification. That is, each tree gets one "vote" to classify an element. The beauty of random forest is that all of these votes are added up, similar to boosting, and the final decision is the result of the vote. This is an extremely robust algorithm.
 
There are two things that can contribute to error in random forest:
 
1. correlation between trees
2. the classification strength of each individual tree (weak trees increase the error).
 
This is seen intuitively, since if many trees are very similar to one another, then it is likely they will all classify the elements in the same way. If a single tree is not a very good classifier, it does not matter in the long run because the other trees will compensate for its error. However, if many trees are bad classifiers, the result will be garbage.
 
To avoid both of the above problems, there is an algorithm to optimize n, the number of variables to use in each decision tree. Unfortunately, an optimal value is not found on its own; instead, an optimal range is found. Thus, to properly program a random forest, there is a parameter that must be "tuned". Looking at various types of error rate, this is easily found (we want to minimize error, as characterized by the Gini index, or the misclassification rate, or the entropy). [http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#intro]
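One convenient way to experiment with this tuning is the randomForest package in R. The sketch below (an illustration, not part of the lecture) compares the out-of-bag error for a few values of mtry, the number of variables tried at each split (the n discussed above), on the iris data; the grid of values and the number of trees are arbitrary.

<pre style = "align:left; width:100%; padding: 2% 2%">
library(randomForest)

set.seed(9)
oob <- sapply(1:4, function(m) {
  fit <- randomForest(Species ~ ., data = iris, ntree = 300, mtry = m)
  fit$err.rate[fit$ntree, "OOB"]       # out-of-bag error after the last tree
})
names(oob) <- paste0("mtry=", 1:4)
oob                                    # pick the mtry range with the lowest OOB error
</pre>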
 
An algorithm for the random forest can be described as follows: let <math>N_{trees}</math> be the number of trees to build. For each of the <math>N_{trees}</math> iterations, select a new bootstrap sample from the training set and grow an un-pruned tree on this bootstrap sample; at each internal node, randomly select m predictors and determine the best split using only these predictors. Finally, do not perform cost-complexity pruning; save the tree as is, alongside those built thus far. <ref>
Albert A. Montillo,Guest lecture: Statistical Foundations of Data Analysis "Random Forests", April,2009. <http://www.dabi.temple.edu/~hbling/8590.002/Montillo_RandomForests_4-2-2009.pdf>
</ref>
 
===Further Reading===
 
Boosting: <ref>Chunhua Shen; Zhihui Hao. “A direct formulation for totally-corrective multi-class boosting”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2011.</ref>
 
Bagging: <ref>Xiaoyuan Su; Khoshgoftarr, T.M.; Xingquan Zhu. “VoB predictors: Voting on bagging classifications”. 19th IEEE International Conference on Pattern Recognition. 2008.</ref>
 
Decision Tree: <ref> Zhuowen Tu. “Probabilistic boosting-tree: learning discriminative models for classification, recognition, and clustering”. Tenth IEEE International Conference on Computer Vision. 2005.</ref>
 
== Graphical Models ==
 
A graphical model is a probabilistic model for which a graph denotes the conditional independence structure between random variables. They are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning.(Wikipedia)
 
Graphical models provide a compact representation of the joint distribution, where the vertices (nodes) V represent random variables and the edges E represent dependencies between the variables. There are two forms of graphical models (directed and undirected). Directed graphical models consist of arcs and nodes, where an arc indicates that the parent is an explanatory variable for the child. Undirected graphical models are based on the assumption that two nodes (or two sets of nodes) are conditionally independent given their neighbours[http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html 1].
 
Similar types of analysis predate the area of Probabilistic Graphical Models and its terminology. Bayesian Network and Belief Network are earlier terms used to describe a directed acyclic graphical model. Similarly, Markov Random Field (MRF) and Markov Network are earlier terms used to describe an undirected graphical model. Probabilistic Graphical Models have unified some of the theory from these older frameworks and allow for more general distributions than were possible with the previous methods.
 
[[File:directed.png|thumb|right|Fig.1 A directed graph.]]
[[File:undirected.png|thumb|right|Fig.2 An undirected graph.]]
 
In the case of directed graphs, the direction of the arrow indicates "causation". This assumption makes these networks useful for the cases that we want to model causality. So these models are more useful for applications such as computational biology and bioinformatics, where we study effect (cause) of some variables on another variable. For example:
<br />
<math>A \longrightarrow B</math>:  <math>A\,\!</math> "causes" <math>B\,\!</math>.
 
 
 
{| class="wikitable"
|-
! Y
! Y
|-
| <math>\downarrow</math>
| <math>\uparrow</math>
|-
| Generative LDA
| Linear Discrimination
|}
 
Probabilistic ''Discriminative'' Models: Model the posterior probability P(Y|X) directly (example: logistic regression).
 
Advantages of discriminative models
* Obtain desired posterior probability directly
* Less parameters
 
''Generative'' Model: Compute posterior probabilities using Bayes' rule, from class-conditional densities and class priors. <ref>http://www.google.ca/imgres?q=generative+vs+discriminative+model&hl=en&client=firefox-a&hs=9tQ&sa=X&rls=org.mozilla:en-GB:official&biw=1454&bih=840&tbm=isch&prmd=imvns&tbnid=GZd3ZvkGOWmvnM:&imgrefurl=https://liqiangguo.wordpress.com/2011/05/26/discriminative-model-vs-generative-model/&docid=9D6p6EAceYNlSM&imgurl=http://liqiangguo.files.wordpress.com/2011/05/d_g1.jpg&w=938&h=336&ei=4pjBTrmjOqHc0QG-u_WCAw&zoom=1&iact=hc&vpx=369&vpy=193&dur=203&hovh=72&hovw=202&tx=162&ty=89&sig=116704843266645309182&page=1&tbnh=72&tbnw=202&start=0&ndsp=25&ved=1t:429,r:1,s:0</ref>
 
Advantages of generative models:
*Can generate new points
*Can sample a new point
 
For an introduction to graphical models, see: [http://www.cs.ubc.ca/~murphyk/Papers/intro_gm.pdf]
 
=Boltzmann Machines=
 
==Introduction==
 
[[Image:GBMRBM.jpg|thumb|200px|right|Reference: [2]]]
 
Boltzmann machines are networks of connected nodes which, using a stochastic decision-making process, decide to be on or off. These connections need not be directed; that is, influence can go back and forth between layers. This type of formulation leads the reader to think immediately of a Bernoulli distribution, with some probability p of each node being on or off. In a classification problem, a Boltzmann Machine is presented with a set of binary vectors, each entry of the vector being called a “unit”, with the goal of learning to generate these vectors. [1]
 
Similar to the neural networks already discussed in class, a Boltzmann Machine must assign weights to inputs, compute some combination of the weights times contributing node values, and optimize the weights such that a certain cost function (such as the relative entropy, as discussed later) is minimized. The cost function depends on the complexity of the model and the “correctness” of the classification. The main idea is to make small updates in the connection weights iteratively.
 
Boltzmann Machines are often used in generative models. That is, we start with some process seen in real life and try to reproduce it, with a goal of predicting future behaviour of the system by generating from the probability distribution created by the Boltzmann Machine.
 
==How a Boltzmann Machine Works==
 
Suppose we start with a pattern <math>\ \gamma</math> that represents some real-life dynamical system. The true probability distribution function of this system is <math>\ f_\gamma</math>. For each element in the vectors associated with this system, we create a visible unit in the Boltzmann Machine whose value is directly related to the value of that element. Then, usually, to capture higher-order regularities in the pattern, we create hidden units (similar to Feed-Forward Neural Networks).  Sometimes researchers choose not to use hidden units, but this leads to a lack of ability to learn high-order regularity [5]. There are two possible values for each node in the Boltzmann Machine: “on” or “off”. There is a difference in energy between these states. Each node must then compute the difference in energy to see which state would be more favourable. This difference is called the “energy gap”.
 
Each node of the Boltzmann Machine is presented an opportunity to update its status. When a set of input vectors is shown to the layer, a computation takes place within each node to decide to convert to “on” or to remain “off”. The computation is as follows:
 
<math> \Delta E_i = E_{-1} - E_{+1} = \sum_j w_{ij}S_j  </math>
 
Where <math> w_{ij} </math> represents the weight between nodes i and j, and  <math> S_j </math> is the state of the jth component. 
 
Then the probability that the node will adopt the “on” state is:
 
<math> P(+1) = \frac{1}{1 + \exp(-\Delta E_i / T)} </math>
 
Where T is the temperature of the system. The probability of any vector <math>\ v</math> being an output of the system is its Boltzmann factor normalized over all possible state vectors <math>\ u</math>:

<math> P(v) = \frac{e^{-E(v)}}{\sum_u e^{-E(u)}} </math>
 
And the energy of a vector is defined as:
 
<math> E({v}) = -\sum_i s^{v}_i b_i -\sum_{i<j} s^{v}_i s^{v}_j w_{ij} </math> [1]
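To make the update rule concrete, here is a minimal R sketch (not from the source references) of repeated stochastic updates for a small, fully connected network of binary units with states in {-1, +1}; the random symmetric weight matrix, the temperature, and the number of sweeps are all arbitrary illustrative choices.

<pre style = "align:left; width:100%; padding: 2% 2%">
set.seed(10)
n <- 6
W <- matrix(rnorm(n * n, sd = 0.5), n, n)
W <- (W + t(W)) / 2; diag(W) <- 0          # symmetric weights, no self-connections
s <- sample(c(-1, 1), n, replace = TRUE)   # initial states of the units
Temp <- 1                                  # temperature

for (sweep in 1:1000) {
  for (i in 1:n) {
    dE   <- sum(W[i, ] * s)                # energy gap for unit i
    p.on <- 1 / (1 + exp(-dE / Temp))      # probability of adopting the "on" (+1) state
    s[i] <- ifelse(runif(1) < p.on, 1, -1) # stochastic update of unit i
  }
}
s   # states after many sweeps; averages of such samples estimate the equilibrium distribution
</pre>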
 
Simulated annealing, a method to improve the search for a global minimum, is being used here. It may not succeed in finding the global minimum on its own [3]. This may be a foreign concept to statisticians; for more information, consult [6] and [7]. Each unit's state is updated stochastically according to the logistic probability above, so updates tend to favour the state that decreases the energy.
 
Eventually, through learning, the Boltzmann Machine will reach an equilibrium state, much like a Markov Chain. This equilibrium state will have a low temperature. Once equilibrium has been reached, we can estimate the probability distribution across the nodes of the Boltzmann Machine. Using this information, we can model how the dynamical system will behave in the long run.
 
Since the system is in equilibrium, we can use the mean value of each visible unit to build a probability model. We wouldn’t want to do these calculations before reaching equilibrium, because they would not be representative of the long-term behaviour of the system.  Let this measured distribution be denoted <math>\ f_\delta</math>.  Then we are interested in measuring the difference between the true distribution and this measured distribution.
 
There are several different methods that can be used to compare distributions. One that is commonly used is the relative entropy:
 
<math> G(f_\gamma \,\|\, f_\delta) = \sum_{v} f_\gamma(v) \ln\left(\frac{f_\gamma(v)}{f_\delta(v)}\right) </math> [5]
 
We want to minimize this distance, since we want the measured distribution to be as close as possible to the true distribution.
 
==Learning in Boltzmann Machines==
 
The Two-Phase Method
 
Boltzmann Machines using hidden units are very robust tools. Visible units are coupled, leading to a problem when trying to capture the effects of higher-dimensional regularities. When hidden units are introduced, the system has the ability to define and use these regularities.
 
One approach to learning Boltzmann Machines is discussed thoroughly in [5]. To summarize, this approach makes use of two phases.


Phase 1: Fix all visible units. Allow the hidden units to change as necessary to obtain equilibrium. Then, look at pairs of units. If two elements of a pair are both “on”, then increment the weight associated with them. So this phase consists entirely of “learning”. There is no control for spurious data.


Phase 2: No units are fixed. Allow all units to change as necessary to obtain equilibrium. Then sample the final equilibrium distribution to find reliable averages of the term <math>s_i s_j</math>. Then as before, look for pairs of units that are both “on”, and decrement the weight associated with them.  So this is the phase in which spurious data are eliminated.


Alternate between these two phases. Eventually, the equilibrium distribution will be reached and we see that <math> \frac{\partial {G}}{\partial {w_{ij}}} = \frac{-1}{T} (<s_i s_j>^{+} - <s_i s_j>^{-}) </math> where <math> <s_i s_j>^{+} </math> and <math> <s_i s_j>^{-} </math> are the probabilities of finding units i and j both “on” when the network is ‘fixed’ (clamped) and ‘free-running’, respectively [5].
Another method, for learning Deep Boltzmann Machines, is presented in [2].
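A sketch of the resulting weight update, assuming the clamped-phase and free-running-phase pairwise co-activity averages (the quantities compared in the derivative of G above) have already been estimated by sampling at equilibrium; the random stand-ins and the learning rate are assumptions for illustration:

 % gradient-style weight update combining the two phases (toy values)
 n = 5;
 W = zeros(n);                                                              % initial weights
 corr_clamped = rand(n); corr_clamped = (corr_clamped + corr_clamped')/2;   % stand-in for the clamped-phase averages
 corr_free    = rand(n); corr_free    = (corr_free + corr_free')/2;         % stand-in for the free-running averages
 eta = 0.01;                                                                % assumed learning rate
 W = W + eta*(corr_clamped - corr_free);   % increment where the clamped phase sees co-activity,
                                           % decrement where only the free phase does
 W(1:n+1:end) = 0;                         % keep no self-connections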


==Pros and Cons of using Boltzmann Machines==


Pros


* More accurate than backpropagation [5]
* Bayesian interpretation of how good a model is [5]


Cons


* Very slow, because of nested loops necessary to perform phases [5]


*The time the machine must be run in order to collect equilibrium statistics grows exponentially with the machine's size and with the magnitude of the connection strengths
*Connection strengths are more plastic when the units being connected have activation probabilities intermediate between zero and one, leading to a so-called variance trap. The net effect is that noise causes the connection strengths to random walk until the activities saturate.

There are many topics on which this discussion could be expanded. For example, we could go into a more in-depth discussion of simulated annealing, look at Restricted Boltzmann Machines (RBMs) for deep learning, or compare different methods of learning and different measures of error. Another interesting topic would be the mean-field approximation of Boltzmann Machines, which is reported to run faster.
References: <br/>
[1] http://www.scholarpedia.org/article/Boltzmann_machine <br/>
[2] http://www.mit.edu/~rsalakhu/papers/dbm.pdf <br/>
[3] http://mathworld.wolfram.com/SimulatedAnnealing.html <br/>
[4] http://waldron.stanford.edu/~jlm/papers/PDP/Volume%201/Chap7_PDP86.pdf <br/>
[5] http://cs.nyu.edu/~roweis/notes/boltz.pdf <br/>
[6] http://neuron.eng.wayne.edu/tarek/MITbook/chap8/8_3.html <br/>
[7] Bertsimas and Tsitsiklis. Simulated Annealing. Statistical Science. 1993. Vol. 8, No. 1, 10 – 15. <br/>


==References==
<references />



STAT 441/841 / CM 463/763 - Tuesday, 2011/09/20

Wiki Course Notes

Students will need to contribute to the wiki for 20% of their grade. Access via wikicoursenote.com. Go to editor sign-up, and use your UW userid for your account name and your UW email.

primary (10%) Post a draft of lecture notes within 48 hours. You will need to do this 1 or 2 times, depending on class size.

secondary (10%) Make improvements to the notes for at least 60% of the lectures. More than half of your contributions should be technical rather than editorial. There will be a spreadsheet where students can indicate what they've done and when. The instructor will conduct random spot checks to ensure that students have contributed what they claim.


Classification (Lecture: Sep. 20, 2011)

Introduction

Machine learning (ML) methodology in general is an artificial intelligence approach to establish and train a model to recognize the pattern or underlying mapping of a system based on a set of training examples consisting of input and output patterns. Unlike in classical statistics where inference is made from small datasets, machine learning involves drawing inference from an overwhelming amount of data that could not be reasonably parsed by manpower.

In machine learning, pattern recognition is the assignment of some sort of output value (or label) to a given input value (or instance), according to some specific algorithm. The approach of using examples to produce the output labels is known as learning methodology. When the underlying function from inputs to outputs exists, it is referred to as the target function. The estimate of the target function which is learned or output by the learning algorithm is known as the solution of learning problem. In case of classification this function is referred to as the decision function.

In the broadest sense, any method that incorporates information from training samples in the design of a classifier employs learning. Learning tasks can be classified along different dimensions. One important dimension is the distinction between supervised and unsupervised learning. In supervised learning a category label for each pattern in the training set is provided. The trained system will then generalize to new data samples. In unsupervised learning , on the other hand, training data has not been labeled, and the system forms clusters or natural grouping of input patterns based on some sort of measure of similarity and it can then be used to determine the correct output value for new data instances.

The first category is known as pattern classification and the second one as clustering. Pattern classification is the main focus in this course.


Classification problem formulation : Suppose that we are given n observations. Each observation consists of a pair: a vector [math]\displaystyle{ \mathbf{x}_i\subset \mathbb{R}^d, \quad i=1,...,n }[/math], and the associated label [math]\displaystyle{ y_i }[/math]. Where [math]\displaystyle{ \mathbf{x}_i = (x_{i1}, x_{i2}, ... x_{id}) \in \mathcal{X} \subset \mathbb{R}^d }[/math] and [math]\displaystyle{ Y_i }[/math] belongs to some finite set [math]\displaystyle{ \mathcal{Y} }[/math].

The classification task is to find a function [math]\displaystyle{ f:\mathbf{x}_i\mapsto y }[/math] which maps the input data points to a target value (i.e., a class label). The function [math]\displaystyle{ f(\mathbf{x},\theta) }[/math] is defined by a set of parameters [math]\displaystyle{ \mathbf{\theta} }[/math], and the goal is to train the classifier so that, among all possible mappings with different parameters, the obtained decision boundary gives the minimum classification error.

Definitions

The true error rate for classifier [math]\displaystyle{ h }[/math] is the error with respect to the unknown underlying distribution when predicting a discrete random variable Y from a given input X.

[math]\displaystyle{ L(h) = P(h(X) \neq Y ) }[/math]


The empirical error rate is the error of our classification function [math]\displaystyle{ h(x) }[/math] on a given dataset with known outputs (e.g. training data, test data)

[math]\displaystyle{ \hat{L}_n(h) = (1/n) \sum_{i=1}^{n} \mathbf{I}(h(X_i) \neq Y_i) }[/math] where h is a classifier and [math]\displaystyle{ \mathbf{I}() }[/math] is an indicator function. The indicator function is defined by

[math]\displaystyle{ \mathbf{I}(x) = \begin{cases} 1 & \text{if } x \text{ is true} \\ 0 & \text{if } x \text{ is false} \end{cases} }[/math]

So in this case, [math]\displaystyle{ \mathbf{I}(h(X_i)\neq Y_i) = \begin{cases} 1 & \text{if } h(X_i)\neq Y_i \text{ (i.e. misclassification)} \\ 0 & \text{if } h(X_i)=Y_i \text{ (i.e. classified properly)} \end{cases} }[/math]


For example, suppose we have 100 new data points with known (true) labels

[math]\displaystyle{ X_1 ... X_{100} }[/math] [math]\displaystyle{ y_1 ... y_{100} }[/math]

To calculate the empirical error, we count how many times our function [math]\displaystyle{ h(X) }[/math] classifies incorrectly (does not match [math]\displaystyle{ y }[/math]) and divide by n=100.
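A minimal sketch of this calculation (the label vectors below are assumed for illustration):

 % empirical error rate: fraction of points where the prediction differs from the true label
 y_true = [1 0 1 1 0 1 0 0 1 0];        % assumed true labels y_i
 y_pred = [1 0 0 1 0 1 1 0 1 0];        % assumed predictions h(X_i)
 L_hat  = mean(y_pred ~= y_true);       % average of the indicator function, here 2/10 = 0.2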

Bayes Classifier

The principle of the Bayes Classifier is to calculate the posterior probability of a given object from its prior probability via Bayes' Rule, and then assign the object to the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841#Bayes_Classifier </ref>.

First recall Bayes' Rule, in the format [math]\displaystyle{ P(Y|X) = \frac{P(X|Y) P(Y)} {P(X)} }[/math]

P(Y|X)  : posterior , probability of [math]\displaystyle{ Y }[/math] given [math]\displaystyle{ X }[/math]

P(X|Y)  : likelihood, probability of [math]\displaystyle{ X }[/math] being generated by [math]\displaystyle{ Y }[/math]

P(Y)  : prior, probability of [math]\displaystyle{ Y }[/math] being selected

P(X)  : marginal, probability of obtaining [math]\displaystyle{ X }[/math]


We will start with the simplest case: [math]\displaystyle{ \mathcal{Y} = \{0,1\} }[/math]

[math]\displaystyle{ r(x) = P(Y=1|X=x) = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x)} = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1) + P(X=x|Y=0) P(Y=0)} }[/math]

Bayes' rule can be approached by computing either one of the following:

1) The posterior: [math]\displaystyle{ \ P(Y=1|X=x) }[/math] and [math]\displaystyle{ \ P(Y=0|X=x) }[/math]

2) The likelihood: [math]\displaystyle{ \ P(X=x|Y=1) }[/math] and [math]\displaystyle{ \ P(X=x|Y=0) }[/math]


The former reflects a Bayesian approach. The Bayesian approach uses previous beliefs and observed data (e.g., the random variable [math]\displaystyle{ \ X }[/math]) to determine the probability distribution of the parameter of interest (e.g., the random variable [math]\displaystyle{ \ Y }[/math]). The probability, according to Bayesians, is a degree of belief in the parameter of interest taking on a particular value (e.g., [math]\displaystyle{ \ Y=1 }[/math]), given a particular observation (e.g., [math]\displaystyle{ \ X=x }[/math]). Historically, the difficulty in this approach lies with determining the posterior distribution. However, more recent methods such as Markov Chain Monte Carlo (MCMC) allow the Bayesian approach to be implemented <ref name="PCAustin">P. C. Austin, C. D. Naylor, and J. V. Tu, "A comparison of a Bayesian vs. a frequentist method for profiling hospital performance," Journal of Evaluation in Clinical Practice, 2001</ref>.

The latter reflects a Frequentist approach. The Frequentist approach assumes that the probability distribution (including the mean, variance, etc.) is fixed for the parameter of interest (e.g., the variable [math]\displaystyle{ \ Y }[/math], which is not random). The observed data (e.g., the random variable [math]\displaystyle{ \ X }[/math]) is simply a sampling of a far larger population of possible observations. Thus, a certain repeatability or frequency is expected in the observed data. If it were possible to make an infinite number of observations, then the true probability distribution of the parameter of interest can be found. In general, frequentists use a technique called hypothesis testing to compare a null hypothesis (e.g. an assumption that the mean of the probability distribution is [math]\displaystyle{ \ \mu_0 }[/math]) to an alternative hypothesis (e.g. assuming that the mean of the probability distribution is larger than [math]\displaystyle{ \ \mu_0 }[/math]) <ref name="PCAustin"/>. For more information on hypothesis testing see <ref>R. Levy, "Frequency hypothesis testing, and contingency tables" class notes for LING251, Department of Linguistics, University of California, 2007. Available: http://idiom.ucsd.edu/~rlevy/lign251/fall2007/lecture_8.pdf </ref>.

There was some class discussion on which approach should be used. Both the ease of computation and the validity of both approaches were discussed. A main point that was brought up in class is that Frequentists consider X to be a random variable, but they do not consider Y to be a random variable because it has to take on one of the values from a fixed set (in the above case it would be either 0 or 1 and there is only one correct label for a given value X=x). Thus, from a Frequentist's perspective it does not make sense to talk about the probability of Y. This is actually a grey area and sometimes Bayesians and Frequentists use each others' approaches. So using Bayes' rule doesn't necessarily mean you're a Bayesian. Overall, the question remains unresolved.


The Bayes Classifier uses [math]\displaystyle{ \ P(Y=1|X=x) }[/math]

[math]\displaystyle{ P(Y=1|X=x) = \frac{P(X=x|Y=1) P(Y=1)} {P(X=x|Y=1) P(Y=1) + P(X=x|Y=0) P(Y=0)} }[/math]

P(Y=1) : The Prior, probability of Y taking the value chosen

denominator : Equivalent to P(X=x), for all values of Y, normalizes the probability

[math]\displaystyle{ h(x) = \begin{cases} 1 \ \ \hat{r}(x) \gt 1/2 \\ 0 \ \ otherwise \end{cases} }[/math]

The set [math]\displaystyle{ \mathcal{D}(h) = \{ x : P(Y=1|X=x) = P(Y=0|X=x) \} }[/math]

which defines a decision boundary.

[math]\displaystyle{ h^*(x) = \begin{cases} 1 \ \ if \ \ P(Y=1|X=x) \gt P(Y=0|X=x) \\ 0 \ \ \ \ \ \ otherwise \end{cases} }[/math]

Theorem: The Bayes Classifier is optimal, i.e., if [math]\displaystyle{ h }[/math] is any other classification rule, then [math]\displaystyle{ L(h^*) \lt = L(h) }[/math]

Proof: Consider any classifier [math]\displaystyle{ h }[/math]. We can express the error rate as

[math]\displaystyle{ P( \{h(X) \ne Y \} ) = E_{X,Y} [ \mathbf{1}_{\{h(X) \ne Y \}} ] = E_X \left[ E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] \right] }[/math]

To minimize this last expression, it suffices to minimize the inner expectation. Expanding this expectation:

[math]\displaystyle{ E_Y[ \mathbf{1}_{\{h(X) \ne Y \}}| X] = \sum_{y \in Supp(Y)} P( Y = y | X) \mathbf{1}_{\{h(X) \ne y \} } }[/math]

which, in the two-class case, simplifies to

[math]\displaystyle{ = P( Y = 0 | X) \mathbf{1}_{\{h(X) \ne 0 \} } + P( Y = 1 | X) \mathbf{1}_{\{h(X) \ne 1 \} } }[/math]
[math]\displaystyle{ = (1-r(X)) \mathbf{1}_{\{h(X) \ne 0 \} } + r(X)\mathbf{1}_{\{h(X) \ne 1 \} } }[/math]

where [math]\displaystyle{ r(x) }[/math] is defined as above. We should 'choose' h(X) to equal the label that minimizes the sum. Consider if [math]\displaystyle{ r(X)\gt 1/2 }[/math], then [math]\displaystyle{ r(X)\gt 1-r(X) }[/math] so we should let [math]\displaystyle{ h(X) = 1 }[/math] to minimize the sum. Thus the Bayes classifier is the optimal classifier.

Why then do we need other classification methods? Because the densities of X are typically unknown, i.e., [math]\displaystyle{ f_k(x) }[/math] and/or [math]\displaystyle{ \pi_k }[/math] are unknown.

[math]\displaystyle{ P(Y=k|X=x) = \frac{P(X=x|Y=k)P(Y=k)} {P(X=x)} = \frac{f_k(x) \pi_k} {\sum_k f_k(x) \pi_k} }[/math]

[math]\displaystyle{ f_k(x) }[/math] is referred to as the class conditional distribution (~likelihood).

Therefore, we must rely on some data to estimate these quantities.

Three Main Approaches

1. Empirical Risk Minimization: Choose a set of classifiers H (e.g., linear, neural network) and find [math]\displaystyle{ h^* \in H }[/math] that minimizes (some estimate of) the true error, L(h).

2. Regression: Find an estimate ([math]\displaystyle{ \hat{r} }[/math]) of function [math]\displaystyle{ r }[/math] and define [math]\displaystyle{ h(x) = \begin{cases} 1 \ \ \hat{r}(x) \gt 1/2 \\ 0 \ \ otherwise \end{cases} }[/math]

The [math]\displaystyle{ 1/2 }[/math] in the expression above is a threshold set for the regression prediction output.

In general, regression refers to finding a continuous, real-valued y. The problem here is more difficult because the target is restricted to a discrete set of label values.

3. Density Estimation: Estimate [math]\displaystyle{ P(X=x|Y=0) }[/math] from [math]\displaystyle{ X_i }[/math]'s for which [math]\displaystyle{ Y_i = 0 }[/math] Estimate [math]\displaystyle{ P(X=x|Y=1) }[/math] from [math]\displaystyle{ X_i }[/math]'s for which [math]\displaystyle{ Y_i = 1 }[/math] and let [math]\displaystyle{ P(Y=y) = (1/n) \sum_{i=1}^{n} I(Y_i = y) }[/math]

Define [math]\displaystyle{ \hat{r}(x) = \hat{P}(Y=1|X=x) }[/math] and [math]\displaystyle{ h(x) = \begin{cases} 1 \ \ \hat{r}(x) \gt 1/2 \\ 0 \ \ otherwise \end{cases} }[/math]
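A one-dimensional sketch of this third approach, assuming the class-conditional densities are estimated by fitting a Gaussian to each class (the generated data, the Gaussian choice and the test point are illustrative assumptions):

 % plug-in Bayes classifier via density estimation (1-D, Gaussian fits)
 x0 = randn(100,1);                 % assumed training points with Y = 0
 x1 = 2 + randn(80,1);              % assumed training points with Y = 1
 pi0 = 100/180;  pi1 = 80/180;      % estimated priors P(Y=0), P(Y=1)
 mu0 = mean(x0); s0 = std(x0);      % Gaussian fit for class 0
 mu1 = mean(x1); s1 = std(x1);      % Gaussian fit for class 1
 gauss = @(x,mu,s) exp(-(x-mu).^2/(2*s^2))/(sqrt(2*pi)*s);   % Gaussian density
 x_new = 1.2;                       % new point to classify
 r_hat = gauss(x_new,mu1,s1)*pi1 / (gauss(x_new,mu1,s1)*pi1 + gauss(x_new,mu0,s0)*pi0);
 h = double(r_hat > 1/2);           % predicted label: 1 if the estimated posterior exceeds 1/2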

It is possible that there may not be enough data to use density estimation, but the main problem lies with high dimensional spaces, as the estimation results may have a high error rate and sometimes estimation may be infeasible. The term curse of dimensionality was coined by Bellman <ref>R. E. Bellman, Dynamic Programming. Princeton University Press, 1957</ref> to describe this problem.

As the dimension of the space goes up, the number of data points required for learning increases exponentially.

To learn more about methods for handling high-dimensional data see <ref> https://docs.google.com/viewer?url=http%3A%2F%2Fwww.bios.unc.edu%2F~dzeng%2FBIOS740%2Flecture_notes.pdf</ref>

The third approach is the simplest.

Multi-Class Classification

Generalize to the case where Y takes on k>2 values.


Theorem: For [math]\displaystyle{ Y \in \mathcal{Y} = \{1,2,..., k\} }[/math], the optimal rule is

[math]\displaystyle{ \ h^{*}(x) = argmax_k P(Y=k|X=x) }[/math]

where [math]\displaystyle{ P(Y=k|X=x) = \frac{f_k(x) \pi_k} {\sum_r f_r(x) \pi_r} }[/math]

Examples of Classification

  • Face detection in images.
  • Medical diagnosis.
  • Detecting credit card fraud (fraudulent or legitimate).
  • Speech recognition.
  • Handwriting recognition.

There are also some interesting reads on Bayes Classification.

LDA and QDA

Discriminant function analysis finds features that best allow discrimination between two or more classes. The approach is similar to analysis of variance (ANOVA) in that discriminant function analysis looks at the mean values to determine if two or more classes are very different and should be separated. Once the discriminant functions (that separate two or more classes) have been determined, new data points can be classified (i.e. placed in one of the classes) based on the discriminant functions <ref> StatSoft, Inc. (2011). Electronic Statistics Textbook. [Online]. Available: http://www.statsoft.com/textbook/discriminant-function-analysis/. </ref>. Linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) are methods of discriminant analysis that are best applied to linearly and quadratically separable classes, respectively. Fisher discriminant analysis (FDA) is another method of discriminant analysis that is different from linear discriminant analysis, but oftentimes both terms are used interchangeably.

LDA

The simplest method is to use approach 3 (above) and assume a parametric model for the densities. Assume the class-conditional densities are Gaussian.

[math]\displaystyle{ \mathcal{Y} = \{ 0,1 \} }[/math] assumed (i.e., 2 labels)

[math]\displaystyle{ h(x) = \begin{cases} 1 \ \ P(Y=1|X=x) \gt P(Y=0|X=x) \\ 0 \ \ otherwise \end{cases} }[/math]

[math]\displaystyle{ P(Y=1|X=x) = \frac{f_1(x) \pi_1} {\sum_k f_k \pi_k} \ \ }[/math] (denom = P(x))

1) Assume Gaussian distributions

[math]\displaystyle{ f_k(x) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \text{exp}\big(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu_k})^\top \Sigma_k^{-1}(\mathbf{x}-\mathbf{\mu_k})\big) }[/math]

must compare [math]\displaystyle{ \frac{f_1(x) \pi_1} {p(x)} }[/math] with [math]\displaystyle{ \frac{f_0(x) \pi_0} {p(x)} }[/math] Note that the p(x) denom can be ignored: [math]\displaystyle{ f_1(x) \pi_1 }[/math] with [math]\displaystyle{ f_0(x) \pi_0 }[/math]

To find the decision boundary, set [math]\displaystyle{ f_1(x) \pi_1 = f_0(x) \pi_0 }[/math]

[math]\displaystyle{ \frac{1}{(2\pi)^{d/2} |\Sigma_1|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} - \mathbf{\mu_1})^\top \Sigma_1^{-1}(\mathbf{x}-\mathbf{\mu_1})\big)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma_0|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} -\mathbf{\mu_0})^\top \Sigma_0^{-1}(\mathbf{x}-\mathbf{\mu_0})\big)\pi_0 }[/math]

2) Assume [math]\displaystyle{ \Sigma_1 = \Sigma_0 }[/math], we can use [math]\displaystyle{ \Sigma = \Sigma_0 = \Sigma_1 }[/math].

[math]\displaystyle{ \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x} -\mathbf{\mu_1})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_1})\big)\pi_1 = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\big(-\frac{1}{2}(\mathbf{x}- \mathbf{\mu_0})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_0})\big)\pi_0 }[/math]

3) Cancel [math]\displaystyle{ (2\pi)^{-d/2} |\Sigma|^{-1/2} }[/math] from both sides.


[math]\displaystyle{ \exp\big(-\frac{1}{2}(\mathbf{x} - \mathbf{\mu_1})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_1})\big)\pi_1 = \exp\big(-\frac{1}{2}(\mathbf{x} - \mathbf{\mu_0})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_0})\big)\pi_0 }[/math]

4) Take log of both sides.

[math]\displaystyle{ -\frac{1}{2}(\mathbf{x} - \mathbf{\mu_1})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_1}) + \text{log}(\pi_1) = -\frac{1}{2}(\mathbf{x} - \mathbf{\mu_0})^\top \Sigma^{-1}(\mathbf{x}-\mathbf{\mu_0}) + \text{log}(\pi_0) }[/math]

5) Subtract one side from both sides, leaving zero on one side.


[math]\displaystyle{ -\frac{1}{2}(\mathbf{x - \mu_1})^T \Sigma^{-1} (\mathbf{x-\mu_1}) + \text{log}(\pi_1) - [-\frac{1}{2}(\mathbf{x - \mu_0})^T \Sigma^{-1} (\mathbf{x-\mu_0}) + \text{log}(\pi_0)] = 0 }[/math]


[math]\displaystyle{ \frac{1}{2}[-\mathbf{x}^T \Sigma^{-1}\mathbf{x} - \mathbf{\mu_1}^T \Sigma^{-1} \mathbf{\mu_1} + 2\mathbf{\mu_1}^T \Sigma^{-1} \mathbf{x} + \mathbf{x}^T \Sigma^{-1}\mathbf{x} + \mathbf{\mu_0}^T \Sigma^{-1} \mathbf{\mu_0} - 2\mathbf{\mu_0}^T \Sigma^{-1} \mathbf{x} ] + \text{log}(\frac{\pi_1}{\pi_0}) = 0 }[/math]


Cancelling out the terms quadratic in [math]\displaystyle{ \mathbf{x} }[/math] and rearranging results in

[math]\displaystyle{ \frac{1}{2}[-\mathbf{\mu_1}^T \Sigma^{-1} \mathbf{\mu_1} + \mathbf{\mu_0}^T \Sigma^{-1} \mathbf{\mu_0} + (2\mathbf{\mu_1}^T \Sigma^{-1} - 2\mathbf{\mu_0}^T \Sigma^{-1}) \mathbf{x}] + \text{log}(\frac{\pi_1}{\pi_0}) = 0 }[/math]


We can see that the first pair of terms is constant, and the second pair is linear in x. Therefore, we end up with something of the form [math]\displaystyle{ ax + b = 0 }[/math]. For more about LDA <ref>http://sites.stat.psu.edu/~jiali/course/stat597e/notes2/lda.pdf</ref>

LDA and QDA Continued (Lecture: Sep. 22, 2011)

If we relax assumption 2 (i.e. [math]\displaystyle{ \Sigma_1 \neq \Sigma_0 }[/math]) then we get a quadratic equation that can be written as [math]\displaystyle{ {x}^Ta{x}+b{x} + c = 0 }[/math]

Generalizing LDA and QDA

Theorem:


Suppose that [math]\displaystyle{ \,Y \in \{1,\dots,K\} }[/math] and that each class-conditional density [math]\displaystyle{ \,f_k(\mathbf{x}) = Pr(X=\mathbf{x}|Y=k) }[/math] is Gaussian. Then the Bayes Classifier is

[math]\displaystyle{ \,h^*(\mathbf{x}) = \arg\max_{k} \delta_k(\mathbf{x}) }[/math]

Where

[math]\displaystyle{ \,\delta_k(\mathbf{x}) = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k) }[/math]

When the Gaussian covariance matrices are equal across classes, [math]\displaystyle{ \Sigma_1 = \Sigma_0 }[/math] (i.e. LDA), then

[math]\displaystyle{ \,\delta_k(\mathbf{x}) = \mathbf{x}^\top\Sigma^{-1}\boldsymbol{\mu}_k - \frac{1}{2}\boldsymbol{\mu}_k^\top\Sigma^{-1}\boldsymbol{\mu}_k + log (\pi_k) }[/math]

(To compute this, we need to calculate the value of [math]\displaystyle{ \,\delta }[/math] for each class, and then take the one with the max. value).

In practice

We estimate the prior to be the chance that a random item from the collection belongs to class k, e.g.

[math]\displaystyle{ \,\hat{\pi_k} = \hat{Pr}(y=k) = \frac{n_k}{n} }[/math]

The mean to be the average item in set k, e.g.

[math]\displaystyle{ \,\hat{\mu_k} = \frac{1}{n_k}\sum_{i:y_i=k}x_i }[/math]

and calculate the covariance of each class e.g.

[math]\displaystyle{ \,\hat{\Sigma_k} = \frac{1}{n_k}\sum_{i:y_i=k}(x_i-\hat{\mu_k})(x_i-\hat{\mu_k})^\top }[/math]

If we wish to use LDA we must calculate a common covariance, so we average all the covariances e.g.

[math]\displaystyle{ \,\Sigma=\frac{\sum_{r=1}^{k}(n_r\Sigma_r)}{\sum_{r=1}^{k}n_r} }[/math]

Where: [math]\displaystyle{ \,n_r }[/math] is the number of data points in class [math]\displaystyle{ \,r }[/math], [math]\displaystyle{ \,\Sigma_r }[/math] is the covariance of class [math]\displaystyle{ \,r }[/math], [math]\displaystyle{ \,n }[/math] is the total number of data points, and [math]\displaystyle{ \,k }[/math] is the number of classes.
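A sketch of these estimates, followed by classification of a new point via the largest delta_k from the LDA expression above (the two generated Gaussian classes and the test point are illustrative assumptions):

 % LDA in practice: estimate priors, means and a pooled covariance, then classify by max delta_k
 X0 = randn(50,2);                          % assumed class-0 sample (rows are points)
 X1 = randn(60,2) + repmat([2 2],60,1);     % assumed class-1 sample
 n0 = size(X0,1); n1 = size(X1,1); n = n0 + n1;
 pi_hat = [n0 n1]/n;                        % estimated priors
 mu0 = mean(X0)'; mu1 = mean(X1)';          % estimated class means (column vectors)
 Sigma = (n0*cov(X0,1) + n1*cov(X1,1))/n;   % pooled covariance (per-class covariances weighted by n_k)
 x = [1; 1];                                % new point to classify
 delta = @(mu,pk) x'*(Sigma\mu) - 0.5*mu'*(Sigma\mu) + log(pk);
 [best, k] = max([delta(mu0,pi_hat(1)), delta(mu1,pi_hat(2))]);
 label = k - 1;                             % predicted class: 0 or 1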

Computation

For QDA we need to calculate: [math]\displaystyle{ \,\delta_k(\mathbf{x}) = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k) }[/math]

Let's first consider the case [math]\displaystyle{ \, \Sigma_k = I, \forall k }[/math]. This is the case where each distribution is spherical around its mean point.

Case 1

When [math]\displaystyle{ \, \Sigma_k = I }[/math]

We have:

[math]\displaystyle{ \,\delta_k = - \frac{1}{2}log(|I|) - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) + log (\pi_k) }[/math]

but [math]\displaystyle{ \ \log(|I|)=\log(1)=0 }[/math]

and [math]\displaystyle{ \, (\mathbf{x}-\boldsymbol{\mu}_k)^\top I(\mathbf{x}-\boldsymbol{\mu}_k) = (\mathbf{x}-\boldsymbol{\mu}_k)^\top(\mathbf{x}-\boldsymbol{\mu}_k) }[/math] is the squared Euclidean distance between two points [math]\displaystyle{ \,\mathbf{x} }[/math] and [math]\displaystyle{ \,\boldsymbol{\mu}_k }[/math]

Thus in this condition, a new point can be classified by its distance away from the center of a class, adjusted by some prior.

Further, for a two-class problem with equal priors, the decision boundary is the perpendicular bisector of the segment joining the two class means.

Case 2

When [math]\displaystyle{ \, \Sigma_k \neq I }[/math]


Using the Singular Value Decomposition (SVD) of [math]\displaystyle{ \, \Sigma_k }[/math] we get [math]\displaystyle{ \, \Sigma_k = U_kS_kV_k^\top }[/math]. In particular, [math]\displaystyle{ \, U_k }[/math] is a collection of eigenvectors of [math]\displaystyle{ \, \Sigma_k\Sigma_k^* }[/math], and [math]\displaystyle{ \, V_k }[/math] is a collection of eigenvectors of [math]\displaystyle{ \,\Sigma_k^*\Sigma_k }[/math]. Since [math]\displaystyle{ \, \Sigma_k }[/math] is a symmetric matrix<ref> http://en.wikipedia.org/wiki/Covariance_matrix#Properties </ref>, [math]\displaystyle{ \, \Sigma_k = \Sigma_k^* }[/math], so we have [math]\displaystyle{ \, \Sigma_k = U_kS_kU_k^\top }[/math].

For [math]\displaystyle{ \,\delta_k }[/math], the second term becomes what is also known as the Mahalanobis distance <ref>P. C. Mahalanobis, "On The Generalised Distance in Statistics," Proceedings of the National Institute of Sciences of India, 1936</ref> :

[math]\displaystyle{ \begin{align} (\mathbf{x}-\boldsymbol{\mu}_k)^\top\Sigma_k^{-1}(\mathbf{x}-\boldsymbol{\mu}_k)&= (\mathbf{x}-\boldsymbol{\mu}_k)^\top U_kS_k^{-1}U_k^T(\mathbf{x}-\boldsymbol{\mu}_k)\\ & = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-1}(U_k^\top \mathbf{x}-U_k^\top \boldsymbol{\mu}_k)\\ & = (U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k)^\top S_k^{-\frac{1}{2}}S_k^{-\frac{1}{2}}(U_k^\top \mathbf{x}-U_k^\top\boldsymbol{\mu}_k) \\ & = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top I(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\ & = (S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k) \\ \end{align} }[/math]

We can think of [math]\displaystyle{ \, S_k^{-\frac{1}{2}}U_k^\top }[/math] as a linear transformation that takes points in class [math]\displaystyle{ \,k }[/math] and distributes them spherically around a point, as in Case 1. Thus, when we are given a new point, we can apply this transformation and use the modified [math]\displaystyle{ \,\delta_k }[/math] values to calculate [math]\displaystyle{ \ h^*(\,x) }[/math]. After applying this transformation, the covariance term behaves like an identity matrix, so that

[math]\displaystyle{ \,\delta_k = - \frac{1}{2}log(|I|) - \frac{1}{2}[(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top\boldsymbol{\mu}_k)^\top(S_k^{-\frac{1}{2}}U_k^\top \mathbf{x}-S_k^{-\frac{1}{2}}U_k^\top \boldsymbol{\mu}_k)] + log (\pi_k) }[/math]

and,

[math]\displaystyle{ \ \log(|I|)=\log(1)=0 }[/math]

For applying the above method with classes that have different covariance matrices (for example the covariance matrices [math]\displaystyle{ \ \Sigma_0 }[/math] and [math]\displaystyle{ \ \Sigma_1 }[/math] for the two class case), each of the covariance matrices has to be decomposed using SVD to find the according transformation. Then, each new data point has to be transformed using each transformation to compare its distance to the mean of each class (for example for the two class case, the new data point would have to be transformed by the class 1 transformation and then compared to [math]\displaystyle{ \ \mu_0 }[/math] and the new data point would also have to be transformed by the class 2 transformation and then compared to [math]\displaystyle{ \ \mu_1 }[/math]).
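A small sketch of this whitening transformation for a single class, assuming its covariance matrix and mean have already been estimated (the numerical values below are illustrative assumptions):

 % Mahalanobis distance of a point x from mu_k via the transformation S_k^(-1/2) U_k'
 Sigma_k = [2 0.5; 0.5 1];                       % assumed class covariance matrix
 mu_k = [1; 2];                                  % assumed class mean
 x = [2; 1];                                     % new point
 [U, S, V] = svd(Sigma_k);                       % Sigma_k = U S U' since Sigma_k is symmetric
 z = diag(1./sqrt(diag(S))) * U' * (x - mu_k);   % whitened difference S^(-1/2) U' (x - mu_k)
 d2 = z'*z;                                      % squared Mahalanobis distance
 % the same value is obtained directly from (x - mu_k)' * inv(Sigma_k) * (x - mu_k)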


The difference between Case 1 and Case 2 (i.e. the difference between using the Euclidean and Mahalanobis distance) can be seen in the illustration below.

Illustration of Euclidean distance (a) and Mahalanobis distance (b) where the contours represent equidistant points from the center using each distance metric. Source: <ref>R. De Maesschalck, D. Jouan-Rimbaud and D. L. Massart, "Tutorial - The Mahalanobis distance," Chemometrics and Intelligent Laboratory Systems, 2000 </ref>

As can be seen from the illustration above, the Mahalanobis distance takes into account the distribution of the data points, whereas the Euclidean distance would treat the data as though it has a spherical distribution. Thus, the Mahalanobis distance applies for the more general classification in Case 2, whereas the Euclidean distance applies to the special case in Case 1 where the data distribution is assumed to be spherical.

Generally, we can conclude that QDA provides a more flexible classifier for the data than LDA, because LDA assumes that the covariance matrix is identical for each class while QDA does not. QDA still uses a Gaussian distribution as the class-conditional distribution; in practice, real data do not always follow a Gaussian distribution, so other class-conditional distributions may be needed.

The Number of Parameters in LDA and QDA

Both LDA and QDA require us to estimate some parameters. Here is a comparison between the number of parameters needed to be estimated for LDA and QDA:

LDA: Since we just need to compare the differences between one given class and the remaining [math]\displaystyle{ \,K-1 }[/math] classes, there are [math]\displaystyle{ \,K-1 }[/math] differences in total. For each of them, [math]\displaystyle{ \,a^{T}x+b }[/math] requires [math]\displaystyle{ \,d+1 }[/math] parameters. Therefore, there are [math]\displaystyle{ \,(K-1)\times(d+1) }[/math] parameters.

QDA: For each of the differences, [math]\displaystyle{ \,x^{T}ax + b^{T}x + c }[/math] requires [math]\displaystyle{ \frac{1}{2}(d+1)\times d + d + 1 = \frac{d(d+3)}{2}+1 }[/math] parameters. Therefore, there are [math]\displaystyle{ (K-1)(\frac{d(d+3)}{2}+1) }[/math] parameters. For example, with d = 10 features, each LDA difference needs 11 parameters while each QDA difference needs 66. Thus QDA suffers far more severely from the curse of dimensionality.

A plot of the number of parameters that must be estimated, in terms of (K-1). The x-axis represents the number of dimensions in the data. As is easy to see, QDA is far less robust than LDA for high-dimensional data sets.

Trick: Using LDA to do QDA

There is a trick that allows us to use the linear discriminant analysis (LDA) algorithm to generate as its output a quadratic function that can be used to classify data. This trick is similar to, but more primitive than, the Kernel trick that will be discussed later in the course.

In this approach the feature vector is augmented with quadratic terms (i.e. new dimensions are introduced) onto which the original data are projected. We then apply LDA on the new higher-dimensional data.

The motivation behind this approach is to take advantage of the fact that fewer parameters have to be calculated in LDA, as explained in previous sections, and therefore to have a more robust system in situations where we have fewer data points.

If we look back at the equations for LDA and QDA, we see that in LDA we must estimate [math]\displaystyle{ \,\mu_1 }[/math], [math]\displaystyle{ \,\mu_2 }[/math] and [math]\displaystyle{ \,\Sigma }[/math]. In QDA we must estimate all of those, plus another [math]\displaystyle{ \,\Sigma }[/math]; the extra [math]\displaystyle{ \,\frac{d(d+1)}{2} }[/math] estimates make QDA less robust with fewer data points.

Theoretically

Suppose we have a quadratic function to estimate: [math]\displaystyle{ g(\mathbf{x}) = y = \mathbf{x}^T\mathbf{v}\mathbf{x} + \mathbf{w}^T\mathbf{x} }[/math].

Using this trick, we introduce two new vectors, [math]\displaystyle{ \,\hat{\mathbf{w}} }[/math] and [math]\displaystyle{ \,\hat{\mathbf{x}} }[/math] such that:

[math]\displaystyle{ \hat{\mathbf{w}} = [w_1,w_2,...,w_d,v_1,v_2,...,v_d]^T }[/math]

and

[math]\displaystyle{ \hat{\mathbf{x}} = [x_1,x_2,...,x_d,{x_1}^2,{x_2}^2,...,{x_d}^2]^T }[/math]

We can then apply LDA to estimate the new function: [math]\displaystyle{ \hat{g}(\mathbf{x},\mathbf{x}^2) = \hat{y} =\hat{\mathbf{w}}^T\hat{\mathbf{x}} }[/math].

Note that we can do this for any [math]\displaystyle{ \, x }[/math] and in any dimension; we could extend a [math]\displaystyle{ D \times n }[/math] matrix to a quadratic dimension by appending another [math]\displaystyle{ D \times n }[/math] matrix with the original matrix squared, to a cubic dimension with the original matrix cubed, or even with a different function altogether, such as a [math]\displaystyle{ \,sin(x) }[/math] dimension. Note, we are not applying QDA, but instead extending LDA to calculate a non-linear boundary, that will be different from QDA. This algorithm is called nonlinear LDA.
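A sketch of this trick, reusing the LDA steps from earlier on feature vectors augmented with their squared terms (the generated classes and the test point are illustrative assumptions):

 % nonlinear LDA: augment each feature vector with its squares, then run ordinary LDA
 X0 = randn(50,2);  X1 = 1.5*randn(60,2);    % assumed classes with different spreads
 A0 = [X0, X0.^2];  A1 = [X1, X1.^2];        % augmented features (x1, x2, x1^2, x2^2)
 n0 = size(A0,1); n1 = size(A1,1); n = n0 + n1;
 mu0 = mean(A0)'; mu1 = mean(A1)';
 Sigma = (n0*cov(A0,1) + n1*cov(A1,1))/n;    % pooled covariance in the augmented space
 x = [0.5 -0.5];                             % new point in the original space
 a = [x, x.^2]';                             % augment it in the same way
 delta0 = a'*(Sigma\mu0) - 0.5*mu0'*(Sigma\mu0) + log(n0/n);
 delta1 = a'*(Sigma\mu1) - 0.5*mu1'*(Sigma\mu1) + log(n1/n);
 label = double(delta1 > delta0);            % linear in the augmented space, quadratic in x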

Principal Component Analysis (PCA) (Lecture: Sep. 27, 2011)

Principal Component Analysis (PCA) is a method of dimensionality reduction/feature extraction that transforms the data from a D-dimensional space into a new coordinate system of dimension d, where d <= D (the worst case would be d = D). The goal is to preserve as much of the variance in the original data as possible when switching coordinate systems. Given data on D variables, the hope is that the data points will lie mainly in a linear subspace of dimension lower than D. In practice, the data will usually not lie precisely in some lower-dimensional subspace.


The new variables that form a new coordinate system are called principal components (PCs). PCs are denoted by [math]\displaystyle{ \ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_D }[/math]. The principal components form a basis for the data. Since PCs are orthogonal linear transformations of the original variables, there are at most D PCs. Normally, not all of the D PCs are used but rather a subset of d PCs, [math]\displaystyle{ \ \mathbf{u}_1, \mathbf{u}_2, ... , \mathbf{u}_d }[/math], to approximate the space spanned by the original data points [math]\displaystyle{ \ \mathbf{x}=[x_1, x_2, ... , x_D]^T }[/math]. We can choose d based on what percentage of the variance of the original data we would like to maintain.

The first PC, [math]\displaystyle{ \ \mathbf{u}_1 }[/math], captures the largest variance and thus accounts for the most significant variation in the data. The second PC, [math]\displaystyle{ \ \mathbf{u}_2 }[/math], has the second highest variance, and so on, until the last PC, [math]\displaystyle{ \ \mathbf{u}_D }[/math], which has the minimum variance.

Let [math]\displaystyle{ u_i = \mathbf{w}^T\mathbf{x_i} }[/math] be the projection of the data point [math]\displaystyle{ \mathbf{x_i} }[/math] on the direction of w if w is of length one.


[math]\displaystyle{ \mathbf{u = (u_1,....,u_D)^T}\qquad }[/math] , [math]\displaystyle{ \quad\mathbf{w^Tw = 1 } }[/math]


[math]\displaystyle{ var(u) =\mathbf{w}^T X (\mathbf{w}^T X)^T = \mathbf{w}^T X X^T\mathbf{w} = \mathbf{w}^TS\mathbf{w} \quad }[/math] Where [math]\displaystyle{ \quad X X^T = S }[/math] is the sample covariance matrix.


We would like to find the [math]\displaystyle{ \ \mathbf{w} }[/math] which gives us maximum variation:

[math]\displaystyle{ \ \max (Var(\mathbf{w}^T \mathbf{x})) = \max (\mathbf{w}^T S \mathbf{w}) }[/math]


Note: we require the constraint [math]\displaystyle{ \ \mathbf{w}^T \mathbf{w} = 1 }[/math] because if there is no constraint on the length of [math]\displaystyle{ \ \mathbf{w} }[/math] then there is no upper bound. With the constraint, the direction and not the length that maximizes the variance can be found.


Lagrange Multiplier

Before we proceed, we should review Lagrange multipliers.

"The red line shows the constraint g(x,y) = c. The blue lines are contours of f(x,y). The point where the red line tangentially touches a blue contour is our solution." [Lagrange Multipliers, Wikipedia]


Lagrange multipliers are used to find the maximum or minimum of a function [math]\displaystyle{ \displaystyle f(x,y) }[/math] subject to constraint [math]\displaystyle{ \displaystyle g(x,y)=0 }[/math]

we define a new constant [math]\displaystyle{ \lambda }[/math] called a Lagrange Multiplier and we form the Lagrangian,

[math]\displaystyle{ \displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y) }[/math]

If [math]\displaystyle{ \displaystyle f(x^*,y^*) }[/math] is the max of [math]\displaystyle{ \displaystyle f(x,y) }[/math], there exists [math]\displaystyle{ \displaystyle \lambda^* }[/math] such that [math]\displaystyle{ \displaystyle (x^*,y^*,\lambda^*) }[/math] is a stationary point of [math]\displaystyle{ \displaystyle L }[/math] (partial derivatives are 0).
In addition [math]\displaystyle{ \displaystyle (x^*,y^*) }[/math] is a point in which functions [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] touch but do not cross. At this point, the tangents of [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are parallel or gradients of [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are parallel, such that:

[math]\displaystyle{ \displaystyle \nabla_{x,y } f = \lambda \nabla_{x,y } g }[/math]

where,
[math]\displaystyle{ \displaystyle \nabla_{x,y} f = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial{y}}) \leftarrow }[/math] the gradient of [math]\displaystyle{ \, f }[/math]
[math]\displaystyle{ \displaystyle \nabla_{x,y} g = (\frac{\partial g}{\partial{x}},\frac{\partial{g}}{\partial{y}}) \leftarrow }[/math] the gradient of [math]\displaystyle{ \, g }[/math]

Example :

Suppose we want to maximize the function [math]\displaystyle{ \displaystyle f(x,y)=x-y }[/math] subject to the constraint [math]\displaystyle{ \displaystyle x^{2}+y^{2}=1 }[/math]. We can apply the Lagrange multiplier method to find the maximum value for the function [math]\displaystyle{ \displaystyle f }[/math]; the Lagrangian is:

[math]\displaystyle{ \displaystyle L(x,y,\lambda) = x-y - \lambda (x^{2}+y^{2}-1) }[/math]

We want the partial derivatives equal to zero:


[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial x}=1+2 \lambda x=0 }[/math]

[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial y}=-1+2\lambda y=0 }[/math]

[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial \lambda}=x^2+y^2-1 }[/math]

Solving the system we obtain two stationary points: [math]\displaystyle{ \displaystyle (\sqrt{2}/2,-\sqrt{2}/2) }[/math] and [math]\displaystyle{ \displaystyle (-\sqrt{2}/2,\sqrt{2}/2) }[/math]. In order to determine which one is the maximum, we just need to substitute each into [math]\displaystyle{ \displaystyle f(x,y) }[/math] and see which one has the bigger value. In this case the maximum is attained at [math]\displaystyle{ \displaystyle (\sqrt{2}/2,-\sqrt{2}/2) }[/math].

Determining w :

Use the Lagrange multiplier conversion to obtain: [math]\displaystyle{ \displaystyle L(\mathbf{w}, \lambda) = \mathbf{w}^T S\mathbf{w} - \lambda (\mathbf{w}^T \mathbf{w} - 1) }[/math] where [math]\displaystyle{ \displaystyle \lambda }[/math] is a constant

Take the derivative and set it to zero: [math]\displaystyle{ \displaystyle{\partial L \over{\partial \mathbf{w}}} = 0 }[/math]


To obtain: [math]\displaystyle{ \displaystyle 2S\mathbf{w} - 2 \lambda \mathbf{w} = 0 }[/math]


Rearrange to obtain: [math]\displaystyle{ \displaystyle S\mathbf{w} = \lambda \mathbf{w} }[/math]


where [math]\displaystyle{ \displaystyle \mathbf{w} }[/math] is an eigenvector of [math]\displaystyle{ \displaystyle S }[/math] and [math]\displaystyle{ \ \lambda }[/math] is the corresponding eigenvalue. Since [math]\displaystyle{ \displaystyle S\mathbf{w}= \lambda \mathbf{w} }[/math] and [math]\displaystyle{ \displaystyle \mathbf{w}^T \mathbf{w}=1 }[/math], we can write

[math]\displaystyle{ \displaystyle \mathbf{w}^T S\mathbf{w}= \mathbf{w}^T\lambda \mathbf{w}= \lambda \mathbf{w}^T \mathbf{w} =\lambda }[/math]

Note that the PCs decompose the total variance in the data in the following way :

[math]\displaystyle{ \sum_{i=1}^{D} Var(u_i) }[/math]

[math]\displaystyle{ = \sum_{i=1}^{D} (\lambda_i) }[/math]

[math]\displaystyle{ \ = Tr(S) }[/math] ---- (S is a co-variance matrix, and therefore it's symmetric)

[math]\displaystyle{ = \sum_{i=1}^{D} Var(x_i) }[/math]
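A short sketch that checks these facts numerically: the first principal component is the top eigenvector of S, the variance of the projection equals the corresponding eigenvalue, and the eigenvalues sum to Tr(S) (the generated data are an illustrative assumption):

 % principal directions as eigenvectors of S = X*X' on centred data
 D = 3; n = 200;
 X = randn(D, n);                          % assumed data: points are the columns of X
 X = X - repmat(mean(X,2), 1, n);          % centre each dimension
 S = X*X';                                 % covariance matrix (up to a constant factor, as above)
 [W, L] = eig(S);                          % columns of W are eigenvectors, diag(L) the eigenvalues
 [lams, idx] = sort(diag(L), 'descend');   % order the eigenvalues from largest to smallest
 w1 = W(:, idx(1));                        % first principal component direction
 u1 = w1'*X;                               % projection of the data onto w1
 var_u1 = u1*u1';                          % equals the largest eigenvalue lams(1)
 total  = sum(lams);                       % equals trace(S), the total variance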

Principal Component Analysis (PCA) Continued (Lecture: Sep. 29, 2011)

As can be seen from the above expressions, [math]\displaystyle{ \ Var(\mathbf{w}^\top \mathbf{x}) = \mathbf{w}^\top S \mathbf{w}= \lambda }[/math] where lambda is an eigenvalue of the sample covariance matrix [math]\displaystyle{ \ S }[/math] and [math]\displaystyle{ \ \mathbf{w} }[/math] is its corresponding eigenvector. So [math]\displaystyle{ \ Var(u_i) }[/math] is maximized if [math]\displaystyle{ \ \lambda_i }[/math] is the maximum eigenvalue of [math]\displaystyle{ \ S }[/math] and the first principal component (PC) is the corresponding eigenvector. Each successive PC can be generated in the above manner by taking the eigenvectors of [math]\displaystyle{ \ S }[/math]<ref>www.wikipedia.org/wiki/Eigenvalues_and_eigenvectors</ref> that correspond to the eigenvalues:

[math]\displaystyle{ \ \lambda_1 \geq ... \geq \lambda_D }[/math]

such that

[math]\displaystyle{ \ Var(u_1) \geq ... \geq Var(u_D) }[/math]

Alternative Derivation

Another way of looking at PCA is to consider PCA as a projection from a higher D-dimensional space to a lower d-dimensional subspace that minimizes the squared reconstruction error. The squared reconstruction error is the difference between the original data set [math]\displaystyle{ \ X }[/math] and the new data set [math]\displaystyle{ \hat{X} }[/math] obtained by first projecting the original data set into a lower d-dimensional subspace and then projecting it back into the original higher D-dimensional space. Since information is (normally) lost by compressing the original data into a lower d-dimensional subspace, the new data set will (normally) differ from the original data even though both are part of the higher D-dimensional space. The reconstruction error is computed as shown below.

Reconstruction Error

[math]\displaystyle{ e = \sum_{i=1}^{n} || x_i - \hat{x}_i ||^2 }[/math]

Minimize Reconstruction Error

Suppose [math]\displaystyle{ \bar{x} = 0 }[/math], i.e. the data have been centred (otherwise replace each [math]\displaystyle{ x_i }[/math] with [math]\displaystyle{ x_i - \bar{x} }[/math]).

Let [math]\displaystyle{ \ f(y) = U_d y }[/math] where [math]\displaystyle{ \ U_d }[/math] is a D by d matrix with d orthogonal unit vectors as columns.

Fit the model to the data and minimize the reconstruction error:

[math]\displaystyle{ \ min_{U_d, y_i} \sum_{i=1}^n || x_i - U_d y_i ||^2 }[/math]

Differentiate with respect to [math]\displaystyle{ \ y_i }[/math]:

[math]\displaystyle{ \frac{\partial e}{\partial y_i} = 0 }[/math]

we can rewrite reconstruction-error as : [math]\displaystyle{ \ e = \sum_{i=1}^n(x_i - U_d y_i)^T(x_i - U_d y_i) }[/math]

[math]\displaystyle{ \ \frac{\partial e}{\partial y_i} = -2U_d^T(x_i - U_d y_i) = 0 }[/math]

Since the columns of [math]\displaystyle{ \ U_d }[/math] are orthonormal, [math]\displaystyle{ \ U_d^T U_d = I }[/math],

so the equation above gives [math]\displaystyle{ \ U_d^T x_i - y_i = 0 }[/math] or equivalently,

[math]\displaystyle{ \ y_i = U_d^T x_i }[/math]

Find the orthogonal matrix [math]\displaystyle{ \ U_d }[/math]:

[math]\displaystyle{ \ min_{U_d} \sum_{i=1}^n || x_i - U_d U_d^T x_i||^2 }[/math]

PCA Implementation Using Singular Value Decomposition

A unique solution can be obtained by finding the Singular Value Decomposition (SVD) of [math]\displaystyle{ \ X }[/math]:

[math]\displaystyle{ \ X = U S V^T }[/math]

For each rank d, [math]\displaystyle{ \ U_d }[/math] consists of the first d columns of [math]\displaystyle{ \ U }[/math]. Also, the covariance matrix can be expressed as follows [math]\displaystyle{ \ S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \mu)(x_i - \mu)^T }[/math].

Simply put, by subtracting the mean of each of the data point features and then applying SVD, one can find the principal components:

[math]\displaystyle{ \tilde{X} = X - \mu }[/math]

[math]\displaystyle{ \ \tilde{X} = U S V^T }[/math]

Where [math]\displaystyle{ \ X }[/math] is a d by n matrix of data points and the features of each data point form a column in [math]\displaystyle{ \ X }[/math]. Also, [math]\displaystyle{ \ \mu }[/math] is a d by n matrix with identical columns each equal to the mean of the [math]\displaystyle{ \ x_i }[/math]'s, ie [math]\displaystyle{ \mu_{:,j}=\frac{1}{n}\sum_{i=1}^n x_i }[/math]. Note that the arrangement of data points is a convention and indeed in Matlab or conventional statistics, the transpose of the matrices in the above formulae is used.

As the [math]\displaystyle{ \ S }[/math] matrix from the SVD has the singular values arranged from largest to smallest, the corresponding columns of the [math]\displaystyle{ \ U }[/math] matrix from the SVD will be such that the first column of [math]\displaystyle{ \ U }[/math] is the first principal component, the second column is the second principal component, and so on.

Examples

Note that in the Matlab code in the examples below, the mean was not subtracted from the datapoints before performing SVD. This is what was shown in class. However, to properly perform PCA, the mean should be subtracted from the datapoints.
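For completeness, a minimal sketch of PCA with the mean properly subtracted, following the formulas above (the data matrix and the choice d = 2 are illustrative assumptions):

 % PCA via SVD with the mean subtracted first
 X = randn(5, 100);                          % assumed data: D = 5 dimensions, n = 100 points (columns)
 mu = mean(X, 2);                            % mean data point (D by 1)
 Xt = X - repmat(mu, 1, size(X,2));          % X tilde: subtract the mean from every column
 [U, S, V] = svd(Xt, 'econ');                % columns of U are the principal components
 d = 2;                                      % keep the first two PCs
 Y = U(:,1:d)'*Xt;                           % d by n encoding of the data
 X_hat = U(:,1:d)*Y + repmat(mu, 1, size(X,2));   % reconstruction back in the original space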

Example 1

Consider a matrix of data points [math]\displaystyle{ \ X }[/math] with the dimensions 560 by 1965. 560 is the number of elements in each column. Each column is a vector representation of a 20x28 grayscale pixel image of a face (see image below) and there is a total of 1965 different images of faces. Each of the images is corrupted by noise, but the noise can be removed by projecting the data back to the original space using as many dimensions as one likes (e.g., 2, 3, 4 or 5). The corresponding Matlab commands are shown below:

 >> % start with a 560 by 1965 matrix X that contains the data points
 >> load('noisy.mat');
 >> 
 >> % set the colors to grayscale 
 >> colormap gray
 >> 
 >> % show image in column 10 by reshaping column 10 into a 20 by 28 matrix
 >> imagesc(reshape(X(:,10),20,28)')
 >> 
 >> % perform SVD; if the matrix X is full rank, we will obtain 560 PCs
 >> [U S V] = svd(X);
 >> 
 >> % project X onto the first ten principal components
 >> Y_pca = U(:, 1:10)'*X;
 >> 
 >> % reconstruct X (project back into the original space) using only the first ten PCs
 >> X_hat = U(:, 1:10)*Y_pca;
 >> 
 >> % show image in column 10 of X_hat which is now a 560 by 1965 matrix
 >> imagesc(reshape(X_hat(:,10),20,28)')

The reason why the noise is removed in the reconstructed image is because the noise does not create a major variation in a single direction in the original data. Hence, the first ten PCs taken from [math]\displaystyle{ \ U }[/math] matrix are not in the direction of the noise. Thus, reconstructing the image using the first ten PCs, will remove the noise.

Example 2

Consider a matrix of data points [math]\displaystyle{ \ X }[/math] with the dimensions 64 by 400. 64 is the number of elements in each column. Each column is a vector representation of a 8x8 grayscale pixel image of either a handwritten number 2 or a handwritten number 3 (see image below) and there are a total of 400 different images, where the first 200 images show a handwritten number 2 and the last 200 images show a handwritten number 3.

An example of the handwritten number images used in Example 2. Source: <ref>A. Ghodsi, "PCA" class notes for STAT841, Department of Statistics and Actuarial Science, University of Waterloo, 2011. </ref>

The corresponding Matlab commands for performing PCA on the data points are shown below:

 >> % start with a 64 by 400 matrix X that contains the data points
 >> load 2_3.mat;
 >> 
 >> % set the colors to grayscale 
 >> colormap gray
 >> 
 >> % show image in column 2 by reshaping column 2 into a 8 by 8 matrix
 >> imagesc(reshape(X(:,2),8,8))
 >> 
 >> % perform SVD; if the matrix X is full rank, we will obtain 64 PCs
 >> [U S V] = svd(X);
 >> 
 >> % project data down onto the first two PCs
 >> Y = U(:,1:2)'*X;
 >> 
 >> % show Y as an image (can see the change in the first PC at column 200,
 >> % when the handwritten number changes from 2 to 3)
 >> imagesc(Y)
 >> 
 >> % perform PCA using Matlab build-in function (do not use for assignment)
 >> % also note that due to the Matlab convention, the transpose of X is used
 >> [COEFF, Y] = princomp(X');
 >> 
 >> % again, use the first two PCs
 >> Y = Y(:,1:2);
 >> 
 >> % use plot digits to show the distribution of images on the first two PCs
 >> images = reshape(X, 8, 8, 400);
 >> plotdigits(images, Y, .1, 1);

Using the plotdigits function in Matlab clearly illustrates that the first PC captured the differences between the numbers 2 and 3, as they are projected onto different regions of the axis for the first PC. Also, the second PC captured the tilt of the handwritten numbers, as numbers tilted to the left or right were projected onto different regions of the axis for the second PC.

Example 3

(Not discussed in class) In the news recently was a story that captures some of the ideas behind PCA. Over the past two years, Scott Golder and Michael Macy, researchers from Cornell University, collected 509 million Twitter messages from 2.4 million users in 84 different countries. The data they used were words collected at various times of day and they classified the data into two different categories: positive emotion words and negative emotion words. Then, they were able to study this new data to evaluate subjects' moods at different times of day, while the subjects were in different parts of the world. They found that the subjects generally exhibited positive emotions in the mornings and late evenings, and negative emotions mid-day. They were able to "project their data onto a smaller dimensional space" using PCA. Their paper, "Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures," is available in the journal Science.<ref>http://www.pcworld.com/article/240831/twitter_analysis_reveals_global_human_moodiness.html</ref>.

Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>

Example 4

(Not discussed in class) A somewhat well known learning rule in the field of neural networks called Oja's rule can be used to train networks of neurons to compute the principal component directions of data sets. <ref>A Simplified Neuron Model as a Principal Component Analyzer. Erkki Oja. 1982. Journal of Mathematical Biology. 15: 267-273</ref> This rule is formulated as follows

[math]\displaystyle{ \,\Delta w = \eta yx -\eta y^2w }[/math]

where [math]\displaystyle{ \,\Delta w }[/math] is the neuron weight change, [math]\displaystyle{ \,\eta }[/math] is the learning rate, [math]\displaystyle{ \,y }[/math] is the neuron output given the current input, [math]\displaystyle{ \,x }[/math] is the current input and [math]\displaystyle{ \,w }[/math] is the current neuron weight. This learning rule shares some similarities with another method for calculating principal components: power iteration. The basic algorithm for power iteration (taken from wikipedia: <ref>Wikipedia. http://en.wikipedia.org/wiki/Principal_component_analysis#Computing_principal_components_iteratively</ref>) is shown below


[math]\displaystyle{ \mathbf{p} = }[/math] a random vector
do c times:
      [math]\displaystyle{ \mathbf{t} = 0 }[/math] (a vector of length m)
      for each row [math]\displaystyle{ \mathbf{x} \in \mathbf{X^T} }[/math]
            [math]\displaystyle{ \mathbf{t} = \mathbf{t} + (\mathbf{x} \cdot \mathbf{p})\mathbf{x} }[/math]
      [math]\displaystyle{ \mathbf{p} = \frac{\mathbf{t}}{|\mathbf{t}|} }[/math]
return [math]\displaystyle{ \mathbf{p} }[/math]

Comparing this with the neuron learning rule we can see that the term [math]\displaystyle{ \, \eta y x }[/math] is very similar to the [math]\displaystyle{ \,\mathbf{t} }[/math] update equation in the power iteration method, and identical if the neuron model is assumed to be linear ([math]\displaystyle{ \,y(x)=x\mathbf{p} }[/math]) and the learning rate is set to 1. Additionally, the [math]\displaystyle{ \, -\eta y^2w }[/math] term performs the normalization, the same function as the [math]\displaystyle{ \,\mathbf{p} }[/math] update equation in the power iteration method.
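For concreteness, the power-iteration idea can be written as a short Matlab/Octave sketch (the data matrix and the number of iterations c are illustrative assumptions):

 % power iteration for the first principal component direction
 X = randn(100, 4);                          % assumed data: 100 observations (rows), 4 features
 X = X - repmat(mean(X,1), 100, 1);          % centre the columns
 p = randn(4,1); p = p/norm(p);              % random starting vector of unit length
 c = 50;                                     % assumed number of iterations
 for k = 1:c
     t = zeros(4,1);
     for i = 1:size(X,1)
         x = X(i,:)';                        % one data point as a column vector
         t = t + (x'*p)*x;                   % accumulate (x . p) x, i.e. t = (X'X) p
     end
     p = t/norm(t);                          % renormalize; p converges to the top eigenvector of X'X
 end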

Observations

Some observations about the PCA were brought up in class:

  • PCA assumes that data is on a linear subspace or close to a linear subspace. For non-linear dimensionality reduction, other techniques are used. Amongst the first proposed techniques for non-linear dimensionality reduction are Locally Linear Embedding (LLE) and Isomap. More recent techniques include Maximum Variance Unfolding (MVU) and t-Distributed Stochastic Neighbor Embedding (t-SNE). Kernel PCAs may also be used, but they depend on the type of kernel used and generally do not work well in practice. (Kernels will be covered in more detail later in the course.)
  • Finding the number of PCs to use is not straightforward. It requires knowledge about the intrinsic dimensionality of the data. In practice, oftentimes a heuristic approach is adopted by looking at the eigenvalues ordered from largest to smallest. If there is a "dip" in the magnitude of the eigenvalues, the "dip" is used as a cut-off point and only the large eigenvalues before the "dip" are used. Otherwise, it is possible to add up the eigenvalues from largest to smallest until a certain percentage value is reached. This percentage value represents the percentage of variance that is preserved when projecting onto the PCs corresponding to the eigenvalues that have been added together to achieve the percentage.
  • It is a good idea to normalize the variance of the data before applying PCA. This will avoid PCA finding PCs in certain directions due to the scaling of the data, rather than the real variance of the data.
  • PCA can be considered as an unsupervised approach, since the main direction of variation is not known beforehand, i.e. it is not completely certain which dimension the first PC will capture. The PCs found may not correspond to the desired labels for the data set. There are, however, alternate methods for performing supervised dimensionality reduction.
  • (Not in class) The traditional PCA method does not work well on data sets that lie on a non-linear manifold. A revised PCA method, called c-PCA, has been introduced to improve the stability and convergence of intrinsic dimension estimation. The approach first finds a minimal cover (a cover of a set X is a collection of sets whose union contains X as a subset<ref>http://en.wikipedia.org/wiki/Cover_(topology)</ref>) of the data set. Since set covering is an NP-hard problem, the approach only finds an approximation of a minimal cover to reduce the complexity of the run time. In each subset of the minimal cover, it applies PCA and filters out the noise in the data. Finally the global intrinsic dimension can be determined from the variance results from all the subsets. The algorithm produces robust results.<ref>Mingyu Fan, Nannan Gu, Hong Qiao, Bo Zhang, Intrinsic dimension estimation of data by principal component analysis, 2010. Available: http://arxiv.org/abs/1002.2050</ref>
  • (Not in class) While PCA finds the mathematically optimal method (as in minimizing the squared error), it is sensitive to outliers in the data, which produce large errors that PCA tries to avoid. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA based on a Weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy.<ref>http://en.wikipedia.org/wiki/Principal_component_analysis</ref>
  • (Not in class) Comparison between PCA and LDA: Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two commonly used techniques for data classification and dimensionality reduction. "Linear Discriminant Analysis easily handles the case where the within-class frequencies are unequal and their performance has been examined on randomly generated test data. This method maximizes the ratio of between-class variance to the within-class variance in any particular data set, thereby guaranteeing maximal separability. ... The prime difference between LDA and PCA is that PCA does more of feature classification and LDA does data classification. In PCA, the shape and location of the original data sets change when transformed to a different space, whereas LDA doesn't change the location but only tries to provide more class separability and draw a decision region between the given classes. This method also helps to better understand the distribution of the feature data." <ref> Balakrishnama, S., Ganapathiraju, A. LINEAR DISCRIMINANT ANALYSIS - A BRIEF TUTORIAL. http://www.isip.piconepress.com/publications/reports/isip_internal/1998/linear_discrim_analysis/lda_theory.pdf </ref>
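
To illustrate the second heuristic above (adding up eigenvalues until a chosen fraction of the variance is preserved), here is a minimal Matlab sketch; the data matrix X (one point per column) and the 95% threshold are assumptions made purely for illustration.

 % Sketch: choose the number of PCs preserving 95% of the variance.
 % Assumes X is a d-by-n matrix whose columns are data points.
 Xc = X - repmat(mean(X,2), 1, size(X,2));   % centre the data
 [V, S] = eig(Xc*Xc');                       % eigen-decomposition (proportional to the covariance)
 lambda = sort(diag(S), 'descend');          % eigenvalues, largest first
 ratio  = cumsum(lambda) / sum(lambda);      % fraction of variance preserved by the top k PCs
 k = find(ratio >= 0.95, 1)                  % smallest k reaching the threshold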

Summary

The PCA algorithm can be summarized into the following steps:

  1. Recover basis
    [math]\displaystyle{ \ \text{ Calculate } XX^T=\Sigma_{i=1}^{t}x_ix_{i}^{T} \text{ and let } U=\text{ eigenvectors of } XX^T \text{ corresponding to the largest } d \text{ eigenvalues.} }[/math]
  2. Encode training data
    [math]\displaystyle{ \ \text{Let } Y=U^TX \text{, where } Y \text{ is a } d \times t \text{ matrix of encodings of the original data.} }[/math]
  3. Reconstruct training data
    [math]\displaystyle{ \hat{X}=UY=UU^TX }[/math].
  4. Encode test example
    [math]\displaystyle{ \ y = U^Tx \text{ where } y \text{ is a } d\text{-dimensional encoding of } x }[/math].
  5. Reconstruct test example
    [math]\displaystyle{ \hat{x}=Uy=UU^Tx }[/math].
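
The five steps can be written directly in Matlab. This is only a sketch, under the assumption that the centred training matrix X (one point per column), the new test point x, and the target dimension d are given.

 % Sketch of the PCA summary above.
 [V, S] = eig(X*X');                     % step 1: recover basis
 [vals, idx] = sort(diag(S), 'descend');
 U = V(:, idx(1:d));                     % eigenvectors of the d largest eigenvalues
 Y    = U'*X;                            % step 2: encode training data (d x t)
 Xhat = U*Y;                             % step 3: reconstruct training data
 y    = U'*x;                            % step 4: encode a test example
 xhat = U*y;                             % step 5: reconstruct the test example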

Dual PCA

Singular value decomposition allows us to formulate the principal components algorithm entirely in terms of dot products between data points, limiting the direct dependence on the original dimensionality d. Now assume that the dimensionality d of the d × n matrix of data X is large (i.e., d >> n). In this case, the algorithm described in the previous sections becomes impractical. We would prefer a run time that depends only on the number of training examples n, or that at least has a reduced dependence on d. Note that in the SVD factorization [math]\displaystyle{ \ X = U \Sigma V^T }[/math], the eigenvectors in [math]\displaystyle{ \ U }[/math] corresponding to non-zero singular values in [math]\displaystyle{ \ \Sigma }[/math] (square roots of eigenvalues) are in a one-to-one correspondence with the eigenvectors in [math]\displaystyle{ \ V }[/math]. After performing dimensionality reduction on [math]\displaystyle{ \ U }[/math] and keeping only the first l eigenvectors, corresponding to the top l non-zero singular values in [math]\displaystyle{ \ \Sigma }[/math], these eigenvectors will still be in a one-to-one correspondence with the first l eigenvectors in [math]\displaystyle{ \ V }[/math]:

[math]\displaystyle{ \ X V = U \Sigma }[/math]

[math]\displaystyle{ \ \Sigma }[/math] is square and invertible, because its diagonal has non-zero entries. Thus, the following conversion between the top l eigenvectors can be derived:

[math]\displaystyle{ \ U = X V \Sigma^{-1} }[/math]

Now replacing [math]\displaystyle{ \ U }[/math] with [math]\displaystyle{ \ X V \Sigma^{-1} }[/math] gives us the dual form of PCA.
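
As a rough Matlab sketch (assuming a centred d-by-n matrix X with d >> n), the dual form can be computed from the much smaller n-by-n matrix [math]\displaystyle{ \ X^T X }[/math] instead of the d-by-d matrix [math]\displaystyle{ \ X X^T }[/math]:

 % Sketch of dual PCA; X is assumed centred, with points as columns and d >> n.
 [V, S2]   = eig(X'*X);                    % n-by-n eigenproblem instead of d-by-d
 [s2, idx] = sort(diag(S2), 'descend');
 l   = sum(s2 > 1e-10);                    % keep only non-zero singular values
 V   = V(:, idx(1:l));
 Sig = diag(sqrt(s2(1:l)));                % singular values = square roots of eigenvalues
 U   = X*V/Sig;                            % U = X V Sigma^{-1}, the principal directions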

Fisher Discriminant Analysis (FDA) (Lecture: Sep. 29, 2011 - Oct. 04, 2011)

Fisher Discriminant Analysis (FDA) is sometimes called Fisher Linear Discriminant Analysis (FLDA) or just Linear Discriminant Analysis (LDA). This causes confusion with the Linear Discriminant Analysis (LDA) technique covered earlier in the course. The LDA technique covered earlier in the course has a normality assumption and is a boundary finding technique. The FDA technique outlined here is a supervised feature extraction technique. FDA differs from PCA as well because PCA does not use the class labels, [math]\displaystyle{ \ y_i }[/math], of the data [math]\displaystyle{ \ (x_i,y_i) }[/math] while FDA organizes data into their classes by finding the direction of maximum separation between classes.


PCA

- Find a rank d subspace which minimizes the squared reconstruction error:

[math]\displaystyle{ \sum_{i=1}^{n} \| x_i - \hat{x}_i \|^2 }[/math]

where [math]\displaystyle{ \hat{x}_i }[/math] is the projection of the original data point [math]\displaystyle{ x_i }[/math] onto the subspace.


One main drawback of the PCA technique is that the direction of greatest variation may not produce the classification we desire. For example, imagine if the data set above had a lighting filter applied to a random subset of the images. Then the greatest variation would be the brightness and not the more important variations we wish to classify. As another example, imagine two cigar-like clusters in 2 dimensions, one cigar having [math]\displaystyle{ y = 1 }[/math] and the other [math]\displaystyle{ y = -1 }[/math]. The cigars are positioned in parallel and very closely together, such that the variance in the total data set, ignoring the labels, is in the direction of the cigars. For classification, this would be a terrible projection, because all labels get evenly mixed and we destroy the useful information. A much more useful projection is orthogonal to the cigars, i.e. in the direction of least overall variance, which would perfectly separate the data cases (obviously, we would still need to perform classification in this 1-D space.) See the figure below <ref>www.ics.uci.edu/~welling/classnotes/papers_class/Fisher-LDA.pdf</ref>. FDA circumvents this problem by using the labels, [math]\displaystyle{ \ y_i }[/math], of the data [math]\displaystyle{ \ (x_i,y_i) }[/math], i.e. FDA uses supervised learning. The main difference between FDA and PCA is that in PCA we are interested in transforming the data to a new coordinate system such that the greatest variance of the data lies on the first coordinate, whereas in FDA we project the data of each class onto a point in such a way that the resulting points are as far apart from each other as possible. The FDA goal is achieved by projecting data onto a suitably chosen line that minimizes the within-class variance and maximizes the distance between the two classes, i.e. it groups similar data together and spreads different data apart. This way, newly acquired data can be compared, after transformation, to these projections using some well-chosen metric.

Two cigar distributions where the direction of greatest variance is not the most useful for classification

We first consider the two-class case. Denote the mean and covariance matrix of class [math]\displaystyle{ i=0,1 }[/math] by [math]\displaystyle{ \mathbf{\mu}_i }[/math] and [math]\displaystyle{ \mathbf{\Sigma}_i }[/math] respectively. We transform the data so that it is projected into 1 dimension, i.e. a scalar value. To do this, we compute the inner product of our [math]\displaystyle{ d \times 1 }[/math]-dimensional data, [math]\displaystyle{ \mathbf{x} }[/math], with a to-be-determined [math]\displaystyle{ d \times 1 }[/math]-dimensional vector [math]\displaystyle{ \mathbf{w} }[/math]. The new means and covariances of the transformed data are:

[math]\displaystyle{ \mu'_i:\rightarrow \mathbf{w}^{T}\mathbf{\mu}_i }[/math]
[math]\displaystyle{ \Sigma'_i :\rightarrow \mathbf{w}^{T}\mathbf{\Sigma}_i \mathbf{w} }[/math]

The new means and variances are actually scalar values now, but we will use vector and matrix notation and arguments throughout the following derivation as the multi-class case is then just a simpler extension.

Goals of FDA

As will be shown in the objective function, the goal of FDA is to maximize the separation of the classes (between-class variance) and minimize the scatter within each class (within-class variance). That is, our ideal situation is that the individual classes are as far away from each other as possible and, at the same time, the data within each class are as close to each other as possible (collapsed to a single point in the most extreme case). An interesting note is that R. A. Fisher, after whom FDA is named, used the FDA technique for purposes of taxonomy, in particular for categorizing different species of iris flowers. <ref name="RAFisher">R. A. Fisher, "The Use of Multiple Measurements in Taxonomic Problems," Annals of Eugenics, 1936</ref> It is very easy to visualize what is meant by within-class variance (i.e. the differences between iris flowers of the same species) and between-class variance (i.e. the differences between iris flowers of different species) in that case.

First, we need to reduce the dimensionality of the covariate to one dimension (for the two-class case) by projecting the data onto a line. That is, we take the d-dimensional input values x and project them to one dimension by using [math]\displaystyle{ z=\mathbf{w}^T \mathbf{x} }[/math], where [math]\displaystyle{ \mathbf{w}^T }[/math] is 1 by d and [math]\displaystyle{ \mathbf{x} }[/math] is d by 1.

Goal: choose the vector [math]\displaystyle{ \mathbf{w}=[w_1,w_2,w_3,...,w_d]^T }[/math] that best separates the data; we then perform classification with the projected data [math]\displaystyle{ z }[/math] instead of the original data [math]\displaystyle{ \mathbf{x} }[/math].


[math]\displaystyle{ \hat{{\mu}_0}=\frac{1}{n_0}\sum_{i:y_i=0} x_i }[/math]

[math]\displaystyle{ \hat{{\mu}_1}=\frac{1}{n_1}\sum_{i:y_i=1} x_i }[/math]

[math]\displaystyle{ \mathbf{x}\rightarrow\mathbf{w}^{T}\mathbf{x} }[/math].
[math]\displaystyle{ \mathbf{\mu}\rightarrow\mathbf{w}^{T}\mathbf{\mu} }[/math].
[math]\displaystyle{ \mathbf{\Sigma}\rightarrow\mathbf{w}^{T}\mathbf{\Sigma}\mathbf{w} }[/math]



1) Our first goal is to minimize the individual classes' covariance. This will help to collapse the data together. We have two minimization problems

[math]\displaystyle{ \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{\Sigma}_0 \mathbf{w} }[/math]

and

[math]\displaystyle{ \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{\Sigma}_1 \mathbf{w} }[/math].

But these can be combined:

[math]\displaystyle{ \min_{\mathbf{w}} \mathbf{w} ^{T}\mathbf{\Sigma}_0 \mathbf{w} + \mathbf{w}^{T} \mathbf{\Sigma}_1 \mathbf{w} }[/math]
[math]\displaystyle{ = \min_{\mathbf{w}} \mathbf{w} ^{T}( \mathbf{\Sigma_0} + \mathbf{\Sigma_1} ) \mathbf{w} }[/math]

Define [math]\displaystyle{ \mathbf{S}_W =\mathbf{\Sigma_0} + \mathbf{\Sigma_1} }[/math], called the within class variance matrix.

2) Our second goal is to move the minimized classes as far away from each other as possible. One way to accomplish this is to maximize the distances between the means of the transformed data i.e.

[math]\displaystyle{ \max_{\mathbf{w}} |\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1|^2 }[/math]

Simplifying:

[math]\displaystyle{ \max_{\mathbf{w}} \,(\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1)^T (\mathbf{w}^{T}\mathbf{\mu}_0 - \mathbf{w}^{T}\mathbf{\mu}_1) }[/math]
[math]\displaystyle{ = \max_{\mathbf{w}}\, (\mathbf{\mu}_0-\mathbf{\mu}_1)^{T}\mathbf{w} \mathbf{w}^{T} (\mathbf{\mu}_0-\mathbf{\mu}_1) }[/math]
[math]\displaystyle{ = \max_{\mathbf{w}} \,\mathbf{w}^{T}(\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T}\mathbf{w} }[/math]

Recall that [math]\displaystyle{ \mathbf{\mu}_i }[/math] are known. Denote

[math]\displaystyle{ \mathbf{S}_B = (\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} }[/math]

This matrix, called the between class variance matrix, is a rank 1 matrix, so an inverse does not exist. Altogether, we have two optimization problems we must solve simultaneously:

1) [math]\displaystyle{ \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} }[/math]
2) [math]\displaystyle{ \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} }[/math]

There are other metrics one can use to both minimize the data's variance and maximize the distance between classes, and other goals we can try to accomplish (see metric learning, below...one day), but Fisher used this elegant method, hence his recognition in the name, and we will follow his method.

We can combine the two optimization problems into one after noting that the negative of max is min:

[math]\displaystyle{ \max_{\mathbf{w}} \; \alpha \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} - \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} }[/math]

The [math]\displaystyle{ \alpha }[/math] coefficient is a necessary scaling factor: if the scale of one of the terms is much larger than the other, the optimization problem will be dominated by the larger term. This means we have another unknown, [math]\displaystyle{ \alpha }[/math], to solve for. Instead, we can circumvent the scaling problem by looking at the ratio of the quantities, the original solution Fisher proposed:

[math]\displaystyle{ \max_{\mathbf{w}} \frac{\mathbf{w}^{T} \mathbf{S_B} \mathbf{w}}{\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} }[/math]

This optimization problem can be shown<ref> http://www.socher.org/uploads/Main/optimizationTutorial01.pdf </ref> to be equivalent to the following optimization problem:

[math]\displaystyle{ \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} }[/math]

(optimized function)

subject to:

[math]\displaystyle{ {\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} = 1 }[/math]

(constraint)

A heuristic understanding of this equivalence is that [math]\displaystyle{ \mathbf{w} }[/math] has two degrees of freedom: its direction and its magnitude. The magnitude is irrelevant to our discussion, so we can fix it by setting [math]\displaystyle{ \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} }[/math] to a constant. We can use Lagrange multipliers to solve this optimization problem:

[math]\displaystyle{ L( \mathbf{w}, \lambda) = \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} - \lambda(\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}-1) }[/math]
[math]\displaystyle{ \Rightarrow \frac{\partial L}{\partial \mathbf{w}} = 2 \mathbf{S}_B \mathbf{w} - 2\lambda \mathbf{S}_W\mathbf{w} }[/math]

Setting the partial derivative to 0 gives us a generalized eigenvalue problem:

[math]\displaystyle{ \mathbf{S}_B \mathbf{w} = \lambda \mathbf{S}_W \mathbf{w} }[/math]
[math]\displaystyle{ \Rightarrow \mathbf{S}_W^{-1} \mathbf{S}_B \mathbf{w} = \lambda \mathbf{w} }[/math]

This is a generalized eigenvalue problem and [math]\displaystyle{ \ \mathbf{w} }[/math] can be computed as the eigenvector corresponding to the largest eigenvalue of

[math]\displaystyle{ \mathbf{S}_W^{-1} \mathbf{S}_B }[/math]

It is very likely that [math]\displaystyle{ \mathbf{S}_W }[/math] has an inverse. If not, the pseudo-inverse<ref> http://en.wikipedia.org/wiki/Generalized_inverse </ref><ref> http://www.mathworks.com/help/techdoc/ref/pinv.html </ref> can be used. In Matlab the pseudo-inverse function is named pinv. Thus, we should choose [math]\displaystyle{ \mathbf{w} }[/math] to be the eigenvector corresponding to the largest eigenvalue as our projection vector.

In fact we can simplify the above expression further in the case of two classes. Recall the definition of [math]\displaystyle{ \mathbf{S}_B = (\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} }[/math]. Substituting this into our expression:

[math]\displaystyle{ \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1)(\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} \mathbf{w} = \lambda \mathbf{w} }[/math]
[math]\displaystyle{ (\mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1) ) ((\mathbf{\mu}_0-\mathbf{\mu}_1)^{T} \mathbf{w}) = \lambda \mathbf{w} }[/math]

This second term is a scalar value, let's denote it [math]\displaystyle{ \beta }[/math]. Then

[math]\displaystyle{ \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1) = \frac{\lambda}{\beta} \mathbf{w} }[/math]
[math]\displaystyle{ \Rightarrow \, \mathbf{S}_W^{-1}(\mathbf{\mu}_0-\mathbf{\mu}_1) \propto \mathbf{w} }[/math]


(This equation indicates the direction of the separation.) All we are interested in is the direction of [math]\displaystyle{ \mathbf{w} }[/math], so computing this expression is sufficient for finding our projection vector. This shortcut does not carry over to more than two classes, however, since [math]\displaystyle{ \mathbf{w} }[/math] is then a matrix rather than a vector.
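
A minimal Matlab sketch of the two-class case follows. The class matrices X0 and X1 (points as columns) are assumed inputs, and pinv is used in case [math]\displaystyle{ \mathbf{S}_W }[/math] is singular.

 % Two-class FDA sketch: X0 is d-by-n0, X1 is d-by-n1 (points as columns).
 mu0 = mean(X0, 2);            mu1 = mean(X1, 2);
 Sw  = cov(X0') + cov(X1');                     % within-class variance matrix
 w   = pinv(Sw) * (mu0 - mu1);                  % direction of separation
 w   = w / norm(w);                             % only the direction matters
 z0  = w' * X0;                z1  = w' * X1;   % projected (1-D) data for each class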

Extensions to Multiclass Case

If we have [math]\displaystyle{ \ k }[/math] classes, we need [math]\displaystyle{ \ k-1 }[/math] directions, i.e. we need to project the [math]\displaystyle{ \ k }[/math] 'points' onto a [math]\displaystyle{ \ k-1 }[/math] dimensional hyperplane. What does this change in our above derivation? The most significant difference is that our projection vector, [math]\displaystyle{ \mathbf{w} }[/math], is no longer a vector but instead is a matrix [math]\displaystyle{ \mathbf{W} }[/math], where [math]\displaystyle{ \mathbf{W} }[/math] is a d × (k-1) matrix if X is d-dimensional. We transform the data as:

[math]\displaystyle{ \mathbf{x}' :\rightarrow \mathbf{W}^{T} \mathbf{x} }[/math]

so our new mean and covariances for class k are:

[math]\displaystyle{ \mathbf{\mu_k}' :\rightarrow \mathbf{W}^{T} \mathbf{\mu_k} }[/math]
[math]\displaystyle{ \mathbf{\Sigma_k}' :\rightarrow \mathbf{W}^{T} \mathbf{\Sigma_k} \mathbf{W} }[/math]

What are our new optimization sub-problems? As before, we wish to minimize the within class variance. This can be formulated as:

[math]\displaystyle{ \min_{\mathbf{W}} \mathbf{W}^{T} \mathbf{\Sigma_1} \mathbf{W} + \dots + \mathbf{W}^{T} \mathbf{\Sigma_k} \mathbf{W} }[/math]

Again, denoting [math]\displaystyle{ \mathbf{S}_W = \mathbf{\Sigma_1} + \dots + \mathbf{\Sigma_k} }[/math], we can simplify above expression:

[math]\displaystyle{ \min_{\mathbf{W}} \mathbf{W}^{T} \mathbf{S}_W \mathbf{W} }[/math]

Similarly, the second optimization problem is:

[math]\displaystyle{ \max_{\mathbf{W}} \mathbf{W}^{T} \mathbf{S}_B \mathbf{W} }[/math]

What is [math]\displaystyle{ \mathbf{S}_B }[/math] in this case? It can be shown that [math]\displaystyle{ \mathbf{S}_T = \mathbf{S}_B + \mathbf{S}_W }[/math] where [math]\displaystyle{ \mathbf{S}_T }[/math] is the covariance matrix of all the data. From this we can compute [math]\displaystyle{ \mathbf{S}_B }[/math].

Next, if we express [math]\displaystyle{ \mathbf{W} = ( \mathbf{w}_1 , \mathbf{w}_2 , \dots ,\mathbf{w}_{k-1} ) }[/math], observe that, for [math]\displaystyle{ \mathbf{A} = \mathbf{S}_B , \mathbf{S}_W }[/math]:

[math]\displaystyle{ Tr(\mathbf{W}^{T} \mathbf{A} \mathbf{W}) = \mathbf{w}_1^{T} \mathbf{A} \mathbf{w}_1 + \dots + \mathbf{w}_{k-1}^{T} \mathbf{A} \mathbf{w}_{k-1} }[/math]

where [math]\displaystyle{ \ Tr() }[/math] is the trace of a matrix. Thus, following the same steps as in the two-class case, we have the new optimization problem:

[math]\displaystyle{ \max_{\mathbf{W}} \frac{ Tr(\mathbf{W}^{T} \mathbf{S}_B \mathbf{W}) }{Tr(\mathbf{W}^{T} \mathbf{S}_W \mathbf{W})} }[/math]

As in the two-class case, this is equivalent to maximizing

[math]\displaystyle{ Tr(\mathbf{W}^{T} \mathbf{S}_B \mathbf{W}) }[/math]

subject to:

[math]\displaystyle{ Tr( \mathbf{W}^{T} \mathbf{S_W} \mathbf{W}) = 1 }[/math]

The first (k-1) eigenvectors of [math]\displaystyle{ \mathbf{S}_W^{-1} \mathbf{S}_B }[/math] give the required (k-1) directions; this is why, for the k-class problem, we project the data onto k-1 directions.

Again, in order to solve the above optimization problem, we can use the Lagrange multiplier <ref> http://en.wikipedia.org/wiki/Lagrange_multiplier </ref>:

[math]\displaystyle{ \begin{align}L(\mathbf{W},\Lambda) = Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}] - \Lambda\left\{ Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}] - 1 \right\}\end{align} }[/math].

where [math]\displaystyle{ \ \Lambda }[/math] is a d by d diagonal matrix.

Then we differentiate with respect to [math]\displaystyle{ \mathbf{W} }[/math]:

[math]\displaystyle{ \begin{align}\frac{\partial L}{\partial \mathbf{W}} = (\mathbf{S}_{B} + \mathbf{S}_{B}^{T})\mathbf{W} - \Lambda (\mathbf{S}_{W} + \mathbf{S}_{W}^{T})\mathbf{W}\end{align} = 0 }[/math].

Thus:

[math]\displaystyle{ \begin{align}\mathbf{S}_{B}\mathbf{W} = \Lambda\mathbf{S}_{W}\mathbf{W}\end{align} }[/math]
[math]\displaystyle{ \begin{align}\mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{W} = \Lambda\mathbf{W}\end{align} }[/math]

where, [math]\displaystyle{ \mathbf{\Lambda} =\begin{pmatrix}\lambda_{1} & & 0\\&\ddots&\\0 & &\lambda_{d}\end{pmatrix} }[/math]

The above equation has the form of an eigenvalue problem. Thus, the k-1 eigenvectors corresponding to the k-1 largest eigenvalues should be chosen as the columns of the projection matrix, [math]\displaystyle{ \mathbf{W} }[/math]. In fact, there should only be k-1 eigenvectors corresponding to k-1 non-zero eigenvalues using the above equation.
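
As a rough Matlab sketch of the recipe above (the data matrix X, with points as columns, and the label vector y are assumed inputs), the multi-class projection can be obtained from the eigenvectors of [math]\displaystyle{ \mathbf{S}_W^{-1} \mathbf{S}_B }[/math]:

 % Multi-class FDA sketch: X is d-by-n (points as columns), y is an n-by-1 label vector.
 classes = unique(y);   k = numel(classes);   d = size(X,1);
 St = cov(X');                                  % total covariance of all the data
 Sw = zeros(d);
 for c = 1:k
     Sw = Sw + cov(X(:, y == classes(c))');     % sum of the class covariances
 end
 Sb = St - Sw;                                  % between-class variance
 [V, D] = eig(pinv(Sw)*Sb);
 [vals, idx] = sort(real(diag(D)), 'descend');
 W = V(:, idx(1:k-1));                          % the k-1 projection directions
 Z = W' * X;                                    % (k-1)-dimensional projected data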

Summary

FDA has two optimization problems:

1) [math]\displaystyle{ \min_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} }[/math]
2) [math]\displaystyle{ \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} }[/math]

where [math]\displaystyle{ \mathbf{S}_W = \mathbf{\Sigma_1} + \dots + \mathbf{\Sigma_k} }[/math] is called the within class variance and [math]\displaystyle{ \ \mathbf{S}_B = \mathbf{S}_T - \mathbf{S}_W }[/math] is called the between class variance where [math]\displaystyle{ \mathbf{S}_T }[/math] is the variance of all the data together.

Every column of [math]\displaystyle{ \mathbf{w} }[/math] is parallel to a single eigenvector.

The two optimization problems are combined as follows:

[math]\displaystyle{ \max_{\mathbf{w}} \frac{\mathbf{w}^{T} \mathbf{S_B} \mathbf{w}}{\mathbf{w}^{T} \mathbf{S_W} \mathbf{w}} }[/math]

By adding a constraint as shown:

[math]\displaystyle{ \max_{\mathbf{w}} \mathbf{w}^{T} \mathbf{S_B} \mathbf{w} }[/math]

subject to:

[math]\displaystyle{ \mathbf{w}^{T} \mathbf{S_W} \mathbf{w} = 1 }[/math]

Lagrange multipliers can be used and essentially the problem becomes an eigenvalue problem:

[math]\displaystyle{ \begin{align}\mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{w} = \lambda\mathbf{w}\end{align} }[/math]

And [math]\displaystyle{ \ w }[/math] can be computed as the k-1 eigenvectors corresponding to the largest k-1 eigenvalues of [math]\displaystyle{ \mathbf{S}_W^{-1} \mathbf{S}_B }[/math].

Variations

Some adaptations and extensions exist for the FDA technique (Source: <ref>R. Gutierrez-Osuna, "Linear Discriminant Analysis" class notes for Intro to Pattern Analysis, Texas A&M University. Available: [2]</ref>):

1) Non-Parametric LDA (NPLDA) by Fukunaga

This method does not assume that the Gaussian distribution is unimodal and it is actually possible to extract more than k-1 features (where k is the number of classes).

2) Orthonormal LDA (OLDA) by Okada and Tomita

This method finds projections that are orthonormal in addition to maximizing the FDA objective function. This method can also extract more than k-1 features (where k is the number of classes).

3) Generalized LDA (GLDA) by Lowe

This method incorporates additional cost functions into the FDA objective function. This causes classes with a higher cost to be placed further apart in the lower dimensional representation.

Optical Character Recognition (OCR) using FDA

Optical Character Recognition (OCR) is a method to translate scanned, human-readable text into machine-encoded text. In class, we have employed FDA to recognize digits. A paper <ref>Manjunath Aradhya, V.N., Kumar, G.H., Noushath, S., Shivakumara, P., "Fisher Linear Discriminant Analysis based Technique Useful for Efficient Character Recognition", Intelligent Sensing and Information Processing, 2006.</ref> describes the use of FDA to recognize printed documents written in English and Kannada, the fifth most popular language in India. The researchers conducted two types of experiments: one on printed Kannada and English documents and another on handwritten English characters. In the first type, they conducted four experiments: i) clear and degraded characters in specific fonts; ii) characters in various sizes; iii) characters in various fonts; iv) characters with noise. In experiment i, FDA achieved a 98.2% recognition rate with 12 projection vectors on 21,560 samples. In experiment ii, it achieved a 96.9% recognition rate with 10 projection vectors on 11,200 samples. In experiment iii, it achieved a 93% recognition rate with 17 projection vectors on 19,850 samples. In experiment iv, it achieved a 96.3% recognition rate with 14 projection vectors on 20,000 samples. Overall, the recognition by FDA was very satisfactory. In the second type of experiment, a total of 12,400 handwriting samples from 200 different writers were collected. With 175 samples used for training, the recognition rate of FDA was 92% with 35 projection vectors.

Facial Recognition using FDA

The Fisherfaces method of facial recognition uses PCA and FDA in a similar way to using just PCA. However, it is more advantageous than using PCA alone because it minimizes variation within each class and maximizes class separation. The PCA-only method is, therefore, more sensitive to lighting and pose variations. In studies done by Belhumeur, Hespanha, and Kriegman (1997) and Turk and Pentland (1991), this method had a 96% recognition rate. <ref>Bagherian, Elham. Rahmat, Rahmita. Facial Feature Extraction for Face Recognition: a Review. International Symposium on Information Technology, 2008. ITSim2 article number 4631649.</ref>

Linear and Logistic Regression (Lecture: Oct. 06, 2011)

Linear Regression

Both regression and classification aim to find a function h which maps the data X to a response Y. In regression, [math]\displaystyle{ \ y }[/math] is a continuous variable, while in classification [math]\displaystyle{ \ y }[/math] is a discrete variable. In linear regression, the data are modeled using a linear function, and the unknown parameters are estimated from the data. Regression problems are easier to formulate into functions (since [math]\displaystyle{ \ y }[/math] is continuous), and it is possible to solve classification problems by treating them like regression problems. In order to do so, the requirement in classification that [math]\displaystyle{ \ y }[/math] be discrete must first be relaxed. Once [math]\displaystyle{ \ y }[/math] has been found using regression techniques, it is possible to determine the discrete class corresponding to the [math]\displaystyle{ \ y }[/math] that has been found, thereby solving the original classification problem. The discrete class is obtained by defining a threshold, where [math]\displaystyle{ \ y }[/math] values below the threshold belong to one class and [math]\displaystyle{ \ y }[/math] values above the threshold belong to another class.

When running a regression we are making two assumptions,

  1. A linear relationship exists between two variables (i.e. X and Y)
  2. This relationship is additive (i.e. [math]\displaystyle{ Y= f_1(x_1) + f_2(x_2) + …+ f_n(x_n) }[/math]). Technically, linear regression estimates how much Y changes when X changes one unit.


More formally: a more direct approach to classification is to estimate the regression function [math]\displaystyle{ \ r(\mathbf{x}) = E[Y | X] }[/math] without bothering to estimate [math]\displaystyle{ \ f_k(\mathbf{x}) }[/math]. For the linear model, we assume that either the regression function [math]\displaystyle{ r(\mathbf{x}) }[/math] is linear, or the linear model has a reasonable approximation.

Here is a simple example. If [math]\displaystyle{ \ Y = \{0,1\} }[/math] (a two-class problem), then [math]\displaystyle{ \, h^*(\mathbf{x})= \left\{\begin{matrix} 1 &\text{, if } \hat r(\mathbf{x})\gt \frac{1}{2} \\ 0 &\mathrm{, otherwise} \end{matrix}\right. }[/math]

Basically, we can use a linear function [math]\displaystyle{ \ f(\mathbf{x_i}, \beta) = \mathbf{\beta\,}^T \mathbf{x_{i}} + \mathbf{\beta\,_0} }[/math], [math]\displaystyle{ \mathbf{x_{i}} \in \mathbb{R}^{d} }[/math], and use the least squares approach to fit the function to the given data. This is done by minimizing the following expression:

[math]\displaystyle{ \min_{\mathbf{\beta}} \sum_{i=1}^n (y_i - \mathbf{\beta}^T \mathbf{x_{i}} - \mathbf{\beta_0})^2 }[/math]

For convenience, [math]\displaystyle{ \mathbf{\beta} }[/math] and [math]\displaystyle{ \mathbf{\beta}_0 }[/math] can be combined into a d+1 dimensional vector, [math]\displaystyle{ \tilde{\mathbf{\beta}} }[/math]. The term 1 is appended to [math]\displaystyle{ \ x }[/math]. Thus, the function to be minimized can now be re-expressed as:

[math]\displaystyle{ \ LS = \min_{\tilde{\beta}} \sum_{i=1}^{n} (y_i - \tilde{\beta}^T \tilde{x_i} )^2 }[/math]

[math]\displaystyle{ \ LS = \min_{\tilde{\beta}} || y - X \tilde{\beta} ||^2 }[/math]

where

[math]\displaystyle{ \tilde{\mathbf{\beta}} = \left( \begin{array}{c}\mathbf{\beta_{1}} \\ \\ \vdots \\ \\ \mathbf{\beta}_{d} \\ \\ \mathbf{\beta}_{0} \end{array} \right) \in \mathbb{R}^{d+1} }[/math] and

[math]\displaystyle{ \tilde{x} = \left( \begin{array}{c}{x_{1}} \\ \\ \vdots \\ \\ {x}_{d} \\ \\ 1 \end{array} \right) \in \mathbb{R}^{d+1} }[/math].

where [math]\displaystyle{ \tilde{\mathbf{\beta}} }[/math] is a (d+1) by 1 matrix (i.e. a (d+1)-dimensional vector).

Here [math]\displaystyle{ \ y }[/math] and [math]\displaystyle{ \tilde{\beta} }[/math] are vectors and [math]\displaystyle{ \ X }[/math] is an n by (d+1) matrix where each row represents a data point with a 1 as the last entry. X can also be seen as a matrix in which each column represents a feature and the [math]\displaystyle{ \ (d+1)^{th} }[/math] column is an all-ones vector corresponding to [math]\displaystyle{ \ \beta_0 }[/math].

The [math]\displaystyle{ \ {\tilde{\beta}} }[/math] that minimizes the error satisfies:

[math]\displaystyle{ \ \frac{\partial LS}{\partial \tilde{\beta}} = -2X^T(y-X\tilde{\beta})=0 }[/math], which gives us [math]\displaystyle{ \ {\tilde{\beta}} = (X^TX)^{-1}X^Ty }[/math]. When [math]\displaystyle{ \ X^TX }[/math] is singular, we have to use the pseudo-inverse to obtain the optimal [math]\displaystyle{ \ \tilde{\beta} }[/math].

Using regression to solve classification problems is not mathematically correct, if we want to be true to classification. However, this method works well in practice, if the problem is not complicated. When we have only two classes (for which the target values are encoded as [math]\displaystyle{ \ \frac{-n}{n_1} }[/math] and [math]\displaystyle{ \ \frac{n}{n_2} }[/math], where [math]\displaystyle{ \ n_i }[/math] is the number of data points in class i and n is the total number of points in the data set), this method is identical to LDA.

Matlab Example

The following is the code and the explanation for each step.

Again, we use the data in 2_3.m.

 >>load 2_3;
 >>[U, sample] = princomp(X');
 >>sample = sample(:,1:2);

We carry out Principal Component Analysis (PCA) to reduce the dimensionality from 64 to 2.

 >>y = zeros(400,1);
 >>y(201:400) = 1;

We let y represent the set of labels coded as 0 and 1.

 >>x=[sample';ones(1,400)];

Construct x by adding a row of ones to the transposed scores, so that each column of x is a data point with a constant 1 appended.

 >>b=inv(x*x')*x*y;

Calculate b, which represents [math]\displaystyle{ \beta }[/math] in the linear regression model.

 >>x1=x';
 >>for i=1:400
   if x1(i,:)*b>0.5
        plot(x1(i,1),x1(i,2),'.')
        hold on
   elseif x1(i,:)*b < 0.5
       plot(x1(i,1),x1(i,2),'r.')
   end 
 end

Plot the data points, coloured by the class predicted by the fitted linear model (threshold at 0.5).

File:linearregression.png
The figure shows the classification of the data points in 2_3.m by the linear regression model.

Practical Usefulness

Linear regression in general is not very useful for classification purposes. One of the main problems is that new data may not always have a positive ("more successful") impact on the linear regression learning algorithm due to the non-linear "binary" form of the classes. Consider the following simple example:

File:linreg1.jpg

The decision boundary at [math]\displaystyle{ r(x)=0.5 }[/math] was added for visualization purposes. Clearly, linear regression classifies this data properly. However, consider adding one more datum:

File:linreg2.jpg

This datum actually skews the linear regression fit to the point that it misclassifies some of the data points that should be labelled '1'. This shows how linear regression cannot adapt well to binary classification problems.

General Guidelines for Building a Regression Model

  1. Make sure all relevant predictors are included. These are based on your research question, theory and knowledge on the topic.
  2. Combine those predictors that tend to measure the same thing (i.e. as an index).
  3. Consider the possibility of adding interactions (mainly for those variables with large effects)
  4. Strategy to keep or drop variables:
    1. Predictor not significant and has the expected sign -> Keep it
    2. Predictor not significant and does not have the expected sign -> Drop it
    3. Predictor is significant and has the expected sign -> Keep it
    4. Predictor is significant but does not have the expected sign -> Review, you may need more variables, it may be interacting with another variable in the model or there may be an error in the data.<ref>http://dss.princeton.edu/training/Regression101.pdf</ref>

Logistic Regression

Logistic regression is a more advanced method for classification, and is more commonly used. In statistics, logistic regression (sometimes called the logistic model or logit model) is used to predict the probability of occurrence of an event by fitting the data to a logistic curve. It is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical. For example, the probability that a person has a heart attack within a specified time period might be predicted from knowledge of the person's age, sex and body mass index. Logistic regression is used extensively in the medical and social sciences, as well as in marketing applications such as prediction of a customer's propensity to purchase a product or cease a subscription.<ref>http://en.wikipedia.org/wiki/Logistic_regression</ref>

We can define a function
[math]\displaystyle{ f_1(x)= P(Y=1| X=x) = (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}}) }[/math]



This is a valid conditional density function since the two components ([math]\displaystyle{ f_1 }[/math] and [math]\displaystyle{ f_2 }[/math], shown just below) sum to 1 and remain in [0, 1].

It looks similar to a step function, but we have relaxed it so that we have a smooth curve, and can therefore take the derivative.

The range of this function is (0,1) since

[math]\displaystyle{ \lim_{x \to -\infty}f_1(\mathbf{x}) = 0 }[/math] and [math]\displaystyle{ \lim_{x \to \infty}f_1(\mathbf{x}) = 1 }[/math].

As shown on this graph of [math]\displaystyle{ \ P(Y=1 | X=x) }[/math].

Then we compute the complement of f1(x), and get

[math]\displaystyle{ f_2(x)= P(Y=0| X=x) = 1-f_1(x) = (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}}) }[/math].



The function [math]\displaystyle{ f_2 }[/math] is also a logistic curve (in the variable [math]\displaystyle{ -\mathbf{\beta\,}^T \mathbf{x} }[/math]), and it behaves like
[math]\displaystyle{ \lim_{x \to -\infty}f_2(\mathbf{x}) = 1 }[/math] and
[math]\displaystyle{ \lim_{x \to \infty}f_2(\mathbf{x}) = 0 }[/math].

As shown on this graph of [math]\displaystyle{ \ P(Y=0 | X=x) }[/math].

Since [math]\displaystyle{ f_1 }[/math] and [math]\displaystyle{ f_2 }[/math] specify the conditional distribution, the Bernoulli distribution is appropriate for specifying the likelihood of the class. If we conveniently code the two classes via 0 and 1 responses, the likelihood of [math]\displaystyle{ y_i }[/math] for a given input [math]\displaystyle{ x_i }[/math] is given by:

[math]\displaystyle{ f(y_i|\mathbf{x_i}) = (f_1(\mathbf{x_i}))^{y_i} (1-f_1(\mathbf{x_i}))^{1-y_i} = (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i} }[/math]

Thus y takes value 1 with success probability [math]\displaystyle{ f_1 }[/math] and value 0 with failure probability [math]\displaystyle{ 1 - f_1 }[/math]. We can use this to derive the likelihood for N training observations, and search for the maximizing parameter [math]\displaystyle{ \beta }[/math].

In general, we can think of the problem as having a box with some knobs. Inside the box is our objective function which gives the form to classify our input ([math]\displaystyle{ x_i }[/math]) to our output ([math]\displaystyle{ y_i }[/math]). The knobs in the box are functioning like the parameters of the objective function. Our job is to find the proper parameters that can minimize the error between our output and the true value. So we have turned our machine learning problem into an optimization problem.

Since we need to find the parameters that maximize the chance of having our observed data coming from the distribution of [math]\displaystyle{ f (x|\theta) }[/math], we need to introduce Maximum Likelihood Estimation.

Maximum Likelihood Estimation

Suppose we are given iid data points [math]\displaystyle{ ({\mathbf{x}_i})_{i=1}^n }[/math] and a density function [math]\displaystyle{ f(\mathbf{x}|\mathbf{\theta}) }[/math], where the form of f is known but the parameters [math]\displaystyle{ \theta }[/math] are unknown. The maximum likelihood estimate [math]\displaystyle{ \theta\,_{ML} }[/math] is the set of parameters that maximizes the probability of observing [math]\displaystyle{ ({\mathbf{x}_i})_{i=1}^n }[/math] given [math]\displaystyle{ \theta\,_{ML} }[/math]. For example, we may know that the data come from a Gaussian distribution but not know the mean and variance of the distribution.

[math]\displaystyle{ \theta_\mathrm{ML} = \underset{\theta}{\operatorname{arg\,max}}\ f(\mathbf{x}|\theta) }[/math].

There was some discussion in class regarding the notation. In the literature, Bayesians write [math]\displaystyle{ f(\mathbf{x}|\mu) }[/math], the probability of x given [math]\displaystyle{ \mu }[/math], while Frequentists write [math]\displaystyle{ f(\mathbf{x};\mu) }[/math], the density of x parameterized by [math]\displaystyle{ \mu }[/math]. In practice, the two notations refer to the same function.

Our goal is to find theta to maximize [math]\displaystyle{ \mathcal{L}(\theta\,) = f(\underline{\mathbf{x}}|\;\theta) = \prod_{i=1}^n f(\mathbf{x_i}|\theta) }[/math]. where [math]\displaystyle{ \underline{\mathbf{x}}=\{x_i\}_{i=1}^{n} }[/math] (The second equality holds because data points are iid.)

In many cases, it's more convenient to work with the natural logarithm of the likelihood. (Recall that the logarithm is monotonically increasing, so it preserves the location of minima and maxima.) [math]\displaystyle{ \ell(\theta)=\ln\mathcal{L}(\theta\,) }[/math]

[math]\displaystyle{ \ell(\theta\,)=\sum_{i=1}^n \ln f(\mathbf{x_i}|\theta) }[/math]
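
For instance, for the Gaussian example mentioned above, the log-likelihood can be maximized in closed form: the ML estimates are the sample mean and the (biased) sample variance. A small Matlab illustration with simulated data is given below; the sample size and true parameters are arbitrary choices.

 % Maximum likelihood estimates for a univariate Gaussian.
 x = randn(1000,1)*2 + 5;                % simulated data (true mu = 5, sigma = 2)
 mu_ML     = mean(x);                    % argmax of the log-likelihood w.r.t. mu
 sigma2_ML = mean((x - mu_ML).^2);       % argmax w.r.t. sigma^2 (divides by n, not n-1)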

Applying Maximum Likelihood Estimation to [math]\displaystyle{ f(y|\mathbf{x})= (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})^{y} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}})^{1-y} }[/math], gives

[math]\displaystyle{ \mathcal{L}(\mathbf{\beta\,})=\prod_{i=1}^n (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i} (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i} }[/math]

[math]\displaystyle{ \ell(\mathbf{\beta\,}) = \sum_{i=1}^n \left[ y_i \ln(P(Y=y_i|X=x_i)) + (1-y_i) \ln(1-P(Y=y_i|X=x_i))\right] }[/math]

This is the likelihood function we want to maximize. Note that [math]\displaystyle{ -\ell(\mathbf{\beta\,}) }[/math] can be interpreted as the cost function we want to minimize. Simplifying, we get:

[math]\displaystyle{ \begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) + (1-y_i) (\ln{1} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}))\right) \\[10pt]&{} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) - (1-y_i) \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \\[10pt] &{} = \sum_{i=1}^n \left(y_i ({\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})) - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) + y_i \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \\[10pt] &{} = \sum_{i=1}^n \left(y_i {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align} }[/math]

[math]\displaystyle{ \begin{align} {\frac{\partial \ell}{\partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}} \mathbf{x_i} \right) \\[8pt] & {}= \sum_{i=1}^n \left(y_i \mathbf{x_i} - P(\mathbf{x_i} | \mathbf{\beta\,}) \mathbf{x_i}\right) \end{align} }[/math]

Now set [math]\displaystyle{ \frac{\partial \ell}{\partial \mathbf{\beta\,}} }[/math] equal to 0, and [math]\displaystyle{ \mathbf{\beta\,} }[/math] can be numerically solved by Newton's method.

Newton's Method

Newton's Method (or the Newton-Raphson method) is a numerical method for finding successively better approximations to the roots of a real-valued function, used when the roots cannot conveniently be found analytically.

The goal is to find [math]\displaystyle{ \mathbf{x} }[/math] such that [math]\displaystyle{ f(\mathbf{x}) = 0 }[/math]; such values of x are called the roots of the function f. Iteration can be used to solve for x using the following update: [math]\displaystyle{ \mathbf{x_n} = \mathbf{x_{n-1}} - \frac{f(\mathbf{x_{n-1}})}{f'(\mathbf{x_{n-1}})}.\,\! }[/math]

The method takes an initial guess [math]\displaystyle{ \mathbf{x_0} }[/math] and moves by the step [math]\displaystyle{ \ \frac{f(x_{n-1})}{f'(x_{n-1})} }[/math] toward a better approximation, producing a newer and better [math]\displaystyle{ \mathbf{x_n} }[/math]. Iterating from the original guess converges to a value that is sufficiently close to the actual root. Note that each starting point converges to only one root, so a function may require multiple initial guesses to find all of its roots.

Matlab Example

Below is the Matlab code to find a root of the function [math]\displaystyle{ \,y=x^2-2500 }[/math] from the initial guess of [math]\displaystyle{ \,x=90 }[/math]. The roots of this equation are trivially solved analytically to be [math]\displaystyle{ \,x=\pm 50 }[/math].

x=1:100;
y=x.^2 - 2500;  %function to find root of
plot(x,y);

x_opt=90;  %starting guess
x_traversed=[];
y_traversed=[];
error=[];

for i=1:6,
   y_opt=x_opt^2-2500;
   y_prime_opt=2*x_opt;
   
   %save results of each iteration
   x_traversed=[x_traversed x_opt];
   y_traversed=[y_traversed y_opt];
   error=[error abs(y_opt)];
   
   %update minimum
   x_opt=x_opt-(y_opt/y_prime_opt);
end

hold on;
plot(x_traversed,y_traversed,'r','LineWidth',2);
title('Progressions Towards Root of y=x^2 - 2500');
legend('y=x^2 - 2500','Progression');
xlabel('x');
ylabel('y');

hold off;
figure();
semilogy(1:6,error);
title('Error vs Iteration');
xlabel('Iteration');
ylabel('Absolute Y Error');

In this example Newton's method converges to a root to within machine precision in only 6 iterations, as can be seen from the plot of the absolute error below.

File:newton error.png File:newton progression.png

Advantages/Limitation of Linear Regression

  • Linear regression implements a statistical model that shows optimal results when the relationships between the independent variables and the dependent variable are almost linear.
  • Linear regression is often inappropriately used to model non-linear relationships.
  • Linear regression is limited to predicting numeric output.
  • A lack of explanation about what has been learned can be a problem.



Advantages of Logistic Regression

Logistic regression has several advantages over discriminant analysis:

  • It is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group.
  • It does not assume a linear relationship between the IV and DV.
  • It may handle nonlinear effects.
  • You can add explicit interaction and power terms.
  • The DV need not be normally distributed.
  • There is no homogeneity of variance assumption.
  • Normally distributed error terms are not assumed.
  • It does not require that the independent variables be interval.
  • It does not require that the independent variables be unbounded.

Comparison Between Logistic Regression And Linear Regression

Linear regression is a regression where the explanatory variable X and response variable Y are linearly related. Both X and Y can be continuous variables, and for every one unit increase in the explanatory variable, there is a set increase or decrease in the response variable Y. A closed form solution exists for the least squares estimate of [math]\displaystyle{ \beta }[/math].

Logistic regression is a regression in which the explanatory variable X and the response variable Y are not linearly related; the model instead describes the probability of occurrence of an event. X can be continuous but Y must be a categorical variable (e.g., it can only assume two values, i.e. 0 or 1). For every one-unit increase in the explanatory variable, there is a set increase or decrease in the log-odds of occurrence of the event. No closed-form solution exists for the estimate of [math]\displaystyle{ \beta }[/math], which must be found iteratively.


In terms of assumptions on the data set: in LDA, we assume that the probability density function (PDF) of each class is Gaussian and that the priors are Bernoulli. In Logistic Regression, we only assume a parametric form for the posterior [math]\displaystyle{ P(Y|X) }[/math] and ignore the class densities and priors. Therefore, we may conclude that Logistic Regression makes fewer assumptions than LDA.

Newton-Raphson Method (Lecture: Oct 11, 2011)

Previously we derived the log-likelihood function for the logistic model.

[math]\displaystyle{ \begin{align} L(\beta\,) = \prod_{i=1}^n \left( (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{y_i}(\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})^{1-y_i} \right) \end{align} }[/math]

After taking log, we can have:

[math]\displaystyle{ \begin{align} \ell(\beta\,) = \sum_{i=1}^n \left( y_i \ln{\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}} + (1 - y_i) \ln{\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}} \right) \end{align} }[/math]

This implies that:

[math]\displaystyle{ \begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i \left( {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln(1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}) \right) - (1 - y_i)\ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align} }[/math]

[math]\displaystyle{ \begin{align} {\ell(\mathbf{\beta\,})} & {} = \sum_{i=1}^n \left(y_i {\mathbf{\beta\,}^T \mathbf{x_i}} - \ln({1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})\right) \end{align} }[/math]

Our goal is to find the [math]\displaystyle{ \beta\, }[/math] that maximizes [math]\displaystyle{ {\ell(\mathbf{\beta\,})} }[/math]. We use calculus to do this, i.e. we solve [math]\displaystyle{ {\frac{\partial \ell}{\partial \mathbf{\beta\,}}}=0 }[/math]. To do this we use the well-known numerical method of Newton-Raphson. This is an iterative method where we calculate the first and second derivative at each iteration.

Newton's Method

Here is how we usually implement Newton's Method: [math]\displaystyle{ \mathbf{x_{n+1}} = \mathbf{x_n} - \frac{f(\mathbf{x_n})}{f'(\mathbf{x_n})}.\,\! }[/math]. In our particular case, we look for x such that [math]\displaystyle{ f'(x) = 0 }[/math], and implement it by [math]\displaystyle{ \mathbf{x_{n+1}} = \mathbf{x_n} - \frac{f'(\mathbf{x_n})}{f''(\mathbf{x_n})}.\,\! }[/math].
In practice, the convergence speed depends on |F'(x*)|, where F(x) = [math]\displaystyle{ \mathbf{x} - \frac{f(\mathbf{x})}{f'(\mathbf{x})}.\,\! }[/math]. The smaller the |F'(x*)| is, the faster the convergence is.


The first derivative is typically called the score vector.

[math]\displaystyle{ \begin{align} S(\beta\,) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}} \mathbf{x_i} \right) \\[8pt] \end{align} }[/math]

[math]\displaystyle{ \begin{align} S(\beta\,) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \sum_{i=1}^n \left(y_i \mathbf{x_i} - P(x_i|\beta) \mathbf{x_i} \right) \\[8pt] \end{align} }[/math]

where [math]\displaystyle{ \ P(x_i|\beta) = \frac{e^{\beta^T x_i}}{1+e^{\beta^T x_i}} }[/math]

The negative of the second derivative is typically called the information matrix.

[math]\displaystyle{ \begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})(1 - \frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) \right) \\[8pt] \end{align} }[/math]

[math]\displaystyle{ \begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (\frac{e^{\mathbf{\beta\,}^T \mathbf{x_i}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}})(\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x_i}}}) \right) \\[8pt] \end{align} }[/math]

[math]\displaystyle{ \begin{align} I(\beta\,) {}= -{\frac{\partial^2 \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \sum_{i=1}^n \left(\mathbf{x_i}\mathbf{x_i}^T (P(x_i|\beta))(1 - P(x_i|\beta)) \right) \\[8pt] \end{align} }[/math]

again where [math]\displaystyle{ \ P(x_i|\beta) = \frac{e^{\beta^T x_i}}{1+e^{\beta^T x_i}} }[/math]

[math]\displaystyle{ \, \beta\,^{new} \leftarrow \beta\,^{old}-\frac {f(\beta\,^{old})}{f'(\beta\,^{old})} }[/math]

We then use the following update formula to calculate successively better estimates of the optimal [math]\displaystyle{ \beta\, }[/math]. It is typically not important what you use as your initial estimate [math]\displaystyle{ \beta\,^{(1)} }[/math]. (However, some improper choices of beta will cause I to be a singular matrix.)

[math]\displaystyle{ \beta\,^{(r+1)} {}= \beta\,^{(r)} + (I(\beta\,^{(r)}))^{-1} S(\beta\,^{(r)} ) }[/math]

Matrix Notation

Let [math]\displaystyle{ \mathbf{y} }[/math] be a (n x 1) vector of all class labels. This is called the response in other contexts.

Let [math]\displaystyle{ \mathbb{X} }[/math] be a (n x (d+1)) matrix of all your features. Each row represents a data point. Each column represents a feature/covariate.

Let [math]\displaystyle{ \mathbf{p}^{(r)} }[/math] be a (n x 1) vector with values [math]\displaystyle{ P(\mathbf{x_i} |\beta\,^{(r)} ) }[/math]

Let [math]\displaystyle{ \mathbb{W}^{(r)} }[/math] be a (n x n) diagonal matrix with [math]\displaystyle{ \mathbb{W}_{ii}^{(r)} {}= P(\mathbf{x_i} |\beta\,^{(r)} )(1 - P(\mathbf{x_i} |\beta\,^{(r)} )) }[/math]

The score vector, information matrix and update equation can be rewritten in terms of this new matrix notation, so the first derivative is

[math]\displaystyle{ \begin{align} S(\beta\,^{(r)}) {}= {\frac{\partial \ell}{ \partial \mathbf{\beta\,}}}&{} = \mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)})\end{align} }[/math]

And the second derivative is

[math]\displaystyle{ \begin{align} I(\beta\,^{(r)}) {}= -{\frac{\partial^{2} \ell}{\partial \mathbf {\beta\,} \partial \mathbf{\beta\,}^T}}&{} = \mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X} \end{align} }[/math]

Therefore, we can fit the regression problem as follows:

[math]\displaystyle{ \beta\,^{(r+1)} {}= \beta\,^{(r)} + (I(\beta\,^{(r)}))^{-1}S(\beta\,^{(r)} ) {} }[/math]

[math]\displaystyle{ \beta\,^{(r+1)} {}= \beta\,^{(r)} + (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}\mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)}) }[/math]

Iteratively Re-weighted Least Squares

If we reorganize this updating formula we can see it is really iteratively solving a least squares problem each time with a new weighting.

[math]\displaystyle{ \beta\,^{(r+1)} {}= (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}(\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X}\beta\,^{(r)} + \mathbb{X}^T(\mathbf{y} - \mathbf{p}^{(r)})) }[/math]

[math]\displaystyle{ \beta\,^{(r+1)} {}= (\mathbb{X}^T\mathbb{W}^{(r)}\mathbb{X})^{-1}\mathbb{X}^T\mathbb{W}^{(r)}\mathbf{z}^{(r)} }[/math]

where [math]\displaystyle{ \mathbf{z}^{(r)} = \mathbb{X}\beta\,^{(r)} + (\mathbb{W}^{(r)})^{-1}(\mathbf{y}-\mathbf{p}^{(r)}) }[/math]


Recall that linear regression by least squares finds the following minimum: [math]\displaystyle{ \ \min_{\beta}(y-X \beta)^T(y-X \beta) }[/math]

Similarly, we can say that [math]\displaystyle{ \ \beta^{(r+1)} }[/math] is the solution of a weighted least squares problem in the new space of [math]\displaystyle{ \ \mathbf{z} }[/math] (compare the equation for [math]\displaystyle{ \ \beta^{(r+1)} }[/math] with the ordinary least squares solution [math]\displaystyle{ \ {\tilde{\beta}} = (X^TX)^{-1}X^Ty }[/math]):

[math]\displaystyle{ \beta^{(r+1)} \leftarrow arg \min_{\beta}(\mathbf{z}-X \beta)^T W (\mathbf{z}-X \beta) }[/math]
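
A minimal Matlab sketch of this iteratively re-weighted least squares procedure is given below. The design matrix X (n by d+1, with a column of ones appended) and the 0/1 label vector y are assumed inputs, and the iteration cap and tolerance are arbitrary choices.

 % Logistic regression fitted by IRLS / Newton-Raphson (sketch).
 beta = zeros(size(X,2), 1);                   % initial estimate
 for r = 1:25
     p = 1 ./ (1 + exp(-X*beta));              % P(Y=1 | x_i, beta) for every point
     W = diag(p .* (1 - p));                   % diagonal weight matrix
     z = X*beta + (y - p) ./ (p .* (1 - p));   % adjusted response z^(r)
     beta_new = (X'*W*X) \ (X'*W*z);           % weighted least squares step
     if norm(beta_new - beta) < 1e-8
         beta = beta_new;  break;
     end
     beta = beta_new;
 end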

Fisher Scoring Method

Fisher Scoring is a method very similar to Newton-Raphson. It uses the expected information matrix rather than the observed information matrix. This distinction simplifies the problem and, in particular, the computational complexity. To learn more about this method and logistic regression in general, you can take Stat 431/831 at the University of Waterloo.

Multi-class Logistic Regression

In multi-class logistic regression we have K classes. For two classes l and K,

[math]\displaystyle{ \frac{P(Y=l|X=x)}{P(Y=K|X=x)} = e^{\beta_l^T x} }[/math]
(this is resulting from [math]\displaystyle{ f_1(x)= (\frac{e^{\mathbf{\beta\,}^T \mathbf{x}}}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}}) }[/math] and [math]\displaystyle{ f_2(x)= (\frac{1}{1+e^{\mathbf{\beta\,}^T \mathbf{x}}}) }[/math] )

We call [math]\displaystyle{ log(\frac{P(Y=l|X=x)}{P(Y=K|X=x)}) = (\beta_l-\beta_K)^T x }[/math], the log-ratio of the posterior probabilities, the logit transformation. The decision boundary between the two classes is the set of points where the logit transformation is 0.

For each class from 1 to K-1 we then have:

[math]\displaystyle{ log(\frac{P(Y=1|X=x)}{P(Y=K|X=x)}) = \beta_1^T x }[/math]

[math]\displaystyle{ log(\frac{P(Y=2|X=x)}{P(Y=K|X=x)}) = \beta_2^T x }[/math]

[math]\displaystyle{ log(\frac{P(Y=K-1|X=x)}{P(Y=K|X=x)}) = \beta_{K-1}^T x }[/math]

Note that choosing Y=K is arbitrary and any other choice is equally valid.

Based on the above the posterior probabilities are given by: [math]\displaystyle{ P(Y=k|X=x) = \frac{e^{\beta_k^T x}}{1 + \sum_{i=1}^{K-1}{e^{\beta_i^T x}}}\;\;for \; k=1,\ldots, K-1 }[/math]

[math]\displaystyle{ P(Y=K|X=x)=\frac{1}{1+\sum_{i=1}^{K-1}{e^{\beta_i^T x}}} }[/math]
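
Given fitted coefficient vectors, these posteriors can be evaluated directly. A small Matlab sketch follows; the d-by-(K-1) coefficient matrix B (whose columns are [math]\displaystyle{ \beta_1, \ldots, \beta_{K-1} }[/math]) and the test point x are assumed inputs.

 % Evaluate the multi-class posteriors for one test point x.
 a = exp(B' * x);                        % e^{beta_k' x} for k = 1,...,K-1
 posteriors = [a; 1] / (1 + sum(a));     % classes 1..K-1 followed by the reference class K
 [pmax, label] = max(posteriors);        % predict the class with the largest posterior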

Logistic Regression Vs. Linear Discriminant Analysis (LDA)

Logistic Regression Model and Linear Discriminant Analysis (LDA) are widely used for classification. Both models build linear boundaries to classify different groups. Also, the categorical outcome variables (i.e. the dependent variables) must be mutually exclusive.

LDA uses more parameters than Logistic Regression, as quantified below.

However, these two models differ in their basic approach. While Logistic Regression is more relaxed and flexible in its assumptions, LDA assumes that its explanatory variables are normally distributed, linearly related and have equal covariance matrices for each class. Therefore, it can be expected that LDA is more appropriate if the normality assumptions and equal covariance assumption are fulfilled in its explanatory variables. But in all other situations Logistic Regression should be appropriate.


Also, the total number of parameters to compute differs between Logistic Regression and LDA. If the explanatory variables have d dimensions and there are two classes to categorize, we need to estimate [math]\displaystyle{ \ d+1 }[/math] parameters in Logistic Regression (all elements of the d by 1 [math]\displaystyle{ \ \beta }[/math] vector plus the scalar [math]\displaystyle{ \ \beta_0 }[/math]), so the number of parameters grows linearly w.r.t. dimension, while we need to estimate [math]\displaystyle{ 2d+\frac{d(d+1)}{2}+2 }[/math] parameters in LDA (two d-dimensional mean vectors for the Gaussians, the d by d symmetric covariance matrix, and two priors for the two classes), so the number of parameters grows quadratically w.r.t. dimension.


Note that the number of parameters also corresponds to the minimum number of observations needed to compute the coefficients of each function. Techniques do exist, though, for handling high-dimensional problems where the number of parameters exceeds the number of observations. Logistic Regression can be modified using shrinkage methods to deal with the problem of having fewer observations than parameters. When maximizing the log-likelihood, we can add a [math]\displaystyle{ -\frac{\lambda}{2}\sum^{K}_{k=1}\|\beta_k\|_{2}^{2} }[/math] penalization term, where K is the number of classes. The resulting optimization problem is convex and can be solved using the Newton-Raphson method as given in Zhu and Hastie (2004). LDA involves the inversion of a d x d covariance matrix. When d is bigger than n (where n is the number of observations), this matrix has rank n < d and thus is singular. When this is the case, we can either use the pseudo-inverse or perform regularized discriminant analysis, which solves this problem. In RDA, we define a new covariance matrix [math]\displaystyle{ \, \Sigma(\gamma) = \gamma\Sigma + (1 - \gamma)diag(\Sigma) }[/math] with [math]\displaystyle{ \gamma \in [0,1] }[/math]. Cross validation can be used to calculate the best [math]\displaystyle{ \, \gamma }[/math]. More details on RDA can be found in Guo et al. (2006).


Because the Logistic Regression model has the form [math]\displaystyle{ log\frac{f_1(x)}{f_0(x)} = \beta_0 + \beta^T x }[/math], we can clearly see the role of each input variable in explaining the outcome. This is one advantage that Logistic Regression has over other classification methods and is part of why it is so popular in data analysis.


In terms of speed, LDA is non-iterative, unlike Logistic Regression which is fitted with the iterative Newton-Raphson method, so LDA can be expected to be faster than Logistic Regression.

Example

(Not discussed in class.) One application of logistic regression that has recently been used is predicting the winner of NFL games. Previous predictors, like Yards Per Carry (YPC), were used to build probability models for games. Now, the Success Rate (SR), defined as the percentage of runs in which a team’s point expectancy has improved, is shown to be a better predictor of a team's performance. SR is based on down, distance and yard line and is less susceptible to rare breakaway plays that can be considered outliers. More information can be found at [3].

Perceptron

Simple perceptron
Simple perceptron where [math]\displaystyle{ \beta_0 }[/math] is defined as 1

The perceptron is a simple, yet effective, linear separator classifier, and it is the building block for neural networks. It was invented by Rosenblatt in 1957 at Cornell Labs, and first mentioned in the paper "The Perceptron - a perceiving and recognizing automaton". The perceptron is used on linearly separable data sets: it computes a linear combination of the input features and returns the sign of the result.

For a 2 class problem, and a set of inputs with d features, a perceptron computes a weighted sum and classifies the point using the sign of the result (i.e. it uses a step function as its activation function). The figures on the right give an example of a perceptron. In these examples, [math]\displaystyle{ \ x^i }[/math] is the i-th feature of a sample and [math]\displaystyle{ \ \beta_i }[/math] is the i-th weight. [math]\displaystyle{ \beta_0 }[/math] is defined as the bias. The bias alters the position of the decision boundary between the 2 classes. From a geometrical point of view, the perceptron assigns label "1" to elements on one side of the vector [math]\displaystyle{ \ \beta }[/math] and label "-1" to elements on the other side of [math]\displaystyle{ \ \beta }[/math], where [math]\displaystyle{ \ \beta }[/math] is the vector of the [math]\displaystyle{ \ \beta_i }[/math]s.
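
A minimal Matlab sketch of a single perceptron prediction follows; the weights, bias and input are arbitrary illustrative values.

beta  = [0.4; -0.7; 1.1];           % one weight per feature (d = 3)
beta0 = 0.2;                        % bias term
x     = [1.0; 2.0; -0.5];           % one observation
y_hat = sign(beta' * x + beta0);    % weighted sum followed by the sign (step) function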

Perceptrons are generally trained using gradient descent. This type of learning can have 2 side effects:

  • If the data sets are well separated, the training of the perceptron can lead to multiple valid solutions.
  • If the data sets are not linearly separable, the learning algorithm will never finish.

Perceptrons are the simplest kind of a feedforward neural network. A perceptron is the building block for other neural networks such as Multi-Layer Perceptron (MLP) which uses multiple layers of perceptrons with nonlinear activation functions so that it can classify data that is not linearly separable.

History of Perceptrons and Other Neural Models

One of the first perceptron-like models is the "McCulloch-Pitts Neuron" model developed by McCulloch and Pitts in the 1940's <ref> W. Pitts and W. S. McCulloch, "How we know universals: the perception of auditory and visual forms," Bulletin of Mathematical Biophysics, 1947.</ref>. It uses a weighted sum of the inputs that is fed through an activation function, much like the perceptron. However, unlike the perceptron, the weights in the "McCulloch-Pitts Neuron" model are not adjustable, so the "McCulloch-Pitts Neuron" is unable to perform any learning based on the input data.

As stated in the introduction of the perceptron section, the perceptron was developed by Rosenblatt in the late 1950s. Around the same time, the Adaptive Linear Neuron (ADALINE) was developed by Widrow <ref name="Widrow"> B. Widrow, "Generalization and information storage in networks of adaline 'neurons'," Self Organizing Systems, 1959.</ref>. The ADALINE differs from the standard perceptron by using the weighted sum (the net) to adjust the weights in the learning phase, whereas the standard perceptron uses the output (i.e. the net after it has passed through the activation function) to adjust its weights.

Since both the perceptron and ADALINE are only able to handle data that is linearly separable, Multiple ADALINE (MADALINE) was introduced <ref name="Widrow"/>. MADALINE is a two-layer network that processes multiple inputs, where each layer contains a number of ADALINE units. The lack of an appropriate learning algorithm prevented more layers of units from being cascaded at the time, and interest in "neural networks" receded until the 1980's, when the backpropagation algorithm was applied to neural networks and it became possible to implement the Multi-Layer Perceptron (MLP).

Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, who had published a book in 1969 summing up a general feeling of frustration with neural networks among researchers, a view that was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.<ref> http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Historical background </ref>

Perceptron Learning Algorithm (Lecture: Oct. 13, 2011)

Like all of the learning methods we have seen, learning in a perceptron model is accomplished by minimizing a cost (or error) function, [math]\displaystyle{ \phi(\boldsymbol{\beta}, \beta_0) }[/math]. In the perceptron case, the error arises from points whose output ([math]\displaystyle{ sign(\sum_{i=0}^d \beta_i x^{(i)}) }[/math]) does not match the target, and the cost function [math]\displaystyle{ \phi(\boldsymbol{\beta}, \beta_0) }[/math] is defined as the sum of the distances between all misclassified points and the hyper-plane (the decision boundary). To minimize this cost function, we need to estimate [math]\displaystyle{ \boldsymbol{\beta}, \beta_0 }[/math].

[math]\displaystyle{ \min_{\beta,\beta_0} \phi(\boldsymbol{\beta}, \beta_0) }[/math], where [math]\displaystyle{ \phi(\boldsymbol{\beta}, \beta_0) }[/math] is the sum of the distances of all misclassified points to the decision boundary.

The logic is as follows:

Distance between the point [math]\displaystyle{ \ x }[/math] and the decision boundary hyperplane [math]\displaystyle{ \ L }[/math] (black line). Note that the vector [math]\displaystyle{ \ \beta }[/math] is orthogonal to the decision boundary hyperplane and that points [math]\displaystyle{ \ x_0, x_1, x_2 }[/math] are arbitrary points on the decision boundary hyperplane.

1) Because a hyper-plane [math]\displaystyle{ \,L }[/math] can be defined as

[math]\displaystyle{ \, L=\{x: f(x)=\beta^Tx+\beta_0=0\}, }[/math]


For any two arbitrary points [math]\displaystyle{ \,x_1 }[/math] and [math]\displaystyle{ \,x_2 }[/math] on [math]\displaystyle{ \, L }[/math], we have

[math]\displaystyle{ \,\beta^Tx_1+\beta_0=0 }[/math],

[math]\displaystyle{ \,\beta^Tx_2+\beta_0=0 }[/math],

such that

[math]\displaystyle{ \,\beta^T(x_1-x_2)=0 }[/math].

Therefore, [math]\displaystyle{ \,\beta }[/math] is orthogonal to the hyper-plane and it is the normal vector.


2) For any point [math]\displaystyle{ \,x_0 }[/math] in [math]\displaystyle{ \ L, }[/math] [math]\displaystyle{ \,\;\;\beta^Tx_0+\beta_0=0 }[/math], which means [math]\displaystyle{ \, \beta^Tx_0=-\beta_0 }[/math].


3) We set [math]\displaystyle{ \,\beta^*=\frac{\beta}{||\beta||} }[/math] as the unit normal vector of the hyper-plane[math]\displaystyle{ \, L }[/math]. For simplicity we call [math]\displaystyle{ \,\beta^* }[/math] norm vector. The distance of point [math]\displaystyle{ \,x }[/math] to [math]\displaystyle{ \ L }[/math] is given by

[math]\displaystyle{ \,\beta^{*T}(x-x_0)=\beta^{*T}x-\beta^{*T}x_0 =\frac{\beta^Tx}{||\beta||}+\frac{\beta_0}{||\beta||} =\frac{(\beta^Tx+\beta_0)}{||\beta||} }[/math]

Where [math]\displaystyle{ \,x_0 }[/math] is any point on [math]\displaystyle{ \ L }[/math]. Hence, [math]\displaystyle{ \,\beta^Tx+\beta_0 }[/math] is proportional to the distance of the point [math]\displaystyle{ \,x }[/math] to the hyper-plane[math]\displaystyle{ \, L }[/math].


4) The distance from a misclassified data point [math]\displaystyle{ \,x_i }[/math] to the hyper-plane [math]\displaystyle{ \, L }[/math] is

[math]\displaystyle{ \,d_i = -y_i(\boldsymbol{\beta}^Tx_i+\beta_0) }[/math]

where [math]\displaystyle{ \,y_i }[/math] is the true label of the misclassified point, so that [math]\displaystyle{ \,y_i=1 }[/math] while [math]\displaystyle{ \boldsymbol{\beta}^Tx_i+\beta_0\lt 0 }[/math], or [math]\displaystyle{ \,y_i=-1 }[/math] while [math]\displaystyle{ \boldsymbol{\beta}^Tx_i+\beta_0\gt 0 }[/math].

Since we need the distance from the hyperplane to the misclassified data points to be positive, we add a negative sign in front: when a data point is misclassified, [math]\displaystyle{ \boldsymbol{\beta}^Tx_i+\beta_0 }[/math] has the opposite sign of [math]\displaystyle{ \,y_i }[/math], so [math]\displaystyle{ \,y_i(\boldsymbol{\beta}^Tx_i+\beta_0) }[/math] is negative and the leading negative sign makes [math]\displaystyle{ \,d_i }[/math] positive.

Perceptron Learning using Gradient Descent

The gradient descent is an optimization method that finds the minimum of an objective function by incrementally updating its parameters in the negative direction of the derivative of this function. That is, it finds the steepest slope in the D-dimensional space at a given point, and descends in the direction of the negative slope. Note that unless the error function is convex, it is possible to get stuck in a local minimum. In our case, the objective function to be minimized is the classification error and the parameters of this function are the weights associated with the inputs, [math]\displaystyle{ \beta }[/math]. The gradient descent algorithm updates the weights as follows:

[math]\displaystyle{ \beta^{\mathrm{new}} \leftarrow \beta^{\mathrm{old}} - \rho \frac{\partial Err}{\partial \beta} }[/math]

[math]\displaystyle{ \rho }[/math] is called the learning rate.
The Learning Rate [math]\displaystyle{ \rho }[/math] is positively related to the step size of convergence of [math]\displaystyle{ \min \phi(\boldsymbol{\beta}, \beta_0) }[/math]. i.e. the larger [math]\displaystyle{ \rho }[/math] is, the larger the step size is. Typically, [math]\displaystyle{ \rho \in [0.1, 0.3] }[/math].

The classification error is defined as the distance of misclassified observations to the decision boundary:


To minimize the cost function [math]\displaystyle{ \phi(\boldsymbol{\beta}, \beta_0) = -\sum\limits_{i\in M} y_i(\boldsymbol{\beta}^Tx_i+\beta_0) }[/math] where [math]\displaystyle{ \ M=\{\text {all points that are misclassified}\} }[/math]
[math]\displaystyle{ \cfrac{\partial \phi}{\partial \boldsymbol{\beta}} = - \sum\limits_{i\in M} y_i x_i }[/math] and [math]\displaystyle{ \cfrac{\partial \phi}{\partial \beta_0} = -\sum\limits_{i \in M} y_i }[/math]

Therefore, the gradient is [math]\displaystyle{ \nabla D(\beta,\beta_0) = \left( \begin{array}{c} -\displaystyle\sum_{i \in M}y_{i}x_i \\ -\displaystyle\sum_{i \in M}y_{i} \end{array} \right) }[/math]


Using the gradient descent algorithm to solve these two equations, for each misclassified point [math]\displaystyle{ \,(x_i, y_i) }[/math] we have [math]\displaystyle{ \begin{pmatrix} \boldsymbol{\beta}^{\mathrm{new}}\\ \beta_0^{\mathrm{new}} \end{pmatrix} = \begin{pmatrix} \boldsymbol{\beta}^{\mathrm{old}}\\ \beta_0^{\mathrm{old}} \end{pmatrix} + \rho \begin{pmatrix} y_i x_i\\ y_i \end{pmatrix} }[/math]


If the data is linearly-separable, the solution is theoretically guaranteed to converge to a separating hyperplane in a finite number of iterations. In this situation the number of iterations depends on the learning rate and the margin. However, if the data is not linearly separable there is no guarantee that the algorithm converges.

The iteration can be started from an arbitrary initial value [math]\displaystyle{ \begin{pmatrix} \beta^0\\ \beta_0^0 \end{pmatrix} }[/math].

Note that we consider the offset term [math]\displaystyle{ \,\beta_0 }[/math] separately from [math]\displaystyle{ \ \beta }[/math] to distinguish this formulation from those in which the direction of the hyperplane ([math]\displaystyle{ \ \beta }[/math]) has been considered.
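
The following Matlab sketch implements the update rule above on synthetic, linearly separable data; the data-generating rule, the learning rate and the variable names are assumptions made for illustration.

n = 100; d = 2;
X = randn(n, d);
y = sign(X(:,1) + X(:,2) + 0.5);                 % noise-free, linearly separable labels

beta = zeros(d, 1); beta0 = 0;                   % arbitrary initial values
rho = 0.2;                                       % learning rate
for epoch = 1:100
    nMisclassified = 0;
    for i = 1:n
        if y(i) * (X(i,:)*beta + beta0) <= 0     % point i is misclassified
            beta  = beta  + rho * y(i) * X(i,:)';    % move beta in the direction y_i * x_i
            beta0 = beta0 + rho * y(i);
            nMisclassified = nMisclassified + 1;
        end
    end
    if nMisclassified == 0, break; end           % all points classified correctly
end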

A major concern about gradient descent is that it may get trapped in local optimal solutions. Many works such as this paper by Cetin et al. and this paper by Atakulreka et al. have been done to tackle this issue.


Features

  • A Perceptron can only discriminate between two classes at a time.
  • When data is (linearly) separable, there are an infinite number of solutions depending on the starting point.
  • Even though convergence to a solution is guaranteed if the solution exists, the finite number of steps until convergence can be very large.
  • The smaller the gap between the two classes, the longer the time of convergence.
  • When the data is not separable, the algorithm will not converge (it should be stopped after N steps).
  • A learning rate that is too high will make the perceptron periodically oscillate around the solution unless additional steps are taken.
  • The linear separator computes a linear combination of the input features and returns the sign.
  • This model was called the Perceptron in the engineering literature in the late 1950s.
  • Learning rate affects the accuracy of the solution and the number of iterations directly.


Separability and convergence

The training set D is said to be linearly separable if there exists a positive constant [math]\displaystyle{ \,\gamma }[/math] and a weight vector [math]\displaystyle{ \,\beta }[/math] such that [math]\displaystyle{ \,(\beta^Tx_i+\beta_0)y_i\gt \gamma }[/math] for all [math]\displaystyle{ \,1 \le i \le n }[/math]. That is, if we say that [math]\displaystyle{ \,\beta }[/math] is the weight vector of the Perceptron and [math]\displaystyle{ \,y_i }[/math] is the true label of [math]\displaystyle{ \,x_i }[/math], then the signed distance of [math]\displaystyle{ \,x_i }[/math] from [math]\displaystyle{ \,\beta }[/math] is greater than a positive constant [math]\displaystyle{ \,\gamma }[/math] for every [math]\displaystyle{ \,(x_i, y_i)\in D }[/math].


Novikoff (1962) proved that the perceptron algorithm converges after a finite number of iterations if the data set is linearly separable. The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction that it has a negative dot product with, and thus can be bounded above by [math]\displaystyle{ O(\sqrt{t}) }[/math], where t is the number of changes to the weight vector. But it can also be bounded below by [math]\displaystyle{ \, O(t) }[/math], because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector. This can be used to show that the number t of updates to the weight vector is bounded by [math]\displaystyle{ (\frac{2R}{\gamma} )^2 }[/math], where R is the maximum norm of an input vector.<ref>http://en.wikipedia.org/wiki/Perceptron</ref>

Choosing a Proper Learning Rate

Choosing different learning rates affects the performance of the gradient descent optimization algorithm.

The choice of the learning rate affects the final result of the gradient descent algorithm. If the learning rate is too small, the algorithm takes too long to converge, which is a problem when time is an important factor. If the learning rate is chosen to be too large, the optimal point can be overshot and the algorithm may never converge. In fact, if the step size is larger than twice the reciprocal of the largest eigenvalue of the second derivative matrix (Hessian) of the cost function, then gradient steps will go upward instead of downward. However, the step size is not the only factor that can cause these situations: even with the same learning rate, different initial values can lead the algorithm to different outcomes. In general, some prior knowledge can help in the choice of initial values and learning rate.

There are different methods of choosing the step size in a gradient descent optimization problem. The most common method is to choose a fixed learning rate and find a proper value for it by trial and error; this is certainly not the most sophisticated method, but it is the easiest. The learning rate can also be adaptive, meaning its value can differ at each step of the algorithm. This can be an especially helpful approach when dealing with on-line training and non-stationary environments (i.e. when data characteristics vary over time). In such cases the learning rate has to be adapted at each step of the learning algorithm. Different approaches and algorithms for learning rate adaptation can be found in <ref> V P Plagianakos, G D Magoulas, and M N Vrahatis, Advances in convex analysis and global optimization Pythagorion 2000 (2001), Volume: 54, Publisher: Kluwer Acad. Publ., Pages: 433-444. </ref>.

The learning rate leading to a local error minimum in the error function in one learning step is optimal. <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>

Application of Perceptron: Branch Predictor

The perceptron can be used for both online and batch learning. Online learning tasks take place in a sequence of trials: in each trial, the learner is given an instance and is asked to use its current knowledge to predict a label for it. In online learning, the true label of the instance is revealed to the learner after it makes its prediction. At the end of each trial, the learner can use this feedback about the true label to improve its model of the data for future trials.

Instruction pipelining is a technique to increase the throughput in modern microprocessor architecture. A microprocessor instruction can be broken into several independent steps. In a single CPU clock cycle, several instructions at different stages can be executed at the same time. However, a problem arises with a branch, e.g. an if-else statement. It is not known whether the instructions inside the if- or else- branch will be executed until the condition is evaluated. This stalls the pipeline.

A branch predictor is used to address this problem. Using a predictor, the pipelined processor predicts the execution path and speculatively executes instructions in the branch. Neural networks are a good technique for prediction; however, they are expensive for microprocessor architectures. One study investigated the use of the perceptron, which is less expensive and simpler to implement, as the branch predictor. The inputs are the history of binary outcomes of the executed branches. The output of the predictor is whether a particular branch will be taken. Every time a branch is executed and its true outcome is known, it can be used to train the predictor. The experiments showed that with a 4 Kb hardware budget, a global perceptron predictor has a misprediction rate of 1.94%, a superior accuracy. <ref>Daniel A. Jimenez , Calvin Lin, "Neural Methods for Dynamic Branch Prediction", ACM Transactions on Computer Systems, 2002</ref>

Feed-Forward Neural Networks

  • The term 'neural networks' is used because historically, it was used to describe the processes of the brain (e.g. synapses).
  • A neural network is a two-stage regression model which is typically represented by a network diagram (see right).
Feed Forward Neural Network
  • The feedforward neural network was the first and arguably simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.<ref>http://en.wikipedia.org/wiki/Feedforward_neural_network</ref>
  • For regression, typically k = 1 (the number of nodes in the last layer), there is only one output unit [math]\displaystyle{ y_1 }[/math] at the end.
  • For c-class classification, there are typically c units at the end with the cth unit modelling the probability of class c, each [math]\displaystyle{ y_c }[/math] is coded as 0-1 variable for the cth class.
  • Neural networks are known as universal approximators, where a two-layer feed-forward neural network can approximate any continuous function to an arbitrary accuracy (assuming sufficient hidden nodes exist and that the necessary parameters for the neural network can be found) <ref name="CMBishop">C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006</ref>. It should be noted that fitting training data to a very high accuracy may lead to overfitting, which is discussed later in this course.
  • Perceptrons are often used as building blocks in feed-forward neural networks. A feed-forward neural network can be viewed as a layered system of interconnected perceptron-like units, which allows it to handle problems with more than two classes and data that is not linearly separable. Feed-forward neural networks may include many hidden layers of such units.

Backpropagation (Finding Optimal Weights)

There are many algorithms for calculating the weights in a feed-forward neural network. One of the most used approaches is the backpropagation algorithm. The application of the backpropagation algorithm for neural networks was popularized in the 1980's by researchers like Rumelhart, Hinton and McClelland (even though the backpropagation algorithm had existed before then). <ref>S. Seung, "Multilayer perceptrons and backpropagation learning" class notes for 9.641J, Department of Brain & Cognitive Sciences, MIT, 2002. Available: [4] </ref>

As the learning part of the network (the first part being feed-forward), backpropagation consists of "presenting an input pattern and changing the network parameters to bring the actual outputs closer to the desired teaching or target values." It is one of the "simplest, most general methods for the supervised training of multilayer neural networks." (pp. 288-289) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>

For the backpropagation algorithm, we consider three hidden layers of nodes

Refer to figure from October 18th lecture where [math]\displaystyle{ \ l }[/math] represents the column of nodes in the first column,
[math]\displaystyle{ \ i }[/math] represents the column of nodes in the second column, and
[math]\displaystyle{ \ k }[/math] represents the column of nodes in the third column.

We want the output of the feed forward neural network [math]\displaystyle{ \hat{y} }[/math] to be as close to the known target value [math]\displaystyle{ \ y }[/math] as possible (i.e. we want to minimize the distance between [math]\displaystyle{ \ y }[/math] and [math]\displaystyle{ \hat{y} }[/math]). Mathematically, we would write it as: Minimize [math]\displaystyle{ (\left| y- \hat{y}\right|)^2 }[/math]

Instead of the sign function, which has no derivative, we use the so-called logistic function (a smoothed form of the sign function):

[math]\displaystyle{ \sigma(a)=\frac{1}{1+e^{-a}} }[/math]


"Notice that if σ is the identity function, then the entire model collapses to a linear model in the inputs. Hence a neural network can be thought of as a nonlinear generalization of the linear model, both for regression and classification." <ref>Friedman, J., Hastie, T. and Tibshirani, R. (2008) “The Elements of Statistical Learning”, 2nd ed, Springer.</ref>


The logistic function is a common sigmoid curve. It can model the S-shaped growth curve of a population: the initial stage of growth is approximately exponential; then, as saturation begins, the growth slows; and at maturity, growth stops.


To solve the optimization problem, we take the derivative with respect to the weight [math]\displaystyle{ u_{il} }[/math]:
[math]\displaystyle{ \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial u_{il}} = \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial a_i} \cdot \cfrac{\partial a_i}{\partial u_{il}} }[/math] by the chain rule, so
[math]\displaystyle{ \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial u_{il}} = \delta_i \cdot z_l }[/math]

where [math]\displaystyle{ \delta_i = \cfrac{\partial \left|y- \hat{y}\right|^2}{\partial a_i} }[/math], which will be computed recursively.

[math]\displaystyle{ \ a_i=\sum_{l}z_lu_{il} }[/math]

[math]\displaystyle{ \ z_i=\sigma(a_i) }[/math]

[math]\displaystyle{ \ a_j=\sum_{i}z_iu_{ji} }[/math]

Backpropagation Continued (Lecture: Oct. 18, 2011)

Nodes from three hidden layers within the neural network are considered for the backpropagation algorithm. Each node has been divided into the weighted sum of the inputs [math]\displaystyle{ \ a }[/math] and the output of the activation function [math]\displaystyle{ \ z }[/math]. The weights between the nodes are denoted by [math]\displaystyle{ \ u }[/math].

From the figure to the right it can be seen that the inputs ([math]\displaystyle{ \ a }[/math]'s) can be expressed as weighted sums of the outputs of the previous layer's nodes, and the outputs ([math]\displaystyle{ \ z }[/math]'s) are obtained by applying the activation function to the inputs, as follows:

[math]\displaystyle{ \ a_i = \sum_l z_l u_{il} }[/math]

[math]\displaystyle{ \ z_i = \sigma(a_i) }[/math]


The goal is to optimize the weights to reduce the L2-norm between the target output values [math]\displaystyle{ \ y }[/math] (i.e. the correct labels) and the actual output of the neural network [math]\displaystyle{ \ \hat{y} }[/math]:

[math]\displaystyle{ \left(y - \hat{y}\right)^2 }[/math]

Since the L2-norm is differentiable, the optimization problem can be tackled by differentiating [math]\displaystyle{ \left(y - \hat{y}\right)^2 }[/math] with respect to each weight in the hidden layers. By using the chain rule we get:

[math]\displaystyle{ \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}} = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_i}\cdot \cfrac{\partial a_i}{\partial u_{il}} = \delta_{i}z_l }[/math]

where [math]\displaystyle{ \ \delta_i = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_i} }[/math]

The above equation essentially shows the effect of changes in the input [math]\displaystyle{ \ a_i }[/math] on the overall output [math]\displaystyle{ \ \hat{y} }[/math] as well as the effect of changes in the weights [math]\displaystyle{ \ u_{il} }[/math] on the input [math]\displaystyle{ \ a_i }[/math]. In the above equation, [math]\displaystyle{ \ z_l }[/math] is a known value (i.e. it can be calculated directly), whereas [math]\displaystyle{ \ \delta_i }[/math] is unknown but can be expressed as a recursive definition in terms of [math]\displaystyle{ \ \delta_j }[/math]:

[math]\displaystyle{ \delta_i = \cfrac{\partial (y - \hat{y})^2}{\partial a_i} = \sum_{j} \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_j}\cdot \cfrac{\partial a_j}{\partial a_i} }[/math]

[math]\displaystyle{ \delta_i = \sum_{j}\delta_j\cdot\cfrac{\partial a_j}{\partial z_i}\cdot\cfrac{\partial z_i}{\partial a_i} }[/math]

[math]\displaystyle{ \delta_i = \sum_{j} \delta_j\cdot u_{ji} \cdot \sigma'(a_i) }[/math]

where [math]\displaystyle{ \delta_j = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_j} }[/math]

The above equation essentially shows the effect of changes in the input [math]\displaystyle{ \ a_j }[/math] on the overall output [math]\displaystyle{ \ \hat{y} }[/math] as well as the effect of changes in input [math]\displaystyle{ \ a_i }[/math] on the input [math]\displaystyle{ \ a_j }[/math]. Note that if [math]\displaystyle{ \sigma(x) }[/math] is the sigmoid function, then [math]\displaystyle{ \sigma'(x) = \sigma(x)(1-\sigma(x)) }[/math]

The recursive definition of [math]\displaystyle{ \ \delta_i }[/math] can be considered as a cost function at layer [math]\displaystyle{ i }[/math] for achieving the original goal of optimizing the weights to minimize [math]\displaystyle{ \left(y - \hat{y}\right)^2 }[/math]:

[math]\displaystyle{ \delta_i= \sigma'(a_i)\sum_{j}\delta_j \cdot u_{ji} }[/math].

Now considering [math]\displaystyle{ \ \delta_k }[/math] for the output layer:

[math]\displaystyle{ \delta_k= \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial a_k} }[/math].

where [math]\displaystyle{ \,a_k = \hat{y} }[/math] because an activation function is not applied in the output layer. So, our calculation becomes:

[math]\displaystyle{ \delta_k = \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial \hat{y}} }[/math]

[math]\displaystyle{ \delta_k = -2(y - \hat{y}) }[/math]
[math]\displaystyle{ u_{il} \leftarrow u_{il} - \rho \cfrac{\partial (y - \hat{y}) ^2}{\partial u_{il}} }[/math]

Since [math]\displaystyle{ \ y }[/math] is known and [math]\displaystyle{ \ \hat{y} }[/math] can be computed for each data point (assuming small, random, initial values for the weights of the neural network), [math]\displaystyle{ \ \delta_k }[/math] can be calculated and "backpropagated" (i.e. the [math]\displaystyle{ \ \delta }[/math] values for the layer before the output layer can be computed using [math]\displaystyle{ \ \delta_k }[/math], and then the [math]\displaystyle{ \ \delta }[/math] values for the layer before that can be computed, and so on). Once all [math]\displaystyle{ \ \delta }[/math] values are known, the errors due to each of the weights [math]\displaystyle{ \ u }[/math] will be known and techniques like gradient descent can be used to optimize the weights. However, as the cost function for [math]\displaystyle{ \ \delta_i }[/math] shown above is not guaranteed to be convex, convergence to a global minimum is not guaranteed. This also means that changing the order in which the training points are fed into the network or changing the initial random values for the weights may lead to finding different results for the optimized weights (i.e. different local minima may be reached).

Overview of Full Backpropagation Algorithm

The network weights are updated using the backpropagation algorithm when each training data point [math]\displaystyle{ \ x }[/math]is fed into the feed forward neural network (FFNN). This update procedure is done using the following steps:

  • First arbitrarily choose some random weights (preferably close to zero) for your network.
  • Apply [math]\displaystyle{ \ x }[/math] to the FFNN's input layer, and calculate the outputs of all input neurons.
  • Propagate the outputs of each hidden layer forward, one hidden layer at a time, and calculate the outputs of all hidden neurons.
  • Once [math]\displaystyle{ \ x }[/math] reaches the output layer, calculate the output(s) of all output neuron(s) given the outputs of the previous hidden layer.
  • At the output layer, compute [math]\displaystyle{ \,\delta_k = -2(y_k - \hat{y}_k) }[/math] for each output neuron(s).
  • Compute each [math]\displaystyle{ \delta_i }[/math], starting from [math]\displaystyle{ i=k-1 }[/math] all the way to the first hidden layer, where [math]\displaystyle{ \delta_i= \sigma'(a_i)\sum_{j}\delta_j \cdot u_{ji} }[/math].
  • Compute [math]\displaystyle{ \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}} = \delta_{i}z_l }[/math] for all weights [math]\displaystyle{ \,u_{il} }[/math].
  • Then update [math]\displaystyle{ u_{il}^{\mathrm{new}} \leftarrow u_{il}^{\mathrm{old}} - \rho \cdot \cfrac{\partial \left(y - \hat{y}\right)^2}{\partial u_{il}} }[/math] for all weights [math]\displaystyle{ \,u_{il} }[/math].
  • Continue for next data points and iterate on the training set until weights converge.
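
To make the steps above concrete, here is a minimal Matlab sketch of backpropagation for a network with a single sigmoid hidden layer and a linear output unit; the toy data, the network size and the variable names are assumptions for illustration, and bias terms are omitted for brevity.

sigma  = @(a) 1 ./ (1 + exp(-a));              % logistic activation
dsigma = @(a) sigma(a) .* (1 - sigma(a));      % its derivative

d = 3; m = 4;                                  % input dimension and number of hidden units
X = randn(50, d);  y = X * [1; -2; 0.5];       % toy regression data
U1 = 0.1*randn(m, d);  U2 = 0.1*randn(1, m);   % small random initial weights
rho = 0.01;                                    % learning rate

for epoch = 1:200
    for i = 1:size(X, 1)
        x = X(i, :)';                          % forward pass
        a1 = U1 * x;   z1 = sigma(a1);         % hidden layer
        yhat = U2 * z1;                        % linear output, so a_k = yhat

        delta_k = -2 * (y(i) - yhat);              % delta at the output layer
        delta_1 = dsigma(a1) .* (U2' * delta_k);   % backpropagated deltas at the hidden layer

        U2 = U2 - rho * delta_k * z1';         % gradient-descent updates of the weights
        U1 = U1 - rho * delta_1 * x';
    end
end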

Epochs

It is common to cycle through all of the data points multiple times in order to reach convergence. An epoch represents one cycle in which all of the data points are fed through the neural network. It is good practice to randomize the order in which the points are fed to the neural network within each epoch; this can prevent the weights from changing in cycles. The number of epochs required for convergence depends greatly on the learning rate and convergence requirements used.

Limitations

  • The convergence obtained from backpropagation learning is very slow.
  • The convergence in backpropagation learning is not guaranteed.
  • The result may generally converge to any local minimum on the error surface, since stochastic gradient descent operates on an error surface which is not convex.
  • Numerical problems may be encountered when there are a large number of hidden layers, as the errors at each layer may become very small and vanish.

Deep Neural Network

Increasing the number of units within a hidden layer can increase the "flexibility" of the neural network, i.e. the network is able to fit to more complex functions. Increasing the number of hidden layers on the other hand can increase the "generalizability" of the neural network, i.e. the network is able to generalize well to new data points that it was not trained on. A deep neural network is a neural network with many hidden layers. Deep neural networks were introduced in recent years by the same researchers (Hinton et al. <ref name="HintonDeepNN"> G. E. Hinton, S. Osindero and Y. W. Teh, "A Fast Learning Algorithm for Deep Belief Nets", Neural Computation, 2006. </ref>) that introduced the backpropagation algorithm to neural networks. The increased number of hidden layers in deep neural networks cannot be directly trained using backpropagation, because the errors at each layer will become very small and vanish as stated in the limitations section. To get around this problem, deep neural networks are trained a few layers at a time (i.e. two layers at a time). This process is still not straightforward as the target values for the hidden layers are not well defined (i.e. it is unknown what the correct target values are for the hidden layers given a data point and a label). Restricted Boltzmann Machines (RBM) and Greedy Learning Algorithms have been used to address this issue. For more information about how deep neural networks are trained, please refer to <ref name="HintonDeepNN"/>. A comparison of various neural network layouts including deep neural networks on a database of handwritten digits can be found at THE MNIST DATABASE.

One of the advantages of deep nets is that we can pre-train the network using unlabeled data (unsupervised learning) to obtain initial weights for the final training step with labeled data (fine-tuning). Since most of the available data are usually unlabeled, this method gives us a better chance of finding good local optima than using only labeled data to train the parameters of the network (the weights). For more details on unsupervised pre-training and learning in deep nets see <ref> http://jmlr.csail.mit.edu/proceedings/papers/v9/erhan10a/erhan10a.pdf </ref> , <ref> http://www.cs.toronto.edu/~hinton/absps/tics.pdf </ref>

An interesting structure of the deep neural network is where the number of nodes in each hidden layer decreases towards the "center" of the network and then increases again. See figure below for an illustration.

A specific architecture for deep neural networks with a "bottleneck".

The central part, with the smallest number of nodes in its hidden layer, can be seen as a reduced-dimensional representation of the input data features. It would be interesting to compare the dimensionality reduction effect of this kind of deep neural network to a cascade of PCA.

It is known that training DNNs is hard <ref>http://ecs.victoria.ac.nz/twiki/pub/Courses/COMP421_2010T1/Readings/TrainingDeepNNs.pdf</ref>, since randomly initializing the weights of the network and applying gradient descent can find poor local minima. To address this, Exploring Strategies for Training Deep Neural Networks looks at 3 principles for training DNNs better:

  1. Pre-training one layer at a time in a greedy way,
  2. Using unsupervised learning at each layer,
  3. Fine-tuning the whole network with respect to the ultimate criterion.

Their experiments show that by providing hints at each layer for the representation, the weights can be initialized such that a more optimal minimum can be reached.

Applications of Neural Networks

  • Sales forecasting
  • Industrial process control
  • Customer research
  • Data validation
  • Risk management
  • Target marketing

<ref> Reference:http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Applications of neural networks </ref>

Model Selection (Complexity Control)

Selecting a proper statistical model for a given data set is a well-known problem in pattern recognition and machine learning. Systems with the optimal complexity have a good generalization to yet unobserved data. In the complexity control problem, we are looking for an appropriate model order which gives us the best generalization capability for the unseen data points, while fitting the seen data well. Model complexity here can be defined in terms of over-fitting and under-fitting situations defined in the following section.

Over-fitting and Under-fitting

File:overfitting-model.png
Example of overfitting and underfitting situations. The blue line is a high-degree polynomial which goes through most of the training data points and gives a very low training error, however has a very poor generalization for the unseen data points. The red line, on the other hand, is underfitted to the training data samples.

There are two situations which should be avoided in classification and pattern recognition systems:

  1. Overfitting
  2. Underfitting

In short, Overfitting occurs when the model tries to capture every detail of the data. This can happen if the model has too many parameters compared to the number of observations. Overfitted models have large testing errors but small training error. On the other hand, Underfitting occurs when the model does not capture the complexity of the data. This happens when the model has a large training error, and can be common when there is missing data.

If there were no noise in the training data, over-fitting would not be a problem: every training data point would lie on the underlying function, and the only goal would be to build a model that is as complex as needed to pass through every training data point.

However, in the real-world, the training data are noisy, i.e. they tend to not lie exactly on the underlying function, instead they may be shifted to unpredictable locations by random noise. If the model is more complex than what it needs to be in order to accurately fit the underlying function, then it would end up fitting most or all of the training data. Consequently, it would be a poor approximation of the underlying function and have poor prediction ability on new, unseen data.

The danger of overfitting is that the model becomes susceptible to predicting values outside of the range of training data. It can cause wild predictions in multilayer perceptrons, even with noise-free data. To avoid Overfitting, techniques such as Cross Validation and Model Comparison might be necessary. The size of the training set is also important. The training set should have a sufficient number of data points which are sampled appropriately, so that it is representative of the whole data space.

In a Neural Network, if the number of hidden layers or nodes is too high, the network will have many degrees of freedom and will learn every characteristic of the training data set. That means it will fit the training set very precisely, but will not be able to generalize the commonality of the training set to predict the outcome of new cases.

Underfitting occurs when the model we picked to describe the data is not complex enough, and has a high error rate on the training set. There is always a trade-off. If our model is too simple, underfitting could occur and if it is too complex, overfitting can occur.

Different Approaches for Complexity Control

We would like to have a classifier that minimizes the true error rate [math]\displaystyle{ \ L(h) }[/math]:

[math]\displaystyle{ \ L(h)=Pr\{h(x)\neq y\} }[/math]

Model complexity

Because the true error rate cannot be determined directly in practice, we can try using the empirical true error rate (i.e. training error rate):

[math]\displaystyle{ \ \hat L(h)= \frac{1}{n} \sum_{i=1}^{n} I(h(x_{i}) \neq y_{i}) }[/math]

However, the empirical true error rate (i.e. training error rate) is biased downward. Minimizing this error rate does not find the best classifier model, but rather ends up overfitting to the training data. Thus, this error rate cannot be used.

The complexity of a fitted model depends on the degree of the fitting function. According to the graph, the area to the left of the critical point corresponds to under-fitting; the inaccuracy there results from the low complexity of the fit. The area to the right of the critical point corresponds to over-fitting, because the model does not generalize.

As illustrated in the figure to the right, the training error rate is always less than the true error rate, i.e. "biased downward". Also, the training error will always decrease with an increase in the complexity of the model used to fit the data. This does not reflect the behavior of the true error rate. The true error rate will have a unique minimum as the model complexity changes.

So, if the training error rate is the only criterion used for picking a model, overfitting can occur. An overfitted model has a low training error rate, but is not able to generalize well to new test data points. On the other hand, underfitting can occur when a model that is not complex enough is picked (e.g. using a first order model for data that follows a second order trend). Both training and test error rates will be high in that case. The best choice for the model complexity is where the true error rate reaches its minimum point. Thus, model selection involves controlling the complexity of the model. The true error rate can be approximated using the test error rate, i.e. the test error follows the same trend that the true error rate does when the model complexity is changed. In this case, we assume there is a test data set [math]\displaystyle{ \,x_1, . . . ,x_n }[/math] and these points follow some unknown distribution. In order to characterize this distribution, we can estimate some of its unknown parameters, such as [math]\displaystyle{ \,f }[/math], the mean [math]\displaystyle{ \,E(x_i) }[/math], the variance [math]\displaystyle{ \,var(x_i) }[/math] and more.

To estimate [math]\displaystyle{ \,f }[/math], we use a function of the observations as our estimator:

[math]\displaystyle{ \hat{f}(x_1,...,x_n) }[/math].

[math]\displaystyle{ Bias (\hat{f}) = E(\hat{f}) - f }[/math]

[math]\displaystyle{ MSE (\hat{f}) = E[(\hat{f} - f)^2]=Variance (\hat f)+Bias^2(\hat f ) }[/math]

[math]\displaystyle{ Variance (\hat{f}) = E[(\hat{f} - E(\hat{f}))^2] }[/math]

If this estimator is unbiased, that is,

[math]\displaystyle{ Bias (\hat{f}) = E(\hat{f}) - f=0 }[/math]

then minimizing [math]\displaystyle{ MSE (\hat{f}) }[/math] amounts to minimizing [math]\displaystyle{ Variance (\hat{f}) }[/math], since

[math]\displaystyle{ MSE (\hat{f})=Variance (\hat{f})+Bias ^2(\hat{f}) }[/math].

In general, for a given Mean Squared Error (MSE) there is a trade-off: an estimator with low bias tends to have high variance and vice versa.



In order to avoid overfitting, there are two main strategies:

  1. Estimate the error rate
    1. Cross-validation
    2. Computing an error bound (probability inequality)
  2. Regularization
    1. We basically make the function (model) smooth by limiting its complexity or by limiting the size of the weights.

Cross Validation

File:k-fold.png
Graphical illustration of 4-fold cross-validation. V is the part used for validation and T is used for training.

Cross-validation is an approach for avoiding overfitting while modelling data that bases the choice of model parameters on a portion of the training set, while using the rest of the set for validation, i.e., some of the data is left out when fitting the model. One round of the process involves partitioning the data set into two complementary subsets, fitting the model to one subset (called the training set), and testing the model against the other subset (called the validation or testing subset). This is usually repeated several times using different partitions in order to reduce variability, and the validation results are then averaged over the rounds.

LOO: Leave-one-out cross-validation

When the dataset is very small, leaving one tenth out depletes the training data too much, but making the validation set too small makes the estimate of the true error unstable (noisy). One solution is a kind of round-robin validation: for each complexity setting, learn a classifier on all the training data minus one example and evaluate its error on the remaining example. The leave-one-out error is defined as:

LOO error: [math]\displaystyle{ \frac {1}{n} \sum_{i} I (h(x_i; D_{-i})\neq y_i) }[/math] where [math]\displaystyle{ D_{-i} }[/math] is the dataset minus the ith example and [math]\displaystyle{ h(x_i; D_{-i}) }[/math] is the classifier learned on [math]\displaystyle{ D_{-i} }[/math]. LOO error is an unbiased estimate of the error of our learning algorithm (for a given complexity setting) when given [math]\displaystyle{ n-1 }[/math] examples.

K-Fold Cross Validation

Instead of minimizing the training error, here we minimize the validation error.

A common type of cross-validation that is used for relatively small data sets is K-fold cross-validation, the algorithm for which can be stated as follows:

Let h denote a classification model to be fitted to a given data set.

  1. Randomly partition the original data set into K subsets of approximately the same size. A common choice for K is K = 10.
  2. For k = 1 to K do the following
    1. Remove subset k from the data set
    2. Estimate the parameters of each different classification model based only on the remaining data points. Denote the resulting function by h(k)
    3. Use h(k) to predict the data points in subset k. Denote by [math]\displaystyle{ \begin{align}\hat L_k(h)\end{align} }[/math] the observed error rate.
  3. Compute the average error [math]\displaystyle{ \hat L(h) = \frac{1}{K} \sum_{k=1}^{K} \hat L_k(h) }[/math]

The best classifier is the model that results in the lowest average error rate.

A common variation of k-fold cross-validation uses a single observation from the original sample as the validation data, and the remaining observations as the training data. This is then repeated such that each sample is used once for validation. It is the same as a K-fold cross-validation with K being equal to the number of points in the data set, and is referred to as leave-one-out cross-validation. <ref> stat.psu.edu/~jiali/course/stat597e/notes2/percept.pdf</ref>
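
A minimal Matlab sketch of K-fold cross-validation is shown below, using a simple nearest-class-mean rule as the classifier being evaluated; the toy data and the choice of classifier are assumptions for illustration only.

X = [randn(50,2); randn(50,2) + 2];            % toy two-class data
y = [ones(50,1); 2*ones(50,1)];
K = 10;  n = size(X,1);
folds = mod(randperm(n), K) + 1;               % randomly assign each point to one of K folds
errs = zeros(K,1);
for k = 1:K
    te = (folds == k);  tr = ~te;              % held-out fold and training folds
    mu1 = mean(X(tr & y==1, :), 1);            % "fit": class means from the training folds
    mu2 = mean(X(tr & y==2, :), 1);
    d1 = sum(bsxfun(@minus, X(te,:), mu1).^2, 2);
    d2 = sum(bsxfun(@minus, X(te,:), mu2).^2, 2);
    yhat = 1 + (d2 < d1);                      % predict the class with the nearer mean
    errs(k) = mean(yhat ~= y(te));             % observed error rate on the held-out fold
end
cvError = mean(errs)                           % average validation error over the K folds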

Alternatives to Cross Validation for model selection:

  1. Akaike Information Criterion (AIC): This approach ranks models by their AIC values. The model with the minimum AIC is chosen. The formula for the AIC value is: [math]\displaystyle{ AIC = 2k - 2log(L_{max}) }[/math], where [math]\displaystyle{ k }[/math] is the number of parameters and [math]\displaystyle{ L_{max} }[/math] is the maximum value of the likelihood function of the model. This selection method penalizes the number of parameters.<ref>http://en.wikipedia.org/wiki/Akaike_information_criterion</ref>
  2. Bayesian Information Criterion (BIC): It is similar to AIC but penalizes the number of parameters even more. The formula of BIC value is: [math]\displaystyle{ BIC = klog(n) - 2log(L) }[/math], where [math]\displaystyle{ n }[/math] is the sample size.<ref>http://en.wikipedia.org/wiki/Bayesian_information_criterion</ref>
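
As a small illustrative sketch (with made-up numbers), AIC and BIC can be computed and compared in Matlab as follows.

logL = [-120.4, -118.9, -118.7];   % maximized log-likelihood of each candidate model (made up)
k    = [3, 5, 9];                  % number of parameters in each model
n    = 200;                        % sample size
AIC  = 2*k - 2*logL;               % smaller is better
BIC  = k*log(n) - 2*logL;          % penalizes extra parameters more heavily for large n
[~, bestAIC] = min(AIC)            % index of the model preferred by AIC
[~, bestBIC] = min(BIC)            % index of the model preferred by BIC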

Model Selection Continued (Lecture: Oct. 20, 2011)

Error Bound Computation

Apart from cross validation, another approach for estimating the error rates of different models is to find a bound to the error. This works well theoretically to compare different models, however, in practice the error bounds are not a good indication of which model to pick because the error bounds are not tight. This means that the actual error observed in practice may be a lot better than what was indicated by the error bounds. This is because the error bounds indicate the worst case errors and by only comparing the error bounds of different models, the worst case performance of each model is compared, but not the overall performance under normal conditions.

Penalty Function

Another approach for model selection to avoid overfitting is to use regularization. Regularization involves adding extra information or restrictions to the problem in order to prevent overfitting. This additional information can be in the form of a function penalizing high complexity (penalty function). So in regularization, instead of minimizing the squared error alone we attempt to minimize the squared error plus a penalty function. A common penalty function is the euclidean norm of the parameter vector multiplied by some scaling parameter. The scaling parameter allows for balancing the relative importance of the two terms.
This means minimizing the following new objective function:
[math]\displaystyle{ \left|y-\hat{y}\right|^2+f(\theta) }[/math]
where [math]\displaystyle{ \ \theta }[/math] is the model complexity and [math]\displaystyle{ \ f(\theta) }[/math] is the penalty function. The penalty function should increase as the model increases in complexity; this way it counteracts the downward bias of the training error rate. There is no single optimal choice of penalty function, but any reasonable choice should increase as the complexity and the size of the estimates increase.

There is no optimal choice for the penalty function but they all seek to solve the same problem. Suppose you have models of order 1,2,...,K such that the models of class k-1 are a subset of the models in class k. An example of this is linear regression where a model of order k is the model with the first k explanatory covariates. If you do not include a penalty term and minimize the squared error alone you will always choose the largest most complex model (K). But the problem with this is the gain from including more complexity might be incredibly small. The gain in accuracy may in fact be no better than you would expect from including a covariate drawn from a N(0,1) distribution. If this is the case then clearly we don't want to include such a covariate. And in general if the increase in accuracy is below a certain level then it is preferable to stay with the simpler model. By adding a penalty term, no matter how small it is, you know at least at some point these insignificant gains in accuracy will be outweighed by increase in penalty. By effectively choosing and scaling your penalty function you can have your objective function approximate the true error as opposed to the training error.

Example: Penalty Function in Neural Network Model Selection

In MLP neural networks, the activation function is of the form of a logistic function, where the function behaves almost linearly when the input is close to zero (i.e., the weights of the neural network are close to zero), while the function behaves non-linearly as the magnitude of the input increases (i.e., the weights of the neural network become larger). In order to penalize additional model complexity (i.e., unnecessary non-linearities in the model), large weights will be penalized by the penalty function.

The objective function to minimize with respect to the weights [math]\displaystyle{ \ u_{ji} }[/math] is:

[math]\displaystyle{ \ Reg=\left|y-\hat{y}\right|^2 + \lambda\sum_{j,i}(u_{ji})^2 }[/math]. If the weights start to grow, the penalty term [math]\displaystyle{ \sum_{j,i}(u_{ji})^2 }[/math] becomes larger even though [math]\displaystyle{ \left|y-\hat{y}\right|^2 }[/math] may become smaller, so the penalty discourages unnecessarily large weights.

The derivative of the objective function with respect to the weights [math]\displaystyle{ \ u_{ji} }[/math] is:
[math]\displaystyle{ \cfrac{\partial Reg}{\partial u_{ji}} = \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}}+2*\lambda*u_{ji} }[/math]

This objective function is used during gradient descent. In practice, cross validation is used to determine the value of [math]\displaystyle{ \ \lambda }[/math] in the objective function.

We can use cross-validation to choose [math]\displaystyle{ \lambda }[/math]. For any model family, the least "complex" member is the linear model; as the penalty is relaxed, the complexity of the fitted model is gradually allowed to grow.

We want a non-linear model, but not one that is too "curvy".

Penalty Functions in Practice

In practice, we only apply the penalty function to the parametrized terms. That is, the bias term is not regularized, since it is simply the DC component and is not associated with a feature. Although this makes little difference, the concept is clear that the bias term should not be considered when determining the relative weights of the features.

In particular, we update the weights as follows:

[math]\displaystyle{ u_{ji} := \begin{cases} u_{ji} - \alpha \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}} & \text{bias term}\\ u_{ji} - \alpha \left( \cfrac{\partial \left|y-\hat{y}\right|^2}{\partial u_{ji}}+2\lambda u_{ji} \right) & \text{otherwise} \end{cases} }[/math]
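
A minimal Matlab sketch of one such regularized update for a single layer is shown below; the convention that the first column of U holds the bias weights, as well as the values of lambda, rho and the stand-in gradient, are assumptions for illustration.

lambda = 0.01;  rho = 0.05;
U    = 0.1*randn(4, 6);              % weights of one layer (bias weights in column 1 by assumption)
grad = randn(4, 6);                  % stand-in for the backpropagated gradient of |y - yhat|^2
penalty = 2*lambda*U;                % derivative of the lambda * sum of squared weights term
penalty(:, 1) = 0;                   % the bias column is not regularized
U = U - rho * (grad + penalty);      % gradient-descent step on the regularized objective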

Radial Basis Function Neural Network (RBF NN)

A Radial Basis Function Network (RBF NN) is a type of neural network with only one hidden layer, in addition to an input and an output layer. Each node within the hidden layer uses a radial basis activation function, hence the name of the RBF NN. A radial basis function is a real-valued function whose value depends only on the distance from a centre. One of the most commonly used radial basis functions is the Gaussian. The weights from the input layer to the hidden layer are always "1" in an RBF NN, while the weights from the hidden layer to the output layer are adjusted during training. The output unit implements a weighted sum of the hidden unit outputs. The mapping from the input to the hidden layer is nonlinear, while the mapping from the hidden layer to the output is linear. Due to their nonlinear approximation properties, RBF NNs are able to model complex mappings, which perceptron-based neural networks can only model by means of multiple hidden layers. An RBF NN can be trained without backpropagation since it has a closed-form solution. RBF NNs have been successfully applied to a large diversity of applications including interpolation, chaotic time series modeling, system identification, control engineering, electronic device parameter modeling, channel equalization, speech recognition, image restoration, shape-from-shading, 3-D object modeling, motion estimation and moving object segmentation, data fusion, etc. <ref>www-users.cs.york.ac.uk/adrian/Papers/Others/OSEE01.pdf</ref>

The Network System

1. Input:
n data points [math]\displaystyle{ \mathbf{x}_i\subset \mathbb{R}^d, \quad i=1,...,n }[/math]
2. Basis function (the single hidden layer):
[math]\displaystyle{ \mathbf{\Phi}_{n\times m} }[/math], where [math]\displaystyle{ m }[/math] is the number of neurons/basis functions that project the original data points into a new space.
There are many choices for the basis function. The most commonly used is the radial basis:
[math]\displaystyle{ \phi_j(\mathbf{x}_i)=e^{-|\mathbf{x}_i-\mathbf{\mu}_j|^2} }[/math]
3. Weights associated with the last layer: [math]\displaystyle{ \mathbf{W}_{m\times k} }[/math], where k is the number of classes in the output [math]\displaystyle{ \mathbf{Y} }[/math].
4. Output: [math]\displaystyle{ \mathbf{Y} }[/math], where
[math]\displaystyle{ y_k(x)=\sum_{j=1}^{m}(W_{jk}*\phi_j(x)) }[/math]
Alternatively, the output [math]\displaystyle{ \mathbf{Y} }[/math] can be written as [math]\displaystyle{ Y=\Phi W }[/math]

where

[math]\displaystyle{ \hat{Y}_{n,k} = \left[ \begin{matrix} \hat{y}_{1,1} & \hat{y}_{1,2} & \cdots & \hat{y}_{1,k} \\ \hat{y}_{2,1} & \hat{y}_{2,2} & \cdots & \hat{y}_{2,k} \\ \vdots &\vdots & \ddots & \vdots \\ \hat{y}_{n,1} & \hat{y}_{n,2} & \cdots & \hat{y}_{n,k} \end{matrix}\right] }[/math] is the matrix of output variables.
[math]\displaystyle{ \Phi_{n,m} = \left[ \begin{matrix} \phi_{1}(\mathbf{x}_1) & \phi_{2}(\mathbf{x}_1) & \cdots & \phi_{m}(\mathbf{x}_1) \\ \phi_{1}(\mathbf{x}_2) & \phi_{2}(\mathbf{x}_2) & \cdots & \phi_{m}(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{1}(\mathbf{x}_n) & \phi_{2}(\mathbf{x}_n) & \cdots & \phi_{m}(\mathbf{x}_n) \end{matrix}\right] }[/math] is the matrix of Radial Basis Functions.
[math]\displaystyle{ W_{m,k} = \left[ \begin{matrix} w_{1,1} & w_{1,2} & \cdots & w_{1,k} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ w_{m,1} & w_{m,2} & \cdots & w_{m,k} \end{matrix}\right] }[/math] is the matrix of weights.

Here, [math]\displaystyle{ k }[/math] is the number of outputs, [math]\displaystyle{ n }[/math] is the number of data points, and [math]\displaystyle{ m }[/math] is the number of hidden units. If [math]\displaystyle{ k = 1 }[/math], [math]\displaystyle{ \hat Y }[/math] and [math]\displaystyle{ W }[/math] are column vectors. If m = n, then [math]\displaystyle{ \mathbf{\mu}_i = \mathbf{x}_i }[/math], so [math]\displaystyle{ \phi_{i} }[/math] checks to see how similar the two data points are.

[math]\displaystyle{ Y=\Phi W }[/math], where [math]\displaystyle{ Y }[/math] and [math]\displaystyle{ \Phi }[/math] are known while [math]\displaystyle{ W }[/math] is unknown. The objective function is [math]\displaystyle{ \psi=|Y-\Phi W|^2 }[/math] and we want to [math]\displaystyle{ \underset{W}{\mbox{min}} |Y-\Phi W|^2 }[/math]. Therefore, the optimal weight is [math]\displaystyle{ W=(\Phi^T \Phi)^{-1}\Phi^TY }[/math]

Network Training

To construct m basis functions, first cluster data points into m groups. Then find the centre of each cluster [math]\displaystyle{ \mu_1 }[/math] to [math]\displaystyle{ \mu_m }[/math].

Clustering: the K-means algorithm <ref>This section is taken from Wikicourse notes stat441/841 fall 2010.</ref>
K-means is a commonly applied technique for clustering observations into groups by minimizing the distance of each observation from the centre of the cluster it belongs to. The most commonly used K-means algorithm is referred to as Lloyd's algorithm:

  1. Select the number of clusters m.
  2. Randomly select m observations from the n observations to be used as the m initial centres. (Alternative: randomly assign all data points to clusters and use the means of those clusters as the initial centres.)
  3. For each data point among the remaining observations, compute the distance to each of the current centres and assign it to the cluster with the minimum distance.
  4. Obtain updated cluster centres by computing the mean of all the observations in the corresponding clusters.
  5. Repeat Step 3 and Step 4 until the differences between the old cluster centres and the new cluster centres are acceptably small.

Note: K-means can be sensitive to the initially selected centres, so it may be useful to run K-means repeatedly with different initializations and use prior knowledge to select the best clustering.
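A minimal Matlab sketch of Lloyd's algorithm as described above (illustrative only; in practice the Statistics Toolbox function kmeans can be used instead):

X = randn(100, 2);                        % toy data, one row per observation
m = 3;                                    % number of clusters
mu = X(randperm(size(X,1), m), :);        % Steps 1-2: m random observations as initial centres
for iter = 1:100
    % Step 3: assign each point to the nearest centre
    D = zeros(size(X,1), m);
    for j = 1:m
        D(:,j) = sum(bsxfun(@minus, X, mu(j,:)).^2, 2);
    end
    [~, label] = min(D, [], 2);
    % Step 4: recompute each centre as the mean of its cluster
    mu_new = mu;
    for j = 1:m
        if any(label == j)
            mu_new(j,:) = mean(X(label == j, :), 1);
        end
    end
    % Step 5: stop when the centres no longer move appreciably
    if max(abs(mu_new(:) - mu(:))) < 1e-8
        mu = mu_new;  break;
    end
    mu = mu_new;
end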

Having constructed the basis functions, next minimize the objective function with respect to [math]\displaystyle{ \mathbf{W} }[/math]:
[math]\displaystyle{ \underset{W}{\mbox{min}} \; \left\| Y-\Phi W\right\|_2^{2} }[/math]

The solution to the problem is [math]\displaystyle{ \ W=(\Phi^T \Phi)^{-1}\Phi^T Y }[/math]
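Continuing the k-means sketch above (with X and the centres mu available), a minimal Matlab sketch of this least-squares step is as follows (the target vector here is an illustrative toy label, not part of the lecture):

Y = double(X(:,1) + X(:,2) > 0);          % toy target vector (class indicators in practice)
n = size(X,1);  m = size(mu,1);
Phi = zeros(n, m);
for j = 1:m
    Phi(:,j) = exp(-sum(bsxfun(@minus, X, mu(j,:)).^2, 2));   % Gaussian basis
end
W    = Phi \ Y;                           % least squares, same as (Phi'*Phi)\(Phi'*Y)
Yhat = Phi * W;                           % fitted outputs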

Matlab example:

clear all;
clc;
load ionosphere.mat;                       % data set with features and a label column
P = ionosphere(:,1:(end-1));               % inputs: all columns except the last
P = P';                                    % the toolbox expects one column per sample
T = ionosphere(:,end);                     % targets: the last column
T = T';
% Build a feedforward network (Neural Network Toolbox) with 4 hidden 'logsig'
% units and one 'purelin' output, trained with Levenberg-Marquardt ('trainlm').
% (For radial basis networks the toolbox also provides newrb.)
net = newff(minmax(P),[4,1],{'logsig','purelin'},'trainlm');
net.trainParam.show = 100;                 % show progress every 100 epochs
net.trainParam.mc = 0.9;                   % momentum constant
net.trainParam.mu = 0.05;                  % initial Levenberg-Marquardt mu
net.trainParam.mu_dec = 0.1;
net.trainParam.mu_inc = 5;
net.trainParam.lr = 0.5;                   % learning rate
net.trainParam.goal = 0.01;                % target mean squared error
net.trainParam.epochs = 5000;              % maximum number of epochs
net.trainParam.max_fail = 10;              % maximum validation failures
net.trainParam.min_grad = 1e-20;
net.trainParam.mem_reduc = 2;
net.trainParam.alpha = 0.1;
net.trainParam.delt_inc = 1;
net.trainParam.delt_dec = 0.1;
net = init(net);                           % (re)initialize the weights
[net,tr] = train(net,P,T);                 % train the network
A = sim(net,P);                            % network outputs on the training set
E = T - A;                                 % training errors
disp('the training error:')
MSE = mse(E)

Single Basis Function vs. Multiple Basis Functions

Suppose the data points belong to a mixture of Gaussian distributions.

Under the single basis function approach, every class in [math]\displaystyle{ \mathbf{Y} }[/math] is represented by a single basis function. This approach is similar to that of linear discriminant analysis.

Compare [math]\displaystyle{ y_k(x)=\sum_{j=1}^{m}(W_{jk}*\phi_j(x)) }[/math]
with [math]\displaystyle{ P(Y|X)=\frac{P(X|Y)*P(Y)}{P(X)} }[/math].
Here, the basis function [math]\displaystyle{ \mathbf{\phi}_{j} }[/math] can be thought of as equivalent to [math]\displaystyle{ \frac{P(X|Y)}{P(X)} }[/math].

Under the multiple basis function approach, a layer of basis functions (indexed by [math]\displaystyle{ \mathbf{J} }[/math]) is placed between [math]\displaystyle{ \mathbf{Y} }[/math] and [math]\displaystyle{ \mathbf{X} }[/math]. The probability function of the joint distribution of [math]\displaystyle{ \mathbf{X} }[/math], [math]\displaystyle{ \mathbf{J} }[/math] and [math]\displaystyle{ \mathbf{Y} }[/math] is

[math]\displaystyle{ \,P(X,J,Y)=P(Y)*P(J|Y)*P(X|J) }[/math]

Here, instead of using a single Gaussian to represent each class, we use a "mixture of Gaussians".
The probability function of [math]\displaystyle{ \mathbf{Y} }[/math] conditional on [math]\displaystyle{ \mathbf{X} }[/math] is

[math]\displaystyle{ P(Y|X)=\frac{P(X,Y)}{P(X)}=\frac{\sum_{j}{P(X,J,Y)}}{P(X)} }[/math]

Multiplying both the numerator and the denominator by [math]\displaystyle{ \ P(J) }[/math] yields

[math]\displaystyle{ \ P(Y|X)=\sum_{j}{P(J|X)*P(Y|J)} }[/math]
where [math]\displaystyle{ \ P(J|X) }[/math] tells us, given the data [math]\displaystyle{ X }[/math], how likely it is that the data came from Gaussian [math]\displaystyle{ J }[/math], and [math]\displaystyle{ \ P(Y|J) }[/math] tells us, given Gaussian [math]\displaystyle{ J }[/math], how likely it is that this Gaussian belongs to class [math]\displaystyle{ Y }[/math].


since
[math]\displaystyle{ \ P(J|X)=\frac{P(X|J)*P(J)}{P(X)} }[/math] and [math]\displaystyle{ \ P(Y|J)=\frac{P(J|Y)*P(Y)}{P(J)} }[/math]

If the weights in the radial basis neural network are constrained to behave like probabilities, then the basis function [math]\displaystyle{ \mathbf{\phi}_j }[/math] can be thought of as [math]\displaystyle{ \ P(J|X) }[/math], representing the probability that [math]\displaystyle{ \mathbf{x} }[/math] is in Gaussian class j; and the weight function W can be thought of as [math]\displaystyle{ \ P(Y|J) }[/math], representing the probability that a data point belongs to class k given that the point is from Gaussian class j.

In conclusion, given a mixture of Gaussian distributions, the multiple basis function approach is better than the single basis function approach, since the former produces a non-linear decision boundary.

RBF Network Complexity Control (Lecture: Oct. 25, 2011)

When performing model selection, overfitting is a common issue. As model complexity increases, there comes a point where the model becomes worse and worse at fitting real data even though it fits the training data better. It becomes too sensitive to small perturbations in the training data that should be treated as noise to allow flexibility in the general case. In this section we will show that training error (empirical error from the training data) is a poor estimator for true error and that minimizing training error will increase complexity and result in overfitting. We will show that test error (empirical error from the test data) is a better estimator of true error. This will be done by estimating a model [math]\displaystyle{ \hat f }[/math] given training data [math]\displaystyle{ T={(x_i,y_i)}^n_{i=1} }[/math].


First, some notation is defined.

The assumption for the training data set is that it consists of the true model values [math]\displaystyle{ \ f(x_i) }[/math] plus some additive Gaussian noise [math]\displaystyle{ \ \epsilon_i }[/math]:

[math]\displaystyle{ \ y_i = f(x_i)+\epsilon_i }[/math] where [math]\displaystyle{ \ \epsilon \sim N(0,\sigma^2) }[/math]

[math]\displaystyle{ \ y_i = \text{true model} + \text{noise} }[/math]

Important Notation

Let:

  • [math]\displaystyle{ \displaystyle f(x) }[/math] denote the true model.
  • [math]\displaystyle{ \hat f(x) }[/math] denote the prediction/estimated model, which is generated from a training data set [math]\displaystyle{ \displaystyle T = \{(x_i, y_i)\}^n_{i=1} }[/math] (the observed [math]\displaystyle{ y_i }[/math] are noisy).

Remark: [math]\displaystyle{ \hat f(x_i) = \hat y_i }[/math].

  • [math]\displaystyle{ \displaystyle err }[/math] denote the empirical error based on actual data points. This can be either test error or training error depending on the data points used, and is based on the squared differences [math]\displaystyle{ (y-\hat{y})^2 }[/math].
  • [math]\displaystyle{ \displaystyle Err }[/math] denote the true error or generalization error, which is what we are trying to minimize. It is based on the squared differences [math]\displaystyle{ (f-\hat{f})^2 }[/math].
  • [math]\displaystyle{ \displaystyle MSE=E[(\hat f(x)-f(x))^2] }[/math] denote the mean squared error.

We use the training data to estimate our model parameters.

[math]\displaystyle{ T=\{(x_i,y_i)\}_{i=1}^n }[/math]


For a given point [math]\displaystyle{ y_0 }[/math], the expectation of the empirical error is:

[math]\displaystyle{ \begin{align} E[(\hat{y_0}- y_0)^2] &= E[(\hat{f_0}- f_0 -\epsilon_0)^2] \\ &=E[(\hat{f_0}-f_0)^2 + \epsilon_0^2 - 2 \epsilon_0 (\hat{f_0}-f_0)] \\ &=E[(\hat{f_0}-f_0)^2] + E[\epsilon_0^2] - 2 E [ \epsilon_0 (\hat{f_0}-f_0)] \\ &=E[(\hat{f_0}-f_0)^2] + \sigma^2 - 2 E [ \epsilon_0 (\hat{f_0}-f_0)] \end{align} }[/math]

This formula partitions the empirical error into the true error plus other error terms. Our goal is to select the model that minimizes the true error, so we must try to understand the effects of these other error terms if we are to use the empirical error as an estimate of the true error.

The first term is essentially true error. The second term is a constant. The third term is problematic, since in general this expectation is not 0. We will break this into 2 cases to simplify the third term.

Case 1: Estimating Error using Data Points from Test Set

In Case 1, the empirical error is test error and the data points used to calculate test error are from the test set, not the training set. That is, [math]\displaystyle{ y_0 \notin T }[/math].

We can rewrite the third term in the following way, since both [math]\displaystyle{ y_0 }[/math] and [math]\displaystyle{ \hat{f_0} }[/math] have expectation [math]\displaystyle{ f_0 }[/math], the true value, which is a constant and not random.

[math]\displaystyle{ \begin{align} E [ \epsilon_0 (\hat{f_0}-f_0)] &= E [ (y_0-f_0) (\hat{f_0}-f_0)] \\ & = cov{(y_0,\hat{f_0})} \end{align} }[/math]

(Covariance appears here because [math]\displaystyle{ \displaystyle y_0 }[/math] is a new point, so [math]\displaystyle{ \hat f }[/math] and [math]\displaystyle{ \displaystyle y_0 }[/math] are independent.)

Here [math]\displaystyle{ \ f_0 }[/math] plays the role of the mean of [math]\displaystyle{ \ y_0 }[/math].

Since [math]\displaystyle{ y_0 }[/math] is not part of the training set, it is independent of the model [math]\displaystyle{ \hat{f_0} }[/math] generated by the training set. Therefore,

[math]\displaystyle{ y_0 \notin T \to y_0 \perp \hat{f} }[/math]

[math]\displaystyle{ \ cov{(y_0,\hat{f}_0)}=0 }[/math]


The equation for the expectation of empirical error simplifies to the following:

[math]\displaystyle{ E[(y_0-\hat{y_0})^2] = E[(f_0-\hat{f_0})^2] + \sigma^2 }[/math]


This result applies to every output value in the test data set, so we can generalize this equation by summing over all m data points that have NOT been seen by the model:

[math]\displaystyle{ \begin{align} \sum_{i=1}^m{(y_i-\hat{y_i})^2} &= \sum_{i=1}^m{(f_i-\hat{f_i})^2} + m \sigma^2 \\ err &= Err + m \sigma^2 \\ & = Err + \text{constant}\\ \end{align} }[/math]

Rearranging to solve for true error, we get

[math]\displaystyle{ \ Err = err - m \sigma^2 }[/math]

We see that the test error is a good estimator of the true error up to an additive constant, since they differ only by a constant. Minimizing the test error is therefore equivalent to minimizing the true error. Moreover, the true error is less than the empirical error, and there is no term that rewards unnecessary complexity. This is the justification for cross-validation.

To avoid over-fitting or under-fitting when using cross-validation, the validation data set is selected so that it is independent of the estimated model.

Case 2: Estimating Error using Data Points from Training Set

In Case 2, the data points used to calculate error are from the training set, so [math]\displaystyle{ \ y_0 \in T }[/math], i.e. [math]\displaystyle{ \ (x_i, y_i) }[/math] is in the training set. We will show that this results in a worse estimator for true error.

Now [math]\displaystyle{ \ y_0 }[/math] has been used to estimate [math]\displaystyle{ \ \hat{f} }[/math] so they are not independent. We use Stein's lemma to simplify the term [math]\displaystyle{ \ E[\epsilon_0 (\hat{f_0} - f_0)] }[/math].

Stein's Lemma states that if [math]\displaystyle{ \ x \sim N(\theta,\sigma^2) }[/math] and [math]\displaystyle{ \ g(x) }[/math] is differentiable, then

[math]\displaystyle{ E\left[g(x) (x - \theta)\right] = \sigma^2 E \left[ \frac{\partial g(x)}{\partial x} \right] }[/math]

Substitute [math]\displaystyle{ \ \epsilon_0 }[/math] for [math]\displaystyle{ \ x }[/math] and [math]\displaystyle{ \ (\hat{f_0}-f_0) }[/math] for [math]\displaystyle{ \ g(x) }[/math]. Note that [math]\displaystyle{ \ \hat{f_0} }[/math] is a function of the noise, since as noise changes, [math]\displaystyle{ \hat{f_0} }[/math] will change. Using Stein's Lemma, we get:

[math]\displaystyle{ \begin{align} E[\epsilon_0 (\hat{f_0}-f_0)] &= \sigma^2 E \left[ \frac{\partial (\hat{f_0}-f_0)}{\partial \epsilon_0} \right]\\ &=\sigma^2 E\left[\frac{\partial \hat{f_0}}{\partial \epsilon_0}\right]\\ &=\sigma^2 E\left[\frac{\partial \hat{f_0}}{\partial y_0}\right]\\ &=\sigma^2 E\left[D_0\right] \end{align} }[/math]


Remark: since [math]\displaystyle{ \ y_0 = f_0+\epsilon_0 }[/math] and the true value [math]\displaystyle{ \ f_0 }[/math] is a constant (not a function of the noise), we have [math]\displaystyle{ \frac{\partial y_0}{\partial \epsilon_0} = 1 }[/math] and [math]\displaystyle{ \frac{\partial f_0}{\partial \epsilon_0} = 0 }[/math], so

[math]\displaystyle{ \frac{\partial (\hat{f_0} - f_0)}{\partial \epsilon_0} = \frac{\partial \hat{f_0}}{\partial \epsilon_0} = \frac{\partial \hat{f_0}}{\partial y_0} \cdot \frac{\partial y_0}{\partial \epsilon_0} = \frac{\partial \hat{f_0}}{\partial y_0} }[/math]


We take [math]\displaystyle{ \ D_0 = \frac{\partial \hat{f_0}}{\partial y_0} }[/math], where [math]\displaystyle{ \ D_0 }[/math] represents the derivative of the fitted model with respect to the observations. The equation for the expectation of empirical error becomes:

[math]\displaystyle{ E[(y_0-\hat{y_0})^2] = E[(f_0-\hat{f_0})^2] + \sigma^2 - 2 \sigma^2 E[D_0] }[/math]

Generalizing the equation for all n data points in the training set:

[math]\displaystyle{ \sum_{i=1}^n{(y_i-\hat{y_i})^2} = \sum_{i=1}^n{(f_i-\hat{f_i})^2} + n \sigma^2 - 2 \sigma^2 \sum_{i=1}^n{D_i} }[/math]

Based on the notation defined above, we then have:

[math]\displaystyle{ err = Err + n \sigma^2 - 2 \sigma^2 \sum_{i=1}^n{D_i} }[/math]

[math]\displaystyle{ Err = err - n \sigma^2 + 2 \sigma^2 \sum_{i=1}^n{D_i} }[/math]

This equation for the true error is called Stein's unbiased risk estimator (SURE). It is an unbiased estimator of the mean-squared error of a given estimator, in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter and thus cannot be determined completely.

Note that [math]\displaystyle{ \ D_i }[/math] depends on complexity of the model. It measures how sensitive the model is to small perturbations in a single [math]\displaystyle{ \ y_i }[/math] in the training set. As complexity increases, the model will try to chase every little change and will be more sensitive to such perturbations. Minimizing training error without accounting for the impact of this term will result in overfitting. Thus, we need to know how to find [math]\displaystyle{ \ D_i }[/math]. Below we show an example, applying SURE to RBFs, where computing [math]\displaystyle{ \ D_i }[/math] is straightforward.

SURE for RBF Network Complexity Control

Problem: Assuming we want to fit our data using a radial basis function network, how many radial basis functions should be used? The network size has to trade off the approximation quality, which usually improves as the network grows, against the training effort, which increases with the network size. Moreover, overly complex models can show poor generalization (overfitting), which argues for small networks. Furthermore, in terms of hardware or software realization, smaller networks occupy less area due to reduced memory needs. Hence, controlling the network size is one major task during training. For further information about RBF network complexity control check [5]

We can use Stein's unbiased risk estimator (SURE) to give us an approximation for how many RBFs to use.

The SURE equation is

[math]\displaystyle{ \mbox{Err}=\mbox{err} - n\sigma^2 + 2\sigma^2\sum_{i=1}^n D_i }[/math]

where [math]\displaystyle{ \ Err }[/math] is the true error, [math]\displaystyle{ \ err }[/math] is the empirical error, [math]\displaystyle{ \ n }[/math] is the number of training samples, [math]\displaystyle{ \ \sigma^2 }[/math] is the variance of the noise of the training samples, and [math]\displaystyle{ \ D_i }[/math] is the derivative of the model output with respect to the observed output, as shown below

[math]\displaystyle{ D_i=\frac{\partial \hat{f_i}}{\partial y_i} }[/math]

Optimal Number of Basis in RBF

The number of basis functions should be chosen so as to minimize the estimated true (generalization) error [math]\displaystyle{ \ Err }[/math].

The formula for an RBF network is:

[math]\displaystyle{ \hat{f}=\Phi W }[/math]

where [math]\displaystyle{ \ \hat{f} }[/math] is a matrix of RBFN outputs for each training sample, [math]\displaystyle{ \ \Phi }[/math] is the matrix of neuron outputs for each training sample, and [math]\displaystyle{ \ W }[/math] is the weight vector between each neuron and the output. Suppose we have m + 1 neurons in the network, where one has a constant function.

Given the training labels [math]\displaystyle{ \ Y }[/math] we define the empirical error and minimize it

[math]\displaystyle{ \underset{W}{\mbox{min}} |Y-\Phi W|^2 }[/math]

[math]\displaystyle{ \, W=(\Phi^T \Phi)^{-1} \Phi^T Y }[/math]

[math]\displaystyle{ \hat{f}=\Phi(\Phi^T \Phi)^{-1} \Phi^T Y }[/math]


For simplification let [math]\displaystyle{ \ H }[/math] be the hat matrix defined as

[math]\displaystyle{ \, H=\Phi(\Phi^T \Phi)^{-1} \Phi^T }[/math]

Our optimal output then becomes

[math]\displaystyle{ \hat{f}=H Y }[/math]

We now calculate [math]\displaystyle{ D }[/math] in the SURE equation for this model. Setting [math]\displaystyle{ \frac{\partial err}{\partial W} }[/math] equal to zero gives the least squares solution [math]\displaystyle{ \ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y }[/math] obtained above, so the fitted values are [math]\displaystyle{ \hat{Y} = \hat{f} = \Phi W = HY }[/math], where [math]\displaystyle{ \ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T} }[/math] is the hat matrix for this model.


Consider a single fitted value. In this case we can write: [math]\displaystyle{ \hat f_i=\,H_{i1}y_1+\,H_{i2}y_2+\cdots+\,H_{ii}y_i+\cdots+\,H_{in}y_n }[/math].

Note here that [math]\displaystyle{ \,H }[/math] depends on the input vector [math]\displaystyle{ \displaystyle x_i }[/math] but not on the observation [math]\displaystyle{ \displaystyle y_i }[/math].

Since [math]\displaystyle{ \hat f_i=\sum_{j}\,H_{ij}\,y_j }[/math], taking the derivative of [math]\displaystyle{ \ \hat f_i }[/math] with respect to [math]\displaystyle{ \displaystyle y_i }[/math], we readily obtain:

[math]\displaystyle{ D_i= \frac{\partial \hat f_i}{\partial y_i}= \,H_{ii} }[/math], and therefore [math]\displaystyle{ \sum_{i=1}^n D_i = \sum_{i=1}^n \frac {\partial \hat f_i}{\partial y_i}=\sum_{i=1}^n \,H_{ii} }[/math]


Here we recall that [math]\displaystyle{ \sum_{i=1}^n\,D_{i}= \sum_{i=1}^n \,H_{ii}= \,Trace(H) }[/math], the sum of the diagonal elements of [math]\displaystyle{ \,H }[/math]. Using the permutation property of the trace function we can further simplify the expression as follows: [math]\displaystyle{ \,Trace(H)= Trace(\Phi(\Phi^{T}\Phi)^{-1}\Phi^{T})= Trace(\Phi^{T}\Phi(\Phi^{T}\Phi)^{-1})=m }[/math], by the trace cyclical permutation property, where [math]\displaystyle{ \displaystyle m }[/math] is the number of basis functions in the RBF network (and hence [math]\displaystyle{ \displaystyle \Phi }[/math] has dimension [math]\displaystyle{ \displaystyle n \times m }[/math]).

Sketch of Trace Cyclical Property Proof:

For [math]\displaystyle{ \, A_{mn}, B_{nm}, Tr(AB) = \sum_{i=1}^{n}\sum_{j=1}^{m}A_{ij}B_{ji} = \sum_{j=1}^{m}\sum_{i=1}^{n}B_{ji}A_{ij} = Tr(BA) }[/math].
With that in mind, for [math]\displaystyle{ \, A_{nn}, B_{nn} = CD, Tr(AB) = Tr(ACD) = Tr(BA) }[/math] (from above) [math]\displaystyle{ \, = Tr(CDA) }[/math].

Note that [math]\displaystyle{ \displaystyle \Phi }[/math] is a projection of the input matrix [math]\displaystyle{ \,X }[/math] onto a set spanned by the [math]\displaystyle{ \,m }[/math] basis functions; sometimes an extra constant column [math]\displaystyle{ \displaystyle \Phi_0 }[/math] (with no input) is included to represent the intercept of the fitted model. If an intercept is included in this way, then [math]\displaystyle{ \,Trace(H)= m+1 }[/math].
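The identity [math]\displaystyle{ \,Trace(H)=m }[/math] is easy to verify numerically; a tiny illustrative Matlab check (not from the lecture) is:

n = 50;  m = 7;
Phi = randn(n, m);                       % any full-rank n-by-m design matrix
H   = Phi * ((Phi' * Phi) \ Phi');       % hat matrix
disp(trace(H))                           % prints 7, up to rounding error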


The SURE equation then becomes

[math]\displaystyle{ \, \mbox{Err}=\mbox{err} - n\sigma^2 + 2\sigma^2(m+1) }[/math]

As the number of RBFs [math]\displaystyle{ \ m }[/math] increases the empirical error [math]\displaystyle{ \ err }[/math] decreases, but the right term of the SURE equation increases. An optimal true error [math]\displaystyle{ \ Err }[/math] can be found by increasing [math]\displaystyle{ \ m }[/math] until [math]\displaystyle{ \ Err }[/math] begins to grow. At that point the estimate to the minimum true error has been reached.

The value of m that gives the minimum true error estimate is the optimal number of basis functions to be implemented in the RBF network, and hence is also the optimal degree of complexity of the model.

One way to estimate the noise variance is

[math]\displaystyle{ \hat{\sigma}^2=\frac{\sum (y-\hat{y})^2}{n-1} }[/math]

This application of SURE is straightforward because minimizing Radial Basis Function error reduces to a simple least squares estimator problem with a linear solution. This makes computing [math]\displaystyle{ \ D_i }[/math] quite simple. In general, [math]\displaystyle{ \ D_i }[/math] can be much more difficult to solve for.
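A minimal Matlab sketch of this model-selection loop (illustrative toy data; kmeans is the Statistics Toolbox clustering routine, and the noise variance is assumed known or estimated separately):

rng(0);
n = 200;
x = linspace(-3, 3, n)';
y = sin(2*x) + 0.2*randn(n,1);              % toy data: true model plus noise
sigma2 = 0.2^2;                             % assumed/estimated noise variance

bestErr = inf;  bestm = 0;
for m = 1:20
    [~, mu] = kmeans(x, m);                 % centres for m basis functions
    D   = bsxfun(@minus, x, mu');           % n-by-m matrix of differences
    Phi = [ones(n,1), exp(-D.^2)];          % m Gaussian bases plus an intercept
    W   = Phi \ y;                          % least-squares weights
    err = sum((y - Phi*W).^2);              % empirical (training) error
    Err = err - n*sigma2 + 2*sigma2*(m+1);  % SURE estimate of the true error
    if Err < bestErr, bestErr = Err; bestm = m; end
end
fprintf('SURE-selected number of basis functions: %d\n', bestm);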

RBF Network Complexity Control (Alternate Approach)

An alternate approach (not covered in class) to tackling RBF Network complexity control is controlling the complexity by similarity <ref name="Eickhoff">R. Eickhoff and U. Rueckert, "Controlling complexity of RBF networks by similarity," Proceedings of European Symposium on Artificial Neural Networks, 2007</ref>. In <ref name="Eickhoff" />, the authors suggest looking at the similarity between the basis functions multiplied by their weight by determining the cross-correlations between the functions. The cross-correlation is calculated as follows:

[math]\displaystyle{ \ \rho_{ij} = \frac{E[g_i(x)g_j(x)]}{\sqrt{E[g^2_i(x)]\,E[g^2_j(x)]}} }[/math]

where [math]\displaystyle{ \ E[] }[/math] denotes the expectation and [math]\displaystyle{ \ g_i(x) }[/math] and [math]\displaystyle{ \ g_j(x) }[/math] would denote two of the basis functions multiplied by their respective weights.

If the cross-correlation between two functions is high, <ref name="Eickhoff" /> suggests that the two basis functions be replaced with one basis function that covers the same region of both basis functions and that the corresponding weight of this new basis function be the average of the weights of the two basis functions. For the case of Gaussian radial basis functions, the equations for finding the new weight ([math]\displaystyle{ \ w_{new} }[/math]), mean ([math]\displaystyle{ \ c_{new} }[/math]) and variance ([math]\displaystyle{ \ \sigma_{new} }[/math]) are as follows:

[math]\displaystyle{ \ w_{new} = \frac{w_i + w_j}{2} }[/math]

[math]\displaystyle{ \ c_{new} = \frac{1}{w_i \sigma^n_i + w_j \sigma^n_j}(w_i \sigma^n_i c_i + w_j \sigma^n_j c_j) }[/math]

[math]\displaystyle{ \ \sigma^2_{new} = \left(\frac{\sigma_i + \sigma_j}{2}+ \frac{min(||m-c_i||,||m-c_j||)}{2}\right)^2 }[/math]

where [math]\displaystyle{ \ n }[/math] denotes the input dimension and [math]\displaystyle{ \ m }[/math] denotes the total number of radial basis functions.

This process is repeated until the cross-correlation between the basis functions falls below a certain threshold, which is a tunable parameter.
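A minimal Matlab sketch of the correlation step (illustrative only, not the paper's code; the expectations are estimated by sample averages over toy inputs, and the merging formulas above would then be applied to the flagged pairs):

x  = randn(200, 2);                       % toy inputs
mu = randn(10, 2);                        % toy centres
w  = randn(10, 1);                        % toy output weights
n = size(x,1);  m = size(mu,1);
G = zeros(n, m);
for j = 1:m
    G(:,j) = w(j) * exp(-sum(bsxfun(@minus, x, mu(j,:)).^2, 2));   % g_j(x) = w_j*phi_j(x)
end
rho = (G' * G) ./ sqrt((sum(G.^2))' * sum(G.^2));   % sample estimate of rho_ij
threshold = 0.95;
[I, J] = find(triu(rho, 1) > threshold);            % highly correlated pairs (i < j)
disp([I J])                                         % candidate pairs to merge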

Note 1) Though not extensively discussed in <ref name="Eickhoff" />, this approach to RBF Network complexity control presumably requires a starting RBF Network with a large number of basis functions.

Note 2) This approach does not require the repeated implementation of differently sized RBF Networks to determine the empirical error, unlike the approach using SURE. However, the SURE approach is backed up by theory to find the number of radial basis functions that optimizes the true error and does not rely on some tunable threshold. It would be interesting to compare the results of both approaches (in terms of the resulting RBF Network obtained and the test error).


Generalized SURE for Exponential Families

As noted above, Stein’s unbiased risk estimate (SURE) is limited to the independent, identically distributed (i.i.d.) Gaussian model. However, in some recent work, researchers have derived SURE counterparts for more general noise models, in particular exponential families, which extends the applicability of SURE to a wider range of problems.

You may look at Yonina C. Eldar, Generalized SURE for Exponential Families: Applications to Regularization, IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 57, NO. 2, FEBRUARY 2009 for more information.

Further Reading

Fully Tuned Radial Basis Function Neural Networks for Flight Control <ref> http://www.springer.com/physics/complexity/book/978-0-7923-7518-0;jsessionid=985F21372AC7AE1B654F1EADD11B296F.node3 </ref>

Paper about the RBFN for multi-task learning <ref>http://books.nips.cc/papers/files/nips18/NIPS2005_0628.pdf</ref>

Radial Basis Function (RBF) Networks <ref>http://documents.wolfram.com/applications/neuralnetworks/index6.html</ref>

An Example of RBF Networks <ref>http://reference.wolfram.com/applications/neuralnetworks/ApplicationExamples/12.1.2.html</ref>

This paper suggests an objective approach in determining proper samples to find good RBF networks with respect to accuracy <ref>http://www.wseas.us/e-library/conferences/2009/hangzhou/MUSP/MUSP41.pdf</ref>.

Support Vector Machines (Lecture: Oct. 27, 2011)

A series of linear classifiers; H2 represents an SVM, which attempts to maximize the margin, i.e. the distance between the closest point in each class and the linear classifier.

Support vector machines (SVMs), also referred to as max-margin classifiers, are learning systems that use a hypothesis space of linear functions in a high dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. SVMs are kernel machines based on the principle of structural risk minimization, which are used in applications of regression and classification; however, they are mostly used as binary classifiers. Although the subject can be said to have started in the late seventies (Vapnik, 1979), it has recently been receiving increasing attention from researchers. It is such a powerful method that in the few years since its introduction it has outperformed most other systems in a wide variety of applications, especially in pattern recognition.

The current standard incarnation of SVM is known as "soft margin" and was proposed by Corinna Cortes and Vladimir Vapnik [6]. In practice the data is not usually linearly separable. Although theoretically we can make the data linearly separable by mapping it into higher dimensions, the issues of how to obtain the mapping and how to avoid overfitting are still of concern. A more practical approach to classifying non-linearly separable data is to add some error tolerance to the separating hyperplane between the two classes, meaning that a data point in class A can cross the separating hyperplane into class B by a certain specified distance. This more generalized version of SVM is the so-called "soft margin" support vector machine and is generally accepted as the standard form of SVM over the hard margin case in practice today. [7]

Support Vector Machines are motivated by the idea of training linear machines with margins. It involves preprocessing the data to represent patterns in a high dimension (generally much higher than the original feature space). Note that using a suitable non-linear mapping to a sufficiently high dimensional space, the data will always be separable. (p. 263) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>

A suitable way to describe the interest in SVM can be seen in the following quote: "The problem which drove the initial development of SVMs occurs in several guises - the bias variance tradeoff (Geman, Bienenstock and Doursat, 1992), capacity control (Guyon et al., 1992), overfitting (Montgomery and Peck, 1992) - but the basic idea is the same. Roughly speaking, for a given learning task, with a given finite amount of training data, the best generalization performance will be achieved if the right balance is struck between the accuracy attained on that particular training set, and the “capacity” of the machine, that is, the ability of the machine to learn any training set without error. A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist’s lazy brother, who declares that if it’s green, it’s a tree. Neither can generalize well. The exploration and formalization of these concepts has resulted in one of the shining peaks of the theory of statistical learning (Vapnik, 1979)." (From "A Tutorial on Support Vector Machines for Pattern Recognition".)

Support Vector Method Solving Real-world Problems

No matter whether the training data are linearly-separable or not, the linear boundary produced by any of the versions of SVM is calculated using only a small fraction of the training data rather than using all of the training data points. This is much like the difference between the median and the mean.

SVM can also be considered a special case of Tikhonov regularization. A special property is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum margin classifiers. The key features of SVM are the use of kernels, the absence of local minima, the sparseness of the solution (i.e. few training data points are needed to construct the linear decision boundary) and the capacity control obtained by optimizing the margin.(Shawe-Taylor and Cristianini (2004)).

Another key feature of SVM, as discussed below, is the use of slack variables to control the amount of tolerable misclassification on the training data, which form the soft margin SVM. This key feature can serve to improve the generalization of SVM to new data. SVM has been used successfully in many real-world problems:

- Pattern Recognition, such as Face Detection , Face Verification, Object Recognition, Handwritten Character/Digit Recognition, Speaker/Speech Recognition, Image Retrieval , Prediction;

- Text and Hypertext categorization;

- Image classification;

- Bioinformatics, such as Protein classification, Cancer classification;

Please refer to here for more applications.

Structural Risk Minimization and VC Dimension

Linear learning machines are the fundamental formulations of SVMs. The objective of the linear learning machine is to find the linear function that minimizes the generalization error from a set of functions which can approximate the underlying mapping between the input and output data. Consider a learning machine that implements linear functions in the plane as decision rules

[math]\displaystyle{ f(\mathbf{x},\boldsymbol{\beta}, \beta_0)=sign (\boldsymbol{\beta}^T\mathbf{x}+\beta_0) }[/math]


Given n training data points with input values [math]\displaystyle{ \mathbf{x}_i \in \mathbb{R}^d }[/math] and output values [math]\displaystyle{ y_i\in\{-1,+1\} }[/math], the empirical error is defined as

[math]\displaystyle{ \Re_{emp} (\boldsymbol{\theta}) = \frac{1}{n}\sum_{i=1}^n |y_i-f(\mathbf{x}_i,\boldsymbol{\beta}, \beta_0)|= \frac{1}{n}\sum_{i=1}^n |y_i-sign (\boldsymbol{\beta}^T\mathbf{x}_i+\beta_0)| }[/math]


where [math]\displaystyle{ \boldsymbol{\theta}=(\boldsymbol{\beta}, \beta_0) }[/math] collects the parameters of the decision function.

The generalization error can be expressed as

[math]\displaystyle{ \Re (\boldsymbol{\theta}) = \int|y-f(\mathbf{x},\boldsymbol{\theta})|p(\mathbf{x},y)dxdy }[/math]

which measures the error for all input/output patterns that are generated from the underlying generator of the data characterized by the probability distribution [math]\displaystyle{ p(\mathbf{x},y) }[/math] which is considered to be unknown. According to statistical learning theory, the generalization (test) error can be upper bounded in terms of training error and a confidence term as shown in

[math]\displaystyle{ \Re (\boldsymbol{\theta})\leq \Re_{emp} (\boldsymbol{\theta}) +\sqrt{\frac{h(ln(2n/h)+1)-ln(\eta/4)}{n}} }[/math]


The term on the left side represents the generalization error. The first term on the right-hand side is the empirical error calculated from the training data, and the second term is called the VC confidence, which is associated with the VC dimension h of the learning machine. The VC dimension is used to describe the complexity of the learning system. The relationship between these three quantities is illustrated in the figure below:


File:risk.png
The relation between expected risk, empirical risk and VC confidence in SVMs.


Thus, even though we don’t know the underlying distribution based on which the data points are generated, it is possible to minimize the upper bound of the generalization error in place of minimizing the generalization error. That means one can minimize the expression in the right hand side of the inequality above.

Unlike the principle of Empirical Risk Minimization (ERM) applied in Neural Networks which aims to minimize the training error, SVMs implement Structural Risk Minimization (SRM) in their formulations. SRM principle takes both the training error and the complexity of the model into account and intends to find the minimum of the sum of these two terms as a trade-off solution (as shown in figure above) by searching a nested set of functions of increasing complexity.

Introduction

The Support Vector Machine is a popular linear classifier. Suppose that we have a data set with two classes that can be separated using a hyperplane. The Support Vector Machine (SVM) is a method which gives us the "best" such hyperplane. There are other classifiers that find a separating hyperplane, such as the perceptron; however, the output of the perceptron and many other algorithms depends on the input parameters, so every run of the perceptron can give a different output. SVM, on the other hand, tries to find the hyperplane that separates the data and has the farthest distance from the points. This is also known as the max-margin hyperplane.



With the perceptron, there can be infinitely many separating hyperplanes that achieve zero training error. The question is which of all these possible solutions is the best. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. This makes sense because at test time more points will be observed and they may lie closer to the other class, so the safest choice for the hyperplane is the one farthest from both classes.

One of the great things about SVM is that not only does it have solid theoretical guarantees, it also works very well in practice.

To summarize

What we mean by margin is the distance between the hyperplane and the closest point in a class.

If the data is linearly separable, then there exist infinitely many separating hyperplanes. Among all of them, the best choice is the hyperplane that is furthest from both classes, i.e. the one with maximum margin. Our goal is therefore to find, among all possible separating hyperplanes, the one that has maximum margin. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum margin classifier; or equivalently, the perceptron of optimal stability.

If the mean value were used instead of the closest point, then an outlier could pull the hyperplane into the data, which would incorrectly classify known data points. This is why we use the closest point rather than the expected value.
Setting
What is [math]\displaystyle{ d_i }[/math]
  • We assume that the data is linearly separable
  • Our classifier will be of the form [math]\displaystyle{ \boldsymbol\beta^T\mathbf{x} + \beta_0 }[/math]
  • We will assume that our labels are [math]\displaystyle{ y_i \in \{-1,1\} }[/math]


The goal is to classify the point [math]\displaystyle{ \mathbf{x_i} }[/math] based on the [math]\displaystyle{ sign \{d_i\} }[/math] where [math]\displaystyle{ d_i }[/math] is the signed distance between [math]\displaystyle{ \mathbf{x_i} }[/math] and the hyperplane.


Now we check how far the point is from the hyperplane; points on one side of the hyperplane have a negative signed distance and points on the other side have a positive one. Points are classified by the sign of this distance, so [math]\displaystyle{ \mathbf{x_i} }[/math] is classified using [math]\displaystyle{ d_i }[/math].

Side Note: A memory from the past of Dr. Ali Ghodsi

When the aforementioned professor was a small child in grade 2, he was often careless with the accuracy of certain curly brackets when writing what one can only assume were math proofs. One day, his teacher grew impatient and demanded that a page of perfect curly brackets be produced by the young Dr. (he may or may not have been a doctor at the time). And now, whenever Dr. Ghodsi writes a tidy curly bracket, he is reminded of this and it always brings a smile to his face.

From memories of the past.

(the number 20 was involved in the story, either the number of pages or the number of lines)

Case 1: Linearly Separable (Hard Margin)

In this case, the classifier will be [math]\displaystyle{ \boldsymbol {\beta^T} \boldsymbol {x} + \beta_0 }[/math] and [math]\displaystyle{ \ y \in \{-1, 1\} }[/math]. The point [math]\displaystyle{ \boldsymbol {x_i} }[/math] to classify is based on the sign of [math]\displaystyle{ \ \{d_i\} }[/math], where [math]\displaystyle{ \ d_i }[/math] is the signed distance between [math]\displaystyle{ \boldsymbol {x_i} }[/math] and the hyperplane.

Objective Function
(Geometric idea: [math]\displaystyle{ \boldsymbol\beta }[/math] is perpendicular to the hyperplane.)

Observation 1: [math]\displaystyle{ \boldsymbol\beta }[/math] is orthogonal to hyper-plane. Because, for any two arbitrary points [math]\displaystyle{ \mathbf{x_1, x_2} }[/math] on the plane we have:

[math]\displaystyle{ \boldsymbol\beta^T\mathbf{x_1} + \beta_0 = 0 }[/math]

[math]\displaystyle{ \boldsymbol\beta^T\mathbf{x_2} + \beta_0 = 0 }[/math]

So [math]\displaystyle{ \boldsymbol\beta^T (\boldsymbol{x_1}-\boldsymbol{x_2}) = 0 }[/math]. Thus, [math]\displaystyle{ \boldsymbol\beta \perp (\boldsymbol{x_1} - \boldsymbol{x_2}) }[/math], which implies that [math]\displaystyle{ \boldsymbol \beta }[/math] is a normal vector to the hyper-plane.


Observation 2: If [math]\displaystyle{ \boldsymbol x_0 }[/math] is a point on the hyperplane, then [math]\displaystyle{ \boldsymbol\beta^T\boldsymbol{x_0}+\beta_0 = 0 }[/math], so [math]\displaystyle{ \boldsymbol\beta^T\boldsymbol{x_0} = - \beta_0 }[/math]. Together with Observation 1, this implies that [math]\displaystyle{ \boldsymbol\beta^T\boldsymbol{x} = - \beta_0 }[/math] for every [math]\displaystyle{ \boldsymbol{x} }[/math] on the hyperplane.


Observation 3: Let [math]\displaystyle{ \ d_i }[/math] be the signed distance of point [math]\displaystyle{ \boldsymbol{x_i} }[/math] from the plane. The [math]\displaystyle{ \ d_i }[/math] is the projection of [math]\displaystyle{ (\boldsymbol{x_i} - \boldsymbol{x_0}) }[/math] on the direction of [math]\displaystyle{ \boldsymbol\beta }[/math]. In other words, [math]\displaystyle{ d_i \propto \boldsymbol\beta^T(\mathbf{x - x_0}) }[/math].(normalize [math]\displaystyle{ \beta }[/math])

[math]\displaystyle{ \begin{align} \displaystyle d_i &= \frac{\boldsymbol\beta^T(\boldsymbol{x_i} - \boldsymbol{x_0})}{\vert \boldsymbol\beta\vert}\\ & = \frac{\boldsymbol{\beta^Tx_i}- \boldsymbol{\beta^Tx_0}}{\vert \boldsymbol\beta\vert}\\ & = \frac{\boldsymbol{\beta^Tx_i}+ \beta_0}{\vert \boldsymbol\beta\vert} \end{align} }[/math]


Observation 4: Let margin be the distance between the hyper-plane and the closest point. Since [math]\displaystyle{ d_i }[/math] is the signed distance between the hyperplane and point [math]\displaystyle{ \boldsymbol{x_i} }[/math], we can define the positive distance of point [math]\displaystyle{ \boldsymbol{x_i} }[/math] from the hyper-plane as [math]\displaystyle{ (y_id_i) }[/math].

[math]\displaystyle{ \begin{align} \displaystyle \text{Margin} &= \min\{y_i d_i\}\\ &= \min\{ \frac{y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0)}{|\boldsymbol\beta|} \} \end{align} }[/math]
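A tiny illustrative Matlab sketch (toy data, not from the lecture) of Observations 3 and 4, computing the signed distances and the margin for a given hyperplane:

X     = [2 2; 3 1; -1 -2; -2 -1];          % toy points, one per row
y     = [1; 1; -1; -1];                    % labels in {-1,+1}
beta  = [1; 1];  beta0 = 0;                % a candidate hyperplane
d      = (X*beta + beta0) / norm(beta);    % signed distance of each point (Observation 3)
margin = min(y .* d)                       % margin = min over points of y_i * d_i (Observation 4)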

Our goal is to maximize the margin; this is a max-min problem in optimization. When defining the hyperplane, what matters is the direction of [math]\displaystyle{ \boldsymbol\beta }[/math]; the value of [math]\displaystyle{ \beta_0 }[/math] does not change the direction of the hyperplane, only its distance from the origin. Note that if we assume that the points do not lie on the hyperplane, then the margin is positive:

[math]\displaystyle{ \begin{align} \displaystyle &y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq 0 &&\\ &y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq C &&\mbox{ for some positive C } \\ &y_i(\frac{\boldsymbol\beta^T}{C}\mathbf{x_i} + \frac{\beta_0}{C}) \geq 1 &&\mbox{ Divide by C}\\ &y_i(\boldsymbol\beta^{*T}\mathbf{x_i} + \beta^*_0) \geq 1 && \mbox{ By setting }\boldsymbol\beta^* = \frac{\boldsymbol\beta}{C}, \boldsymbol\beta_0^* = \frac{\boldsymbol\beta_0}{C}\\ &y_i(\boldsymbol\beta^{T}\mathbf{x_i} + \beta_0) \geq 1 && \mbox{ By setting }\boldsymbol\beta\gets\boldsymbol\beta^*, \boldsymbol\beta_0\gets\boldsymbol\beta_0^*\\ \end{align} }[/math]


So with a bit of abuse of notation we can assume that

[math]\displaystyle{ y_i(\boldsymbol\beta^T\mathbf{x_i} + \beta_0) \geq 1 }[/math]

Therefore, the problem translates to:

[math]\displaystyle{ \, \max\{\frac{1}{||\boldsymbol\beta||}\} }[/math]

So, it is possible to re-interpret the problem as:

[math]\displaystyle{ \, \min \frac 12 \vert \boldsymbol\beta \vert^2 \quad }[/math] s.t. [math]\displaystyle{ \quad \,y_i (\boldsymbol\beta^{T} \boldsymbol{x_i}+ \beta_0) \geq 1 }[/math]

[math]\displaystyle{ \, \vert \boldsymbol\beta \vert }[/math] could be any norm, but for simplicity we use the L2 norm. We use [math]\displaystyle{ \frac 12 \vert \boldsymbol\beta \vert^2 }[/math] instead of [math]\displaystyle{ |\boldsymbol\beta| }[/math] to make the function differentiable. To solve the above optimization problem we can use Lagrange multipliers, as discussed below.

Support Vectors

Support vectors are the training points that determine the optimal separating hyperplane that we seek. Also, they are the most difficult points to classify and at the same time the most informative for classification.

Visualizing the Cost Function

Recall the cost function for a single example in the logistic regression model:

[math]\displaystyle{ -\left( y \log \frac{1}{1+e^{-\beta^T \boldsymbol{x}}} + (1-y)\log \frac{e^{-\beta^T\boldsymbol{x}}}{1+e^{-\beta^T \boldsymbol{x}}} \right) }[/math]

where [math]\displaystyle{ y \in \{0,1\} }[/math]. Looking at the plot of the cost term (for y=1), if [math]\displaystyle{ y=1 }[/math] (i.e. the target class is 1), then we want our [math]\displaystyle{ \beta }[/math] to be such that [math]\displaystyle{ \beta^T \boldsymbol{x} \gg 0 }[/math]. This will ensure very accurate classification.

File:logreg cost.jpg

Now for SVM, consider the generic cost function as follows:

[math]\displaystyle{ y \cdot \text{cost}_1(\beta^T \boldsymbol{x}) + (1-y)\cdot \text{cost}_0(\beta^T \boldsymbol{x}) }[/math]

We can visualize [math]\displaystyle{ \text{cost}_1 }[/math] compared with the sigmoid cost term in logistic regression as follows:

File:svm cost.jpg

What you should take away from this is for y=1, we want [math]\displaystyle{ \beta^T \boldsymbol{x}\ge 1 }[/math]. In our notes, we have [math]\displaystyle{ y \in \{-1, 1\} }[/math], so that's why we write [math]\displaystyle{ y_i (\beta^T \boldsymbol{x} + \beta_0) \ge 1 }[/math].

The same rationale can be applied for y=0, using the corresponding cost term [math]\displaystyle{ -(1-y)\log \frac{e^{-\beta^T\boldsymbol{x}}}{1+e^{-\beta^T \boldsymbol{x}}} }[/math] (replaced by [math]\displaystyle{ \text{cost}_0 }[/math] in the SVM case), for which we want [math]\displaystyle{ \beta^T \boldsymbol{x} \ll 0 }[/math].
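The figures above may not be available, so the following is a small illustrative Matlab sketch of the comparison for y = 1, taking [math]\displaystyle{ \text{cost}_1 }[/math] to be a hinge-style cost that is zero for [math]\displaystyle{ \beta^T\boldsymbol{x} \geq 1 }[/math] (an assumption made here purely for illustration):

z = linspace(-3, 3, 200);
logistic_cost = -log(1 ./ (1 + exp(-z)));   % logistic regression cost for y = 1
svm_cost1     = max(0, 1 - z);              % hinge-style cost_1, zero once z >= 1
plot(z, logistic_cost, 'b', z, svm_cost1, 'r--');
legend('logistic (y = 1)', 'cost_1 (SVM)');
xlabel('\beta^T x');  ylabel('cost');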

Writing Lagrangian Form of Support Vector Machine

The Lagrangian form, using Lagrange multipliers and the constraints discussed below, is introduced to ensure that the optimization conditions are satisfied and to find an optimal solution (the optimal saddle point of the Lagrangian for the classic quadratic optimization). The problem will be solved in dual space by introducing the [math]\displaystyle{ \,\alpha_i }[/math] as dual variables; this is in contrast to solving the problem in primal space as a function of the betas. A simple algorithm for iteratively solving the Lagrangian has been found to run well on very large data sets, making SVM more usable. Note that this algorithm is intended to solve Support Vector Machines with some tolerance for errors - not all points are necessarily classified correctly. Several papers by Mangasarian explore different algorithms for solving SVM.

The Lagrangian function of the above optimization problem is:

[math]\displaystyle{ \begin{align} \displaystyle L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha) &= \frac 12 \vert \boldsymbol\beta \vert^2 - \sum_{i=1}^n \alpha_i \left[ y_i (\boldsymbol{\beta^T x_i}+\beta_0) -1 \right]\\ &= \frac 12 \vert \boldsymbol\beta \vert^2 - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} - \sum_{i=1}^n \alpha_i y_i \beta_0 + \sum_{i=1}^n \alpha_i \end{align} }[/math]

where [math]\displaystyle{ \boldsymbol\alpha = (\alpha_1 ,... ,\alpha_n) }[/math] are the Lagrange multipliers, with [math]\displaystyle{ \alpha_{i} \geq 0, \; i=1,\dots,n }[/math].

To find the optimal value, we set the derivatives equal to zero: [math]\displaystyle{ \,\frac{\partial L}{\partial \boldsymbol{\beta}} = 0 }[/math] and [math]\displaystyle{ \,\frac{\partial L}{\partial \beta_0} = 0 }[/math].

[math]\displaystyle{ \begin{align} \displaystyle &\frac{\partial L}{\partial \boldsymbol{\beta}} = \boldsymbol\beta - \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} = 0 &\Longrightarrow& \boldsymbol\beta = \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i}\\ &\frac{\partial L}{\partial \beta_0} = - \sum_{i=1}^n \alpha_i y_i = 0 &\Longrightarrow& \sum_{i=1}^n \alpha_i y_i = 0 \end{align} }[/math]

To get the dual form of the optimization problem we replace the above two equations in definition of [math]\displaystyle{ L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha) }[/math].

We have: [math]\displaystyle{ \begin{align} \displaystyle L(\boldsymbol\beta, \beta_0, \boldsymbol\alpha) &= \frac 12 \boldsymbol\beta^T\boldsymbol\beta - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i \boldsymbol{x_i} - \sum_{i=1}^n \alpha_i y_i \beta_0 + \sum_{i=1}^n \alpha_i\\ &= \frac 12 \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} - \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} - 0 + \sum_{i=1}^n \alpha_i\\ &= - \frac 12 \boldsymbol\beta^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} + \sum_{i=1}^n \alpha_i\\ &= - \frac 12 \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i}^T \sum_{i=1}^n \alpha_i y_i\boldsymbol{x_i} + \sum_{i=1}^n \alpha_i\\ &= \sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_jy_iy_j\boldsymbol{x_i}^T\boldsymbol{x_j} \end{align} }[/math]

The above function is the dual objective function, which we maximize with respect to [math]\displaystyle{ \boldsymbol\alpha }[/math]:

[math]\displaystyle{ \begin{align} \displaystyle \max_\alpha &\sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j}\\ s.t.\; & \alpha_i \geq 0\\ & \sum_{i=1}^n \alpha_i y_i = 0 \end{align} }[/math]

The dual function is a quadratic function of several variables subject to linear constraints. This type of optimization problem is called quadratic programming and is much easier to solve than the primal problem. It is also possible to write the dual form using matrices:

[math]\displaystyle{ \begin{align} \displaystyle \max_\alpha \,& \boldsymbol\alpha^T\boldsymbol{1} - \frac 12 \boldsymbol\alpha^T S \boldsymbol\alpha\\ s.t.\; & \boldsymbol\alpha \geq 0\\ & \boldsymbol\alpha^Ty = 0\\ & S = ([y_1,\dots, y_n]\odot X)^T ([y_1,\dots, y_n]\odot X) \end{align} }[/math]


Since [math]\displaystyle{ S = ([y_1,\dots, y_n]\odot X)^T ([y_1,\dots, y_n]\odot X) }[/math], S is a positive semi-definite matrix, so the quadratic term is concave and the dual problem is a convex optimization problem [8]. Consequently, the dual objective has no local optimum that is not global, and it is relatively easy to find the global optimum.

This is a much simpler optimization problem, and we can solve it by quadratic programming. Quadratic programming (QP) is a special type of mathematical optimization problem: the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables. The general form of such a problem is to minimize, with respect to [math]\displaystyle{ \,x }[/math],

[math]\displaystyle{ f(x) = \frac{1}{2}x^TQx + c^Tx }[/math]

subject to one or more constraints of the form:

[math]\displaystyle{ \,Ax\le b }[/math], [math]\displaystyle{ \,Ex=d }[/math].

A good description of the general QP problem formulation and its solution can be found here.
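A minimal illustrative Matlab sketch (toy data, not the lecture's code) of solving the hard-margin SVM dual with the Optimization Toolbox routine quadprog, which minimizes [math]\displaystyle{ \frac{1}{2}x^TQx + c^Tx }[/math]; we therefore minimize the negative of the dual objective:

X = [2 2; 3 1; 3 3; -1 -2; -2 -1; -3 -2];  % toy inputs, one row per point
y = [1; 1; 1; -1; -1; -1];                 % labels in {-1,+1}
n = size(X,1);

Q   = (y*y') .* (X*X');                    % S in the notes: S_ij = y_i y_j x_i' x_j
c   = -ones(n,1);                          % minus the linear term of the dual
Aeq = y';  beq = 0;                        % constraint: sum_i alpha_i y_i = 0
lb  = zeros(n,1);                          % constraint: alpha_i >= 0
alpha = quadprog(Q, c, [], [], Aeq, beq, lb, []);

beta  = X' * (alpha .* y);                 % beta = sum_i alpha_i y_i x_i
sv    = find(alpha > 1e-6);                % support vectors have alpha_i > 0
beta0 = mean(y(sv) - X(sv,:)*beta);        % from y_i(beta'*x_i + beta0) = 1 on the margin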

Discussion on the Dual of the Lagrangian

As mentioned in the previous section, solving the dual form of the Lagrangian requires quadratic programming. Quadratic programming can be used to minimize a quadratic function subject to a set of constraints. In general, for a problem with N variables, the quadratic programming solution has a computational complexity of [math]\displaystyle{ \ O(N^3) }[/math] <ref name="CMBishop" />. The original problem formulation only has (d+1) variables that need to be found (i.e. the values of [math]\displaystyle{ \ \beta }[/math] and [math]\displaystyle{ \ \beta_0 }[/math]), where d is the dimensionality of the data points. However, the dual form of the Lagrangian has n variables that need to be found (i.e. all the [math]\displaystyle{ \ \alpha }[/math] values), where n is the number of data points. It is likely that n is larger than (d+1) (i.e. the number of data points is larger than the dimensionality of the data plus 1), which makes the dual form of the Lagrangian seem computationally inefficient <ref name="CMBishop" />. However, the dual of the Lagrangian allows the inner product [math]\displaystyle{ \ x_i^T x_j }[/math] to be expressed using a kernel formulation which allows the data to be transformed into higher feature spaces and thus allowing seemingly non-linearly separable data points to be separated, which is a highly useful feature described in more detail in the next class <ref name="CMBishop" />.

Support Vector Method Packages

One of the popular Matlab toolboxes for SVM is LIBSVM, which has been developed in the Department of Computer Science and Information Engineering, National Taiwan University, by Chih-Chung Chang and Chih-Jen Lin. Its page provides many different interfaces for LIBSVM, such as Matlab, C++, Python, Perl, and many other languages, each of which has been developed at different institutes and by a variety of engineers and mathematicians. On the same page you can also find a thorough introduction to the package and its various parameters.

A very helpful tool which you can find on the LIBSVM page is a graphical interface for SVM; it is an applet by which we can draw points corresponding to each of the two classes of the classification problem and by adjusting the SVM parameters, observe the resulting solution.

If you found LIBSVM helpful and wanted to use it for your research, please cite the toolbox.

A pretty long list of other SVM packages and comparison between all of them in terms of language, execution platform, multiclass and regression capabilities, can be found here.

The top 3 SVM software are:

1. LIBSVM

2. SVMlight

3. SVMTorch

More information which introduces SVM software and their comparison can be found here and here.

Support Vector Machine Continued (Lecture: Nov. 1, 2011)

In the previous lecture we considered the case when data is linearly separable. The goal of the Support Vector Machine classifier is to find the hyperplane that maximizes the margin distance from the hyperplane to each of the two classes. We derived the following optimization problem based on the SVM methodology:

[math]\displaystyle{ \, \min_{\beta} \frac{1}{2}{|\boldsymbol{\beta}|}^2 }[/math]

Subject to the constraint:

[math]\displaystyle{ \,y_i(\boldsymbol{\beta}^T\mathbf{x}_i+\beta_0)\geq1, \quad y_i \in \{-1,1\} \quad \forall{i} =1, \ldots , n }[/math]

Notice that the basic SVM is a two-class (binary) classifier; extending it to problems with more classes requires additional work (for example, one-vs-rest or one-vs-one schemes).

This is the primal form of the optimization problem. Then we derived the dual of this problem:

[math]\displaystyle{ \, \max_\alpha \quad \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T\mathbf{x}_j }[/math]

Subject to constraints:

[math]\displaystyle{ \,\alpha_i\geq 0 }[/math]

[math]\displaystyle{ \,\sum_i \alpha_i y_i =0 }[/math]


This is a quadratic programming problem. QP problems have been thoroughly studied and they can be solved efficiently. This particular problem has a convex objective function as well as convex constraints. This guarantees a global optimum, even if we use local search algorithms (e.g. gradient descent). These properties are of significant importance for classifiers and are thus among the most important strengths of the SVM classifier.

For an easy implementation of SVM that solves the above quadratic optimization problem in R, see<ref> http://cbio.ensmp.fr/~thocking/mines-course/2011-04-01-svm/svm-qp.pdf </ref>

We are able to find [math]\displaystyle{ \,\beta }[/math] when [math]\displaystyle{ \,\alpha }[/math] is found:

[math]\displaystyle{ \, \boldsymbol{\beta} = \sum_i \alpha_i y_i \mathbf{x}_i }[/math]

But in order to find the hyper-plane uniquely we also need to find [math]\displaystyle{ \,\beta_0 }[/math].

When deriving the dual objective function, there is a set of conditions, called the KKT conditions, that must be satisfied.

Examining KKT Conditions

KKT stands for Karush-Kuhn-Tucker (the conditions were initially named after Kuhn and Tucker's work in the 1950s; it was later discovered that Karush had stated them back in the late 1930s) <ref name="CMBishop" />.

The K.K.T. conditions are as follows: stationarity, primal feasibility, dual feasibility, and complementary slackness.

It gives us a closer look into the Lagrangian equation and the associated conditions.

Suppose we want to find [math]\displaystyle{ \, \min_x f(x) }[/math] subject to the constraint [math]\displaystyle{ \, g_i(x)\geq 0 , \forall{x} }[/math]. The Lagrangian is then computed as:

[math]\displaystyle{ \, \mathcal{L} (x,\alpha_i)=f(x)-\sum_i \alpha_i g_i(x) }[/math]

If [math]\displaystyle{ \, x^* }[/math] is an optimal point of this constrained problem, the necessary conditions for [math]\displaystyle{ \, x^* }[/math] to be a local minimum are:

1) Stationarity: [math]\displaystyle{ \, \frac{\partial \mathcal{L}}{\partial x} (x^*) = 0 }[/math] that is [math]\displaystyle{ \, f'(x^*) - \Sigma_i{\alpha_ig'_i(x^*)}=0 }[/math]

2) Dual Feasibility: [math]\displaystyle{ \, \alpha_i\geq 0 , }[/math]

3) Complementary Slackness: [math]\displaystyle{ \, \alpha_i g_i(x^*)=0 , }[/math]

4) Primal Feasibility: [math]\displaystyle{ \, g_i(x^*)\geq 0 , }[/math]


If any of the above four conditions fails at a point, that point cannot be an optimal solution. For convex problems such as the SVM, the KKT conditions are both necessary and sufficient for optimality.

Support Vectors

Support vectors are the training points that determine the optimal separating hyperplane that we seek i.e. the margin is calculated as the distance from the hyperplane to the support vectors. Also, they are the most difficult points to classify and at the same time the most informative for classification.

In our case, the [math]\displaystyle{ g_i({x}) }[/math] function is:

[math]\displaystyle{ \,g_i(x) = y_i(\beta^Tx_i+\beta_0)-1 }[/math]

Substituting [math]\displaystyle{ \,g_i }[/math] into KKT condition 3, we get [math]\displaystyle{ \,\alpha_i[y_i(\beta^Tx_i+\beta_0)-1] = 0 }[/math]. <br\>In order for this condition to be satisfied either
[math]\displaystyle{ \,\alpha_i= 0 }[/math] or
[math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0)=1 }[/math]

All points [math]\displaystyle{ \,x_i }[/math] satisfy [math]\displaystyle{ y_i(\beta^T \boldsymbol{x_i} + \beta_0) \geq 1 }[/math]; this quantity (the functional margin) is proportional to the signed distance from [math]\displaystyle{ \,x_i }[/math] to the hyperplane in the direction of its class, so every point lies on or outside the margin.

Case 1: a point away from the margin

If [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) \gt 1 \Rightarrow \alpha_i = 0 }[/math].

In other words, if point [math]\displaystyle{ \, x_i }[/math] is not on the margin (i.e. [math]\displaystyle{ \boldsymbol{x_i} }[/math] is not a support vector), then the corresponding [math]\displaystyle{ \,\alpha_i=0 }[/math].

Case 2: a point on the margin

If [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) = 1 \Rightarrow \alpha_i \gt 0 }[/math]. <br\>If point [math]\displaystyle{ \, x_i }[/math] is on the margin (i.e. [math]\displaystyle{ \boldsymbol{x_i} }[/math] is a support vector), then the corresponding [math]\displaystyle{ \,\alpha_i\gt 0 }[/math].


Points on the margin, with corresponding [math]\displaystyle{ \,\alpha_i \gt 0 }[/math], are called support vectors.

Since it is impossible to know a priori which training points will end up as support vectors, we must work with the entire training set to find the optimal hyperplane. Usually only a small number of points end up as support vectors, which makes the resulting SVM classifier sparse and efficient to evaluate on new data.


To compute [math]\displaystyle{ \ \beta_0 }[/math], we choose any [math]\displaystyle{ i }[/math] with [math]\displaystyle{ \,\alpha_i \gt 0 }[/math]; the corresponding point satisfies:

[math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) = 1 }[/math].

We can compute [math]\displaystyle{ \,\beta = \sum_i \alpha_i y_i x_i }[/math], substitute [math]\displaystyle{ \ \beta }[/math] in [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) = 1 }[/math] and solve for [math]\displaystyle{ \ \beta_0 }[/math].
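Following up on the R reference above, here is a rough sketch of this whole procedure for the hard-margin case (assuming a small linearly separable toy data set, the quadprog package, and a tiny ridge added to the Hessian purely for numerical positive definiteness; the thresholds are arbitrary). It solves the dual QP, recovers [math]\displaystyle{ \,\beta }[/math] from the [math]\displaystyle{ \,\alpha_i }[/math], and then solves for [math]\displaystyle{ \,\beta_0 }[/math] using the support vectors:

library(quadprog)                          # solve.QP minimizes 1/2 a'D a - d'a  subject to  A'a >= b

# Toy linearly separable data (assumed for illustration): two Gaussian clouds
set.seed(1)
X <- rbind(matrix(rnorm(20, mean =  2), ncol = 2),
           matrix(rnorm(20, mean = -2), ncol = 2))
y <- c(rep(1, 10), rep(-1, 10))
n <- nrow(X)

D <- outer(y, y) * (X %*% t(X))            # D_ij = y_i y_j x_i' x_j
D <- D + 1e-6 * diag(n)                    # small ridge so the QP solver sees a positive definite matrix
d <- rep(1, n)                             # we are maximizing sum(alpha) - 1/2 alpha' D alpha
A <- cbind(y, diag(n))                     # column 1: equality sum_i alpha_i y_i = 0; rest: alpha_i >= 0
b <- rep(0, n + 1)
alpha <- solve.QP(Dmat = D, dvec = d, Amat = A, bvec = b, meq = 1)$solution

beta  <- colSums(alpha * y * X)                         # beta = sum_i alpha_i y_i x_i
sv    <- which(alpha > 1e-5)                            # support vectors: points with alpha_i > 0
beta0 <- mean(y[sv] - X[sv, , drop = FALSE] %*% beta)   # from y_i (beta' x_i + beta0) = 1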

Everything we derived so far was based on the assumption that the data are linearly separable (termed Hard Margin SVM), but in many practical applications the data are not linearly separable.

Kernel Trick


We talked about the curse of dimensionality at the beginning of this course. However, we now turn to the power of high dimensions in order to find a hyperplane between two classes of data points that can linearly separate the transformed (mapped) data in a space that has a higher dimension than the space in which the training data points reside.

To understand this, imagine a two dimensional prison where a two dimensional person is constrained. Suppose magically we give the person a third dimension, then he can escape from the prison. In other words, the prison and the person are linearly separable now with respect to the third dimension. The intuition behind the kernel trick is basically to map data to a higher dimension in which the mapped data are linearly separable by a hyperplane, even if the original data are not linearly separable.

The original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The algorithm is very similar, except that every dot product is replaced by a non-linear kernel function as below. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. We have seen SVM as a linear classification problem that finds the maximum margin hyperplane in the given input space. However, for many real world problems a more complex decision boundary is required. The following simple method was devised in order to solve the same linear classification problem but in a higher dimensional space, a feature space, under which the maximum margin hyperplane is better suited.

In machine learning, the kernel trick is a way of mapping points into an inner product space, in the hope that the new space is more suitable for classification. [math]\displaystyle{ \phi }[/math] is a function that maps a data point to a higher-dimensional space, so that data which are not linearly separable in the original space may become linearly separable after the mapping. Example:

[math]\displaystyle{ \left[\begin{matrix} \,x \\ \,y \\ \end{matrix}\right] \rightarrow\ \left[\begin{matrix} \,x^2 \\ \,y^2 \\ \, \sqrt{2}xy \\ \end{matrix}\right] }[/math]

[math]\displaystyle{ k(x,y)=\phi^{T}(x)\phi(y) }[/math]

[math]\displaystyle{ \left[\begin{matrix} \,x_1 \\ \,y_1 \\ \end{matrix}\right] \rightarrow\ \left[\begin{matrix} \,x_1^2 \\ \,y_1^2 \\ \, \sqrt{2}x_1y_1 \\ \end{matrix}\right] }[/math]

[math]\displaystyle{ \left[\begin{matrix} \,x_2 \\ \,y_2 \\ \end{matrix}\right] \rightarrow\ \left[\begin{matrix} \,x_2^2 \\ \,y_2^2 \\ \, \sqrt{2}x_2y_2 \\ \end{matrix}\right] }[/math]


[math]\displaystyle{ \left[\begin{matrix} \,x_1^2 \\ \,y_1^2 \\ \, \sqrt{2}x_1y_1 \\ \end{matrix}\right] ^{T} * \left[\begin{matrix} \,x_2^2 \\ \,y_2^2 \\ \, \sqrt{2}x_2y_2 \\ \end{matrix}\right] = K(\left[\begin{matrix} \,x_1 \\ \,y_1 \\ \end{matrix}\right],\left[\begin{matrix} \,x_2 \\ \,y_2 \\ \end{matrix}\right] ) }[/math]

Recall our objective function: [math]\displaystyle{ \sum_i \alpha_i - \frac{1}{2} \sum_{ij} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T\mathbf{x}_j }[/math] We can replace [math]\displaystyle{ \mathbf{x}_i^T\mathbf{x}_j }[/math] by [math]\displaystyle{ \mathbf{\phi^{T}(x_i)}\mathbf{\phi(x_j)}= k(x_i,x_j) }[/math]


[math]\displaystyle{ \left[\begin{matrix} \,k(x_1, x_1)& \,k(x_1, x_2)& \cdots &\,k(x_1, x_n) \\ \vdots& \vdots& \vdots& \vdots\\ \,k(x_n, x_1)& \,k(x_n, x_2)& \cdots &\,k(x_n, x_n) \\ \end{matrix}\right] }[/math]
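As a quick numerical check of the worked example above (a minimal sketch in plain R with made-up points), the kernel [math]\displaystyle{ k(a,b)=(a^Tb)^2 }[/math] evaluated in the original two-dimensional space agrees with the ordinary inner product of the explicitly mapped three-dimensional vectors, and the same kernel can be used to fill in the kernel matrix shown above:

phi <- function(v) c(v[1]^2, v[2]^2, sqrt(2) * v[1] * v[2])  # explicit map (x, y) -> (x^2, y^2, sqrt(2) x y)
k   <- function(a, b) sum(a * b)^2                           # the same quantity computed in the original space

a <- c(1, 2); b <- c(3, -1)                                  # arbitrary example points
sum(phi(a) * phi(b))                                         # inner product after mapping: 1
k(a, b)                                                      # identical value, without ever forming phi: 1

# Kernel (Gram) matrix for points stored as the rows of X
X <- rbind(a, b, c(0, 1))
K <- outer(1:nrow(X), 1:nrow(X),
           Vectorize(function(i, j) k(X[i, ], X[j, ])))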


In most real-world cases the data points are not linearly separable. How can the above methods be generalized to the case where the decision function is not a linear function of the data? Boser, Guyon and Vapnik, 1992, showed that a rather old trick (Aizerman, 1964) can be used to accomplish this in an astonishingly straightforward way. First notice that the only way in which the data appear in the dual-form optimization problem is in the form of dot products: [math]\displaystyle{ \mathbf{x}_i^T\mathbf{x}_j }[/math]. Now suppose we first use a non-linear operator [math]\displaystyle{ \Phi(\mathbf{x}) }[/math] to map the data points to some other higher dimensional space (possibly infinite dimensional) [math]\displaystyle{ \mathcal{H} }[/math] (called a Hilbert space or feature space), where they can be classified linearly. The figure below illustrates this concept:


File:kernell trick.jpg
Mapping of not-linearly separable data points in a two-dimensional space to a three-dimensional space where they can be linearly separable by means of a kernel function.


In other words, a linear learning machine can be employed in the higher dimensional feature space to solve the original non-linear problem. Then of course the training algorithm would only depend on the data through dot products in [math]\displaystyle{ \mathcal{H} }[/math], i.e. on functions of the form [math]\displaystyle{ \lt \Phi (\mathbf{x}_i),\Phi (\mathbf{x}_j)\gt }[/math]. Note that the actual mapping [math]\displaystyle{ \Phi(\mathbf{x}) }[/math] does not need to be known; only the inner product of the mapping is needed for modifying the support vector machine such that it can separate non-linearly separable data. Avoiding the actual mapping to the higher dimensional space is preferable, because higher dimensional spaces may have problems due to the curse of dimensionality.

So the hypothesis in this case would be

[math]\displaystyle{ f(\mathbf{x}) = \boldsymbol{\beta}^T \Phi (\mathbf{x}) + \beta_0 }[/math]

which is linear in terms of the new space that [math]\displaystyle{ \Phi (\mathbf{x}) }[/math] maps the data to, but non-linear in the original space. Now we can extend all the presented optimization problems for the linear case, for the transformed data in the feature space. If we define the kernel function as

[math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = \lt \Phi (\mathbf{x}_i),\Phi (\mathbf{x}_j)\gt = \Phi(\mathbf{x}_i)^T \Phi (\mathbf{x}_j) }[/math]

where [math]\displaystyle{ \ \Phi }[/math] is a mapping from input space to an (inner product) feature space. Then the corresponding dual form is


[math]\displaystyle{ L(\boldsymbol{\alpha}) =\sum_{i=1}^n \alpha_i - \frac 12 \sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_jy_iy_j K (\mathbf{x}_i,\mathbf{x}_j) }[/math]

subject to [math]\displaystyle{ \sum_{i=1}^n \alpha_i y_i=0 \quad \quad \alpha_i \geq 0,\quad i=1, \cdots, n }[/math]


The cost function [math]\displaystyle{ L(\boldsymbol{\alpha}) }[/math] is convex and quadratic in terms of the unknown parameters. This problem is solved through quadratic programming. The KKT conditions for this equation lead to the following final decision rule:

[math]\displaystyle{ L(\mathbf{x}, \boldsymbol{\alpha}^{\ast}, \beta_0) =\sum_{i=1}^{N_{sv}} y_i \alpha_i^{\ast} K (\mathbf{x}_i,\mathbf{x}) + \beta_0 }[/math]


where [math]\displaystyle{ \ N_{sv} }[/math] and [math]\displaystyle{ \ \alpha_i }[/math] denote number of support vectors and the non-zero Lagrange multipliers corresponding to the support vectors respectively.

Several typical choices of kernel are the linear, polynomial, Sigmoid or Multi-Layer Perceptron (MLP), and Gaussian or Radial Basis Function (RBF) kernels. Their expressions are as follows:

Linear kernel: [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = \mathbf{x}_i^T\mathbf{x}_j }[/math]

Polynomial kernel: [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = (1 + \mathbf{x}_i^T\mathbf{x}_j)^p }[/math]

Sigmoid (MLP) kernel: [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = \tanh (k_1\mathbf{x}_i^T\mathbf{x}_j +k_2) }[/math]

Gaussian (RBF) kernel: [math]\displaystyle{ \ K(\mathbf{x}_i,\mathbf{x}_j) = \exp\left[\frac{-(\mathbf{x}_i - \mathbf{x}_j)^T (\mathbf{x}_i - \mathbf{x}_j)}{2\sigma^2 }\right] }[/math]
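As a small illustration in R (plain functions, no packages; the parameter names p, k1, k2, and sigma mirror the formulas above, and their default values here are arbitrary), each kernel is a one-line function of two vectors:

k_linear <- function(xi, xj) sum(xi * xj)                                  # x_i' x_j
k_poly   <- function(xi, xj, p = 2) (1 + sum(xi * xj))^p                   # (1 + x_i' x_j)^p
k_mlp    <- function(xi, xj, k1 = 1, k2 = 0) tanh(k1 * sum(xi * xj) + k2)  # tanh(k1 x_i' x_j + k2)
k_rbf    <- function(xi, xj, sigma = 1) exp(-sum((xi - xj)^2) / (2 * sigma^2))

Note that the sigmoid (MLP) kernel satisfies Mercer's condition only for some values of k1 and k2.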


Kernel functions satisfying Mercer's condition not only enable implicit mapping of the data from input space to feature space but also ensure the convexity of the cost function, which leads to a unique optimum. Mercer's condition states that a continuous symmetric function [math]\displaystyle{ K(\mathbf{x},\mathbf{y}) }[/math] must be positive semi-definite in order to be a kernel function, i.e. to be expressible as an inner product between mapped data pairs. Note that we only ever need to use K in the training algorithm, and never need to explicitly know what [math]\displaystyle{ \ \Phi }[/math] is.

Furthermore, one can construct new kernels from previously defined kernels.[9] Given two kernels [math]\displaystyle{ K_1 (\mathbf{x}_i,\mathbf{x}_j) }[/math] and [math]\displaystyle{ K_2 (\mathbf{x}_i,\mathbf{x}_j) }[/math], properties include:

1. [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = \alpha K_1 (\mathbf{x}_i,\mathbf{x}_j) + \beta K_2 (\mathbf{x}_i,\mathbf{x}_j) }[/math] for [math]\displaystyle{ \alpha , \beta \geq 0 }[/math]

2. [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = K_1 (\mathbf{x}_i,\mathbf{x}_j) K_2 (\mathbf{x}_i,\mathbf{x}_j) }[/math]

3. [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = K_1 (f ( \mathbf{x}_i ) ,f ( \mathbf{x}_j ) ) }[/math] where [math]\displaystyle{ \, f \colon X \rightarrow X }[/math]

4. [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) = f ( K_1 ( \mathbf{x}_i , \mathbf{x}_j ) ) }[/math] where [math]\displaystyle{ \, f }[/math] is a polynomial with positive coefficients.


In the case of the Gaussian or RBF kernel, for example, [math]\displaystyle{ \mathcal{H} }[/math] is infinite dimensional, so it would not be very easy to work with [math]\displaystyle{ \Phi }[/math] explicitly. However, if one replaces [math]\displaystyle{ \lt \Phi(\mathbf{x}_i), \Phi(\mathbf{x}_j)\gt }[/math] by [math]\displaystyle{ K (\mathbf{x}_i,\mathbf{x}_j) }[/math] everywhere in the training algorithm, the algorithm will happily produce a support vector machine which lives in an infinite dimensional space, and furthermore do so in roughly the same amount of time it would take to train on the un-mapped data. All the considerations of the previous sections hold, since we are still doing a linear separation, but in a different space.


The choice of which kernel is best for a particular application has to be determined through trial and error; the Gaussian (RBF) kernel is often a good default choice for classification tasks with SVMs.


The video below shows a graphical illustration of how a polynomial kernel works to a get better sense of kernel concept:

Mapping data points to a higher dimensional space using a polynomial kernel

Kernel Properties

Kernel functions must be continuous, symmetric, and most preferably should have a positive (semi-) definite Gram matrix. The Gram matrix is the matrix whose elements are [math]\displaystyle{ \ g_{ij} = K(x_i,x_j) }[/math]. Kernels that satisfy Mercer's theorem are positive semi-definite, meaning their kernel matrices have no negative eigenvalues. The use of a positive definite kernel ensures that the optimization problem will be convex and the solution will be unique. <ref> Reference:http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html#kernel_properties</ref>
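As a quick numerical sanity check of this property (a minimal sketch in plain R with arbitrary made-up points and an RBF kernel; the tolerance is an assumption), one can form the Gram matrix and confirm that its eigenvalues are non-negative up to rounding error:

set.seed(2)
X <- matrix(rnorm(20), ncol = 2)                    # 10 arbitrary points in R^2
rbf <- function(xi, xj, sigma = 1) exp(-sum((xi - xj)^2) / (2 * sigma^2))
G <- outer(1:nrow(X), 1:nrow(X),
           Vectorize(function(i, j) rbf(X[i, ], X[j, ])))
min(eigen(G, symmetric = TRUE)$values)              # non-negative up to tiny numerical error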


Furthermore, kernels can be categorized into classes based on their properties <ref name="Genton"> M. G. Genton, "Classes of Kernels for Machine Learning: A Statistics Perspective," Journal of Machine Learning Research 2, 2001</ref>:

  • Nonstationary kernels are explicitly dependent on both inputs (e.g., the polynomial kernel).
  • Stationary kernels are invariant to translation (e.g., the Gaussian kernel which only looks at the distance between the inputs).
  • Reducible kernels are nonstationary kernels that can be reduced to stationary kernels via a bijective deformation (for more detailed information see <ref name = "Genton" />).

Further Information of Kernel Functions

In class we have studied 3 kernel functions, linear, polynomial and gaussian kernel. The following are some properties for each:

  1. Linear Kernel is the simplest kernel. Algorithms using this kernel are often equivalent to non-kernel algorithms such as standard PCA
  2. Polynomial Kernel is a non-stationary kernel, well suited when training data is normalized.
  3. Gaussian Kernel is an example of radial basis function kernel.

When choosing a kernel we need to take into account the data we are trying to model. For example, data that clusters in circles (or hyperspheres) is better classified by Gaussian Kernel.

Beyond the kernel functions we discussed in class, such as Linear Kernel, Polynomial Kernel and Gaussian Kernel functions, many more kernel functions can be used in the application of kernel methods for machine learning.

Some examples are: Exponential Kernel, Laplacian Kernel, ANOVA Kernel, Hyperbolic Tangent (Sigmoid) Kernel, Rational Quadratic Kernel, Multiquadric Kernel, Inverse Multiquadric Kernel, Circular Kernel, Spherical Kernel, Wave Kernel, Power Kernel, Log Kernel, Spline Kernel, B-Spline Kernel, Bessel Kernel, Cauchy Kernel, Chi-Square Kernel, Histogram Intersection Kernel, Generalized Histogram Intersection Kernel, Generalized T-Student Kernel, Bayesian Kernel, Wavelet Kernel, etc.

You may visit http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html#kernel_functions for more information.

Case 2: Linearly Non-Separable Data (Soft Margin)

The original SVM was designed specifically for separable data. Since this is a very strong requirement, Corinna Cortes and Vladimir Vapnik later suggested removing it; the result is called the Soft Margin Support Vector Machine. One of the advantages of SVM is that it is relatively easy to generalize to the case where the data are not linearly separable.

In the case where the two data sets are not linearly separable, it is impossible to find a hyperplane that completely separates the two classes. The idea is then to minimize the number of points that cross the margin and are misclassified, so we penalize the points that violate the constraint:

[math]\displaystyle{ \, y_i(\beta^T x_i + \beta_0) \geq 1 }[/math]

Hence we allow some of the points to cross the margin (or equivalently violate our constraint) but, on the other hand, we penalize our objective function (so that the violations of the original constraint remain low):

[math]\displaystyle{ \, \min_{\beta, \beta_0, \zeta} \left(\frac{1}{2} |\beta|^2 +\gamma \sum_i \zeta_i\right) }[/math]

And now our constraint is as follows:

[math]\displaystyle{ \, y_i(\beta^T x_i + \beta_0) \geq 1-\zeta_i }[/math]

[math]\displaystyle{ \, \zeta_i \geq 0 }[/math]

We have to check that all KKT conditions are satisfied:

[math]\displaystyle{ \, \mathcal{L}(\beta,\beta_0,\zeta_i,\alpha_i,\lambda_i)=\frac{1}{2}|\beta|^2+\gamma \sum_i \zeta_i -\sum_i \alpha_i[y_i(\beta^T x_i +\beta_0)-(1-\zeta_i)] - \sum_i \lambda_i \zeta_i }[/math]

[math]\displaystyle{ \, 1) \frac{\partial\mathcal{L}}{\partial \beta}=\beta-\sum_i \alpha_i y_i x_i = 0 \rightarrow \beta=\sum_i \alpha_i y_i x_i }[/math]

[math]\displaystyle{ \, 2) \frac{\partial\mathcal{L}}{\partial \beta_0}=\sum_i \alpha_i y_i =0 }[/math]


[math]\displaystyle{ \, 3) \frac{\partial\mathcal{L}}{\partial \zeta_i}=\gamma - \alpha_i - \lambda_i = 0 \rightarrow \gamma = \alpha_i + \lambda_i }[/math]

Substituting these conditions back into the Lagrangian yields the dual form, which we derive in the next lecture.

Support Vector Machine Continued (Lecture: Nov. 3, 2011)

Case 2: Linearly Non-Separable Data (Soft Margin [10]) Continued

Recall from last time that soft margins are used instead of hard margins when we are using SVM to classify data points that are not linearly separable.

Soft Margin SVM Derivation of Dual

The soft-margin SVM optimization problem is defined as:

[math]\displaystyle{ \min \{\frac{1}{2}|\boldsymbol{\beta}|^2 + \gamma\sum_i \zeta_i\} }[/math]

subject to the constraints [math]\displaystyle{ y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad ,\quad \zeta_i \ge 0 }[/math],

where [math]\displaystyle{ \gamma \sum_i \zeta_i }[/math] is the penalty term that penalizes the slack variables. Note that setting all [math]\displaystyle{ \zeta_i=0 }[/math] recovers the Hard Margin SVM classifier.

(A point with [math]\displaystyle{ \zeta_i \gt 0 }[/math] is one that crosses, or falls inside, the margin.)

In other words, we have relaxed the constraint for each [math]\displaystyle{ \boldsymbol{x_i} }[/math] so that it can violate the margin by an amount [math]\displaystyle{ \zeta_i }[/math]. As such, we want to make sure that all [math]\displaystyle{ \zeta_i }[/math] values are as small as possible. So, we penalize them in the objective function by a factor of some chosen [math]\displaystyle{ \gamma }[/math].

Forming the Lagrangian

In this case we have two sets of constraints in the Lagrangian primal form (the margin constraints and [math]\displaystyle{ \zeta_i \ge 0 }[/math]) and therefore we optimize with respect to two sets of dual variables [math]\displaystyle{ \, \alpha }[/math] and [math]\displaystyle{ \,\lambda }[/math],

[math]\displaystyle{ L(\boldsymbol{\beta},\beta_0,\zeta_i,\alpha_i,\lambda_i) = \frac{1}{2} |\boldsymbol{\beta}|^2 + \gamma \sum_i \zeta_i - \sum_i \alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] - \sum_i \lambda_i \zeta_i }[/math]

Note the following simplification:

[math]\displaystyle{ - \sum_i \alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] = -\boldsymbol{\beta}^T\sum_i\alpha_i y_i x_i-\beta_0\sum_i\alpha_iy_i+\sum_i\alpha_i-\sum_i\alpha_i\zeta_i }[/math]

Apply KKT conditions

[math]\displaystyle{ \begin{align} 1) &\frac{\partial \mathcal{L}}{\partial \boldsymbol{\beta}} = \boldsymbol{\beta}-\sum_i \alpha_i y_i \boldsymbol{x_i} = 0 \\ & \rightarrow \boldsymbol{\beta} = \sum_i \alpha_i y_i \boldsymbol{x_i} \\ &\frac{\partial \mathcal{L}}{\partial \beta_0} = \sum_i \alpha_i y_i = 0 \\ &\frac{\partial \mathcal{L}}{\partial \zeta_i} = \gamma - \alpha_i - \lambda_i = 0 \\ & \rightarrow \boldsymbol{\gamma} = \alpha_i + \lambda_i \\ 2) &\text{dual feasibility: } \alpha_i \ge 0, \lambda_i \ge 0 \\ 3) &\alpha_i [y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0)-1+\zeta_i] = 0, \text{ and } \lambda_i \zeta_i = 0 \\ 4) &y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad,\quad \zeta_i \ge 0 \\ \end{align} }[/math]

Objective Function

Simplifying the Lagrangian the same way we did with the hard margin case, we get the following:

[math]\displaystyle{ \begin{align} L &= \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \gamma \sum_i \zeta_i - \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} - \beta_0 \sum_i \alpha_i y_i + \sum_i \alpha_i - \sum_i \alpha_i \zeta_i - \sum_i \lambda_i \zeta_i \\ &= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i - 0 + (\sum_i \gamma \zeta_i - \sum_i \alpha_i \zeta_i - \sum_i \lambda_i \zeta_i) \\ &= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i + \sum_i (\gamma - \alpha_i - \lambda_i) \zeta_i \\ &= -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} + \sum_i \alpha_i \end{align} }[/math]

subject to the constraints:

[math]\displaystyle{ \begin{align} \alpha_i &\ge 0 \\ \sum_i \alpha_i y_i &= 0 \\ \lambda_i &\ge 0 \end{align} }[/math]

Notice that the simplified Lagrangian is the exact same as the hard margin case. The only difference with the soft margin case is the additional constraint [math]\displaystyle{ \lambda_i \ge 0 }[/math]. However, [math]\displaystyle{ \gamma }[/math] doesn't actually appear directly in the objective function. But, we can discern the following:

[math]\displaystyle{ \lambda_i = 0 \implies \alpha_i = \gamma }[/math]

[math]\displaystyle{ \lambda_i \gt 0 \implies \alpha_i \lt \gamma }[/math]

Thus, we can derive that the only difference with the soft margin case is the constraint [math]\displaystyle{ 0 \le \alpha_i \le \gamma }[/math]. This problem can be solved with quadratic programming.

Soft Margin SVM Formulation Summary

In summary, the primal form of the soft-margin SVM is given by:

[math]\displaystyle{ \begin{align} \min_{\boldsymbol{\beta}, \boldsymbol{\zeta}} \quad & \frac{1}{2}|\boldsymbol{\beta}|^2 + \gamma\sum_i \zeta_i \\ \text{s.t. } & y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \ge 1-\zeta_i \quad, \quad \zeta_i \ge 0 \qquad i=1,...,M \end{align} }[/math]


The corresponding dual form which we derived above is:

[math]\displaystyle{ \begin{align} \max_{\boldsymbol{\alpha}} \quad & \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \boldsymbol{x_i}^T \boldsymbol{x_j} \\ \text{s.t. } & \sum_i \alpha_i y_i = 0 \\ & 0 \le \alpha_i \le \gamma, \qquad i=1,...,M \end{align} }[/math]

Note that the soft-margin dual objective is identical to the hard-margin dual objective! The only difference is that the [math]\displaystyle{ \,\alpha_i }[/math] variables are no longer unbounded but are restricted to be at most [math]\displaystyle{ \,\gamma }[/math]. This restriction keeps the optimization problem bounded (and hence solvable) when the data are non-separable. In the hard-margin case with non-separable data, the unbounded [math]\displaystyle{ \,\alpha_i }[/math] mean there may be no finite maximum for the objective, and we would not be able to converge to a solution.

Also note that [math]\displaystyle{ \,\gamma }[/math] is a model parameter and must be chosen as a fixed constant. It controls the trade-off between the size of the margin and the amount of margin violation allowed. In a data set with a lot of noise (or non-separability) you may want to choose a smaller [math]\displaystyle{ \,\gamma }[/math] to ensure a large margin. In practice, [math]\displaystyle{ \,\gamma }[/math] is chosen by cross-validation, which tests the model on a held-out sample to determine which [math]\displaystyle{ \,\gamma }[/math] gives the best result. However, it may be troublesome to work with [math]\displaystyle{ \,\gamma }[/math] since [math]\displaystyle{ \,\gamma \in (0, \infty) }[/math], so a variant formulation known as [math]\displaystyle{ \,\nu }[/math]-SVM is often used, which has a better-scaled parameter [math]\displaystyle{ \,\nu \in (0,1) }[/math] instead of [math]\displaystyle{ \,\gamma }[/math] to balance margin versus separability.

Finally note that as [math]\displaystyle{ \,\gamma \rightarrow \infty }[/math], the soft-margin SVM converges to hard-margin, as we do not allow any violation.

Soft Margin SVM Problem Interpretation

Like in the case of hard-margin the dual formulation for soft-margin given above allows us to interpret the role of certain points as support vectors.

We consider three cases:

Case 1: [math]\displaystyle{ \,\alpha_i=\gamma }[/math]

From KKT condition 1 (third part), [math]\displaystyle{ \,\gamma - \alpha_i - \lambda_i = 0 }[/math] implies [math]\displaystyle{ \,\lambda_i = 0 }[/math].

From KKT condition 3 (second part) [math]\displaystyle{ \,\lambda_i \zeta_i = 0 }[/math] this now suggests [math]\displaystyle{ \,\zeta_i \gt 0 }[/math].

Thus this is a point that violates the margin, and we say [math]\displaystyle{ \,x_i }[/math] is inside the margin.

Case 2: [math]\displaystyle{ \,\alpha_i=0 }[/math]

From KKT condition 1 (third part), [math]\displaystyle{ \,\gamma - \alpha_i - \lambda_i = 0 }[/math] implies [math]\displaystyle{ \,\lambda_i \gt 0 }[/math].

From KKT condition 3 (second part) [math]\displaystyle{ \,\lambda_i \zeta_i = 0 }[/math] this now implies [math]\displaystyle{ \,\zeta_i = 0 }[/math].

Finally, since [math]\displaystyle{ \,\alpha_i = 0 }[/math], KKT condition 3 (first part) places no restriction; primal feasibility together with [math]\displaystyle{ \,\zeta_i = 0 }[/math] gives [math]\displaystyle{ y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) \geq 1 }[/math] (generically [math]\displaystyle{ \gt 1 }[/math]), so the point is classified correctly and we say [math]\displaystyle{ \,x_i }[/math] is outside the margin. In particular, [math]\displaystyle{ \,x_i }[/math] does not play a role in determining the classifier, and if we ignored it, we would get the same result.

Case 3: [math]\displaystyle{ \,0 \lt \alpha_i \lt \gamma }[/math]

From KKT condition 1 (third part), [math]\displaystyle{ \,\gamma - \alpha_i - \lambda_i = 0 }[/math] implies [math]\displaystyle{ \,\lambda_i \gt 0 }[/math].

From KKT condition 3 (second part) [math]\displaystyle{ \,\lambda_i \zeta_i = 0 }[/math] this now implies [math]\displaystyle{ \,\zeta_i = 0 }[/math].

Finally, from KKT condition 3 (first part), [math]\displaystyle{ y_i(\boldsymbol{\beta}^T \boldsymbol{x_i} + \beta_0) = 1-\zeta_i }[/math], and since [math]\displaystyle{ \,\zeta_i = 0 }[/math], the point is on the margin and we call it a support vector.

These three scenarios are depicted in Fig..

Remark (converse of Case 1): if [math]\displaystyle{ \, \zeta_i \gt 0 }[/math], then complementary slackness ([math]\displaystyle{ \, \lambda_i \zeta_i = 0 }[/math]) forces [math]\displaystyle{ \, \lambda_i=0 }[/math], which in turn implies [math]\displaystyle{ \, \alpha_i=\gamma }[/math]; such a point satisfies [math]\displaystyle{ y_i(\boldsymbol{\beta}^T\mathbf{x}_i+\beta_0) = 1-\zeta_i \lt 1 }[/math], so it is closer to the boundary and [math]\displaystyle{ x_i }[/math] lies inside the margin.

Soft Margin SVM with Kernel

Like hard-margin SVM, we can use the kernel trick to find a non-linear classifier using the dual formulation.

In particular, we define a non-linear mapping for [math]\displaystyle{ \boldsymbol{x_i} }[/math] as [math]\displaystyle{ \Phi(\boldsymbol{x_i}) }[/math], then in dual objective we compute [math]\displaystyle{ \Phi^T(\boldsymbol{x_i}) \Phi(\boldsymbol{x_j}) }[/math] instead of [math]\displaystyle{ \boldsymbol{x_i}^T \boldsymbol{x_j} }[/math]. Using a kernel function [math]\displaystyle{ K(\boldsymbol{x_i}, \boldsymbol{x_j}) = \Phi^T(\boldsymbol{x_i}) \Phi(\boldsymbol{x_j}) }[/math] from the list provided in the previous lecture notes, we then do not need to explicitly map [math]\displaystyle{ \Phi(\boldsymbol{x_i}) }[/math].

The dual problem we solve is:

[math]\displaystyle{ \begin{align} \max_{\boldsymbol{\alpha}} \quad & \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(\boldsymbol{x_i}, \boldsymbol{x_j}) \\ \text{s.t. } & \sum_i \alpha_i y_i = 0 \\ & 0 \le \alpha_i \le \gamma, \qquad i=1,...,M \end{align} }[/math]

where [math]\displaystyle{ \, K(\boldsymbol{x_i}, \boldsymbol{x_j}) }[/math] is an appropriate kernel function.

To make it clear why we do not need to explicitly map [math]\displaystyle{ \Phi(\boldsymbol{x_i}) }[/math]: If we use the kernel trick, both hard- and soft-margin SVMs find the following value for the optimum [math]\displaystyle{ \boldsymbol{\beta} }[/math]:

[math]\displaystyle{ \boldsymbol{\beta} = \sum_i \alpha_i y_i \Phi(\boldsymbol{x_i}) }[/math]

From the definition of the classifier, the class labels for points are given by:

[math]\displaystyle{ \boldsymbol{\beta}^T \Phi(\boldsymbol{x}) + \beta_0 }[/math]

Plugging the formula for [math]\displaystyle{ \boldsymbol{\beta} }[/math] in the expression above we get:

[math]\displaystyle{ \sum_i \alpha_i y_i \Phi(\boldsymbol{x_i})^T \Phi(\boldsymbol{x}) + \beta_0 }[/math]

which, from the properties of kernel functions, is equal to:

[math]\displaystyle{ \sum_i \alpha_i y_i K(\boldsymbol{x_i}, \boldsymbol{x}) + \beta_0 }[/math]

Thus, we do not need to explicitly map [math]\displaystyle{ \boldsymbol{x_i} }[/math] to a higher dimension.

Soft Margin SVM Implementation

The SVM optimization problem is a quadratic program and we can use any quadratic solver to accomplish this. For example, matlab's optimization toolbox provides quadprog. Alternatively, CVX (by Stephen Boyd) is an excellent optimization toolbox that integrates with matlab and allows one to enter convex optimization problems as though they are written on paper (and it is free).

We prefer to solve the dual since it is an easier problem (and it also allows us to use a kernel). Using CVX this would be coded as:

% Assumes X is the M-by-d data matrix, y is the M-by-1 vector of +/-1 labels,
% and gamma is the chosen soft-margin penalty parameter.
K = X*X';           % Linear kernel (Gram matrix)
H = (y*y') .* K;    % H_ij = y_i y_j x_i' x_j
cvx_begin
    variable alpha(M,1);                          % dual variables
    maximize( sum(alpha) - 0.5*alpha'*H*alpha )   % dual objective
    subject to
         y'*alpha == 0;                           % equality constraint
         alpha >= 0;                              % dual feasibility
         alpha <= gamma;                          % soft-margin upper bound
cvx_end

which provides us with optimal [math]\displaystyle{ \,\boldsymbol{\alpha} }[/math].

Now we can obtain [math]\displaystyle{ \,\beta_0 }[/math] by using any point on the margin (i.e. [math]\displaystyle{ \,0 \lt \alpha_i \lt \gamma }[/math]), and solving

[math]\displaystyle{ y_i \left(\sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x_i}) + \beta_0 \right) = 1 }[/math]

Note that the linear kernel [math]\displaystyle{ \,K(\boldsymbol{x_i}, \boldsymbol{x_j}) = \boldsymbol{x_i}^T \boldsymbol{x_j} }[/math] can also be used here.

Finally, we can classify a new data point [math]\displaystyle{ \,\boldsymbol{x} }[/math], according to

[math]\displaystyle{ h(\boldsymbol{x}) = \begin{cases} +1, \ \ \text{if } \sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x}) + \beta_0 \gt 0\\ -1, \ \ \text{if } \sum_j y_j \alpha_j K(\boldsymbol{x_j}, \boldsymbol{x}) + \beta_0 \lt 0 \end{cases} }[/math]

Alternatively, using plain Matlab with quadprog, the following code finds [math]\displaystyle{ \,\beta }[/math] (here b) and [math]\displaystyle{ \,\beta_0 }[/math] (here b0).

ell = size(X, 1);                       % number of training points
K = X * X';                             % linear kernel (Gram matrix)
% (1/gamma)*eye(ell) adds a small ridge to the kernel matrix, keeping H positive definite
H = (y * y') .* (K + (1/gamma) * eye(ell));
f = -ones(ell, 1);
LB = zeros(ell, 1);
UB = gamma * ones(ell, 1);
alpha = quadprog(H, f, [], [], y', 0, LB, UB);
b = X' * (alpha .* y);                  % beta = sum_i alpha_i y_i x_i
% Pick a support vector from the positive class (alpha well above zero) to solve for the intercept b0
i = min(find((alpha > 0.1) & (y == 1)));
b0 = 1 - K(i, :) * (alpha .* y);
Intuitive Connection to Hard Margin Case

The forms of the dual in the Hard Margin and Soft Margin cases are exceedingly similar; the only difference is a further restriction ([math]\displaystyle{ \ \alpha_i \leq \gamma }[/math]) on the dual variables. You could even use the soft-margin formulation to solve a case where the hard-margin problem is feasible. This is not typically done, but doing so can give considerable insight into how the soft-margin problem reacts to changes in [math]\displaystyle{ \ \gamma }[/math]. If we let [math]\displaystyle{ \ \gamma \to +\infty }[/math] we see that the soft-margin problem approaches the hard-margin problem. Examining the primal problem, this matches our intuitive expectation: as [math]\displaystyle{ \ \gamma \to +\infty }[/math] the penalty for being inside the margin increases to infinity, and the optimal solution places paramount importance on achieving a hard margin.

When choosing [math]\displaystyle{ \ \gamma }[/math] one needs to be careful and understand the implications. Values of [math]\displaystyle{ \ \gamma }[/math] that are too large result in slavish dedication to getting as close to a hard margin as possible, which can produce poor decisions, especially when outliers are present. Values of [math]\displaystyle{ \ \gamma }[/math] that are too small do not adequately penalize misclassified points. It is important both to test different values of [math]\displaystyle{ \ \gamma }[/math] and to exercise discretion when selecting the candidate values of [math]\displaystyle{ \ \gamma }[/math] to test. It is also important to examine the impact of outliers, as they can be extremely destructive to the usefulness of the SVM classifier.


Multiclass Support Vector Machines

Support vector machines were originally designed for binary classification, so we need a methodology to adapt binary SVMs to multi-class problems. How to effectively extend SVMs to multi-class classification is still an ongoing research issue. Currently the most popular approach is to construct and combine several binary classifiers. Different coding and decoding strategies can be used for this purpose, among which one-against-all and one-against-one (pairwise) are the most popular <ref name="CMBishop" />.

One-Against-All method

Assume that we have [math]\displaystyle{ \ k }[/math] discrete classes. For a one-against-all SVM, we determine [math]\displaystyle{ \ k }[/math] decision functions that separate one class from the remaining classes. Let the [math]\displaystyle{ \ i^{th} }[/math] decision function, with the maximum margin, that separates class [math]\displaystyle{ \ i }[/math] from the remaining classes be:


[math]\displaystyle{ D_i(\mathbf{x})=\mathbf{w}_i^Tf(\mathbf{x})+b_i }[/math]


The hyperplane [math]\displaystyle{ \ D_i(\mathbf{x})=0 }[/math] is the optimal separating hyperplane for this binary subproblem, and if the classification problem is separable, the training data [math]\displaystyle{ \mathbf{x} }[/math] satisfy

[math]\displaystyle{ \begin{cases} D_i(\mathbf{x})\geq1 &,\mathbf{x}\text{ belongs to class }i\\ D_i(\mathbf{x})\leq-1 &,\mathbf{x}\text{ belongs to the remaining classes}\\ \end{cases} }[/math]

In other words, the decision for class [math]\displaystyle{ \ i }[/math] is based on the sign of [math]\displaystyle{ \ D_i(\mathbf{x}) }[/math], so it is a discrete decision function. If the above condition is satisfied for more than one [math]\displaystyle{ \ i }[/math], or for no [math]\displaystyle{ \ i }[/math] at all, then [math]\displaystyle{ \mathbf{x} }[/math] is unclassifiable. The figure below demonstrates the one-vs-all multi-class scheme, where the pink area is the unclassifiable region.

File:one-vs-all multiclass.jpg
one-against-all multi-class scheme
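As a rough sketch of the one-against-all scheme in R (using the e1071 package and the iris data purely for illustration; the handling of the decision-value signs is an assumption worth checking against your e1071 version), we train one binary SVM per class and assign each point to the class with the largest decision value:

library(e1071)
data(iris)
X <- iris[, 1:4]
y <- iris$Species
classes <- levels(y)

# One binary SVM per class: class i versus all remaining classes
scores <- sapply(classes, function(cls) {
  yk  <- factor(ifelse(y == cls, cls, "rest"), levels = c(cls, "rest"))
  fit <- svm(X, yk, kernel = "radial")
  dv  <- attr(predict(fit, X, decision.values = TRUE), "decision.values")
  # The decision-value column is named "first/second"; orient it so larger values mean "more like cls"
  if (colnames(dv)[1] != paste(cls, "rest", sep = "/")) dv <- -dv
  as.numeric(dv)
})

pred <- classes[max.col(scores)]   # pick the class i with the largest D_i(x)
mean(pred == y)                    # training accuracy of the combined one-vs-all classifier

For what it's worth, e1071's own multi-class svm() handles more than two classes internally using the one-against-one strategy described next.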

One-Against-One (Pairwise) method

In this method we construct a binary classifier for each possible pair of classes and therefore for [math]\displaystyle{ \ k }[/math] classes we will have [math]\displaystyle{ \frac{(k)(k-1)}{2} }[/math] decision functions. The decision function for the pair of classes [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] is given by

[math]\displaystyle{ D_{ij}=\mathbf{w}_{ij}^Tf(\mathbf{x})+b_{ij} }[/math]


where [math]\displaystyle{ D_{ji}(\mathbf{x})=-D_{ij}(\mathbf{x}) }[/math].


The final decision is made by a maximum-voting scheme. That is, for the datum [math]\displaystyle{ \mathbf{x} }[/math] we calculate


[math]\displaystyle{ D_i(\mathbf{x})=\sum_{j=1, j\neq i}^{k} \operatorname{sign}(D_{ij}(\mathbf{x})) }[/math]


And [math]\displaystyle{ \mathbf{x} }[/math] is classified into the class [math]\displaystyle{ \arg\max_i D_i(\mathbf{x}) }[/math].


Figure below demonstrates the one-vs-one multi-class scheme where the pink area is the unclassifiable region.


File:one-vs-one multiclass.jpg
one-vs-one multi-class scheme

Advantages of Support Vector Machines

  • SVMs provide good out-of-sample generalization: by choosing an appropriate generalization grade, SVMs can be robust even when the training sample has some bias. This is mainly due to the selection of the optimal hyperplane.
  • SVMs deliver a unique solution, since the optimality problem is convex. This is an advantage compared to Neural Networks, which have multiple solutions associated with local minima and for this reason may not be robust over different samples.
  • State-of-the-art accuracy on many problems.
  • SVMs can handle any data type by changing the kernel.

Disadvantages of Support Vector Machines

  • Difficulty in the choice of the kernel (which we will study later).
  • Limitations in speed and size, both in training and testing.
  • Discrete data presents another problem, although with suitable rescaling excellent results have nevertheless been obtained.
  • The optimal design for multiclass SVM classifiers is a further area of research.
  • High algorithmic complexity and the extensive memory requirements of the required quadratic programming in large-scale tasks.

Comparison with Neural Networks <ref>www.cs.toronto.edu/~ruiyan/csc411/Tutorial11.ppt</ref>

  1. Neural Networks:
    1. Hidden Layers map to lower dimensional spaces
    2. Search space has multiple local minima
    3. Training is expensive
    4. Classification extremely efficient
    5. Requires number of hidden units and layers
    6. Very good accuracy in typical domains
  2. SVMs
    1. Kernel maps to a very-high dimensional space
    2. Search space has a unique minimum
    3. Training is extremely efficient
    4. Classification extremely efficient
    5. Kernel and cost the two parameters to select
    6. Very good accuracy in typical domains
    7. Extremely robust

The Naive Bayes Classifier

The naive Bayes classifier is a very simple (and often effective) classifier based on Bayes rule. For further reading check [11]

The naive Bayes assumption is that all the features are conditionally independent given the class label. Even though this is usually false (since features are usually dependent), the resulting model is easy to fit and works surprisingly well.

That is, the features [math]\displaystyle{ \,x_{ij} }[/math], [math]\displaystyle{ \,j = 1, ..., d }[/math], are assumed to be conditionally independent given the class, where [math]\displaystyle{ \, \mathbf{x}_i \in \mathbb{R}^d }[/math].

Thus the Bayes classifier is [math]\displaystyle{ h(\mathbf{x}) = \arg\max_k \quad \pi_k f_k(\mathbf{x}) }[/math]

where [math]\displaystyle{ \hat{f}_k(\mathbf{x}) = \hat{f}_k(x_1 x_2 ... x_d)= \prod_{j=1}^d \hat{f}_{kj}(x_j) }[/math].

We can see this is a direct application of Bayes rule: [math]\displaystyle{ P(Y=k|X=\mathbf{x}) =\frac{P(X=\mathbf{x}|Y=k) P(Y=k)} {P(X=\mathbf{x})} = \frac{f_k(\mathbf{x}) \pi_k} {\sum_k f_k(\mathbf{x}) \pi_k} }[/math],

with [math]\displaystyle{ \, f_k(\mathbf{x})=\prod_{j=1}^d f_{kj}(x_j) }[/math] under the naive Bayes assumption, and [math]\displaystyle{ \ \mathbf{x} \in \mathbb{R}^d }[/math].

Note that earlier we assumed class-conditional densities which were multivariate normal with a dense covariance matrix. In this case we are forcing the covariance matrix to be diagonal. This simplification, while not realistic, can provide a more robust model.
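To make the product-of-densities formula concrete, here is a minimal from-scratch sketch in R (Gaussian class-conditional densities for each feature, i.e. the diagonal-covariance simplification just described; the iris data is used only as an example, and the built-in naiveBayes fit shown next should give essentially the same answers):

data(iris)
X <- as.matrix(iris[, 1:4])
y <- iris$Species
classes <- levels(y)

# Per-class, per-feature means and standard deviations, plus class priors pi_k
mu    <- sapply(classes, function(k) colMeans(X[y == k, ]))
sdev  <- sapply(classes, function(k) apply(X[y == k, ], 2, sd))
prior <- table(y) / length(y)

# f_k(x) = prod_j f_kj(x_j); classify each row by argmax_k pi_k f_k(x)
scores <- sapply(classes, function(k) {
  dens <- sapply(1:ncol(X), function(j) dnorm(X[, j], mu[j, k], sdev[j, k]))
  as.numeric(prior[k]) * apply(dens, 1, prod)
})
pred <- classes[max.col(scores)]
mean(pred != y)   # misclassification rate; should be close to the 4% reported for naiveBayes below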

As another example, consider the 'iris' dataset in R. We would like to use known data (sepal length, sepal width, petal length, and petal width) to predict species of iris. As is typically done, we will use the maximum a posteriori (MAP) rule to decide the class to which each observation belongs. The code for using a built-in function in R to classify is:

# If you were to use a built-in function for Naive Bayes classification,
# this is how it would work:

library(lattice)   # these are the libraries from which packages are needed
library(class)
library(e1071)

count <- 0         # this will keep track of properly classified objects
attach(iris)
model <- (Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width)
m <- naiveBayes(model, data = iris)
p <- predict(m, iris)              # you could also use a table here
for (i in 1:length(Species)) {
  if (p[i] == Species[i]) {
    count <- count + 1
  }
}
misclass <- (length(Species) - count) / length(Species)
misclass
# So we get that 4% of the points are misclassified.

In this particular dataset, we would not expect naïve Bayes to be the best approach for classification, since the assumption of independent predictor variables is violated (sepal length and sepal width are related, for example). However, misclassification rate is low, which indicates that naïve Bayes does a good job of classifying these data.

K-Nearest-Neighbors(k-NN)

k-NN classifies [math]\displaystyle{ x }[/math] by assigning it the label most frequently represented among its k nearest samples, i.e. by using a voting scheme.

Given a data point x, find the k nearest data points to x and classify x using the majority vote of these k neighbors (k is a positive integer, typically small.) If k=1, then the object is simply assigned to the class of its nearest neighbor.
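A minimal example in R using the class package (the 100/50 train/test split and k = 5 are arbitrary choices made only for illustration):

library(class)
data(iris)

set.seed(3)
idx   <- sample(nrow(iris), 100)                 # arbitrary train/test split
train <- iris[idx, 1:4]
test  <- iris[-idx, 1:4]
cl    <- iris$Species[idx]

pred <- knn(train, test, cl, k = 5)              # majority vote among the 5 nearest neighbours
mean(pred == iris$Species[-idx])                 # test-set accuracy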


  1. Ties can be broken randomly.
  2. k can be chosen by cross-validation.
  3. The k-nearest neighbor algorithm is sensitive to the local structure of the data<ref>

http://www.saylor.org/site/wp-content/uploads/2011/02/Wikipedia-k-Nearest-Neighbor-Algorithm.pdf</ref>.

  4. Nearest neighbor rules in effect compute the decision boundary in an implicit manner.
Requirements of k-NN:

<ref>http://courses.cs.tamu.edu/rgutier/cs790_w02/l8.pdf</ref>

  1. An integer k
  2. A set of labeled examples (training data)
  3. A metric to measure “closeness”
Advantages:
  1. With a large sample, k-NN can approach the optimal (Bayes) error rate (for 1-NN, at most twice the Bayes error asymptotically).
  2. Simple implementation.
  3. There are some noise reduction techniques that work only for k-NN and improve the efficiency and accuracy of the classifier.
Disadvantages:
  1. If the training set is too large, it may have poor run-time performance.
  2. k-NN is very sensitive to irrelevant features since all features contribute to the similarity and thus to the classification.<ref>

http://www.google.ca/url?sa=t&rct=j&q=k%20nearest%20neighbors%20disadvantages&source=web&cd=1&ved=0CCIQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.100.1131%26rep%3Drep1%26type%3Dpdf&ei=3feyToHMG8Xj0QGOoMDKBA&usg=AFQjCNFF1XsYgZy1W2YLQMNTq_7s07mfqg&sig2=qflY4MffEHwP9n-WpnWMdg</ref>

  3. A small training set can lead to a high misclassification rate.
  2. kNN suffers from the curse of dimensionality. As the number of dimensions of the feature space increases, points become further apart from each other, making it harder to classify new points. In 10 dimensions, each point needs to cover an area of approximately 80% the value of each coordinate to capture 10% of the data. (See textbook page 23). Algorithms to solve this problem include approximate nearest neighbour. <ref>P. Indyk and R. Motwani, Approximate nearest neighbors: towards removing the curse of dimensionality. STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing. pg 604-613.</ref>
Extensions and Applications

In order to improve the obtained results, we can do following:

  1. Preprocessing: smoothing the training data (remove any outliers and isolated points)
  2. Adapt metric to data

Besides classification, k-nearest-neighbours is useful for other tasks as well. For example, k-NN has been used in regression and in product recommendation systems<ref> http://www.cs.ucc.ie/~dgb/courses/tai/notes/handout4.pdf</ref>.

In 1996, Support Vector Regression <ref>"Support Vector Regression Machines". Advances in Neural Information Processing Systems 9, NIPS 1996, 155–161, MIT Press.</ref> was proposed. SVR depends only on a subset of the training data, since the cost function ignores training points whose predictions fall within a threshold of their targets.

SVM is commonly used in Bioinformatics. Common uses include classification of DNA sequences, promoter recognition, and identifying disease-related microRNAs. Promoters are short sequences of DNA that act as a signal for gene expression. In one paper, Robertas Damaševičius uses a power series kernel function and 11 classification rules for data projection to classify these sequences, to aid active gene location.<ref>Damaševičius, Robertas. "Analysis of Binary Feature Mapping Rules for Promoter Recognition in Imbalanced DNA Sequence Datasets using Support Vector Machine". Proceedings from 4th International IEEE Conference "Intelligent Systems". 2008.</ref> MicroRNAs are non-coding RNAs that target mRNAs for cleavage in protein synthesis. There is growing evidence suggesting that microRNAs "play important roles in human disease development, progression, prognosis, diagnosis and evaluation of treatment response". Therefore, there is increasing research into the role of microRNAs underlying human diseases. SVM has been proposed as a method of separating positive microRNA disease-associations from negative ones.<ref>Jiang, Qinghua; Wang, Guohua; Zhang, Tianjiao; Wang, Yadong. "Predicting Human microRNA-disease Associations Based on Support Vector Machine". Proceedings from IEEE International Conference on Bioinformatics and Biomedicine. 2010.</ref>

Selecting k

Generally speaking, a larger k reduces the effect of noise on the classification, but it also increases the computational cost and can blur the boundaries between classes. To determine an optimal k, cross-validation can be used.<ref>http://chem-eng.utoronto.ca/~datamining/dmc/k_nearest_neighbors_reg.htm</ref> Traditionally, k is fixed for every test example. Another approach, the adaptive k-nearest neighbor algorithm, was proposed to improve the selection of k. In this algorithm, k is not a fixed number but depends on the nearest neighbour of the data point. In the training phase, the algorithm calculates the optimal k for each training data point, which is the minimum number of neighbors required to get the correct class label. In the testing phase, it finds the nearest neighbor of the testing data point and its corresponding optimal k, and then performs the k-NN algorithm with that k to classify the point. <ref>Shiliang Sun, Rongqing Huang, "An adaptive k-nearest neighbor algorithm", 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), 2010.</ref>
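As a small sketch of choosing k by cross-validation in R (leave-one-out cross-validation via knn.cv from the class package; the candidate grid of 1 to 25 is arbitrary):

library(class)
data(iris)
X <- iris[, 1:4]
y <- iris$Species

ks  <- 1:25
err <- sapply(ks, function(k) mean(knn.cv(X, y, k = k) != y))   # leave-one-out error for each candidate k
ks[which.min(err)]                                              # k with the lowest cross-validation error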

Further Readings

1- SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition here

2- SVM application list here

3- The kernel trick for distances here

4- Exploiting the kernel trick to correlate fragment ions for peptide identification via tandem mass spectrometry here

5- General overview of SVM and Kernel Methods. Easy to understand presentation. here

Supervised Principal Component Analysis (Lecture: Nov. 8, 2011)

Recall that PCA finds the direction of maximum variation of [math]\displaystyle{ d }[/math]-dimensional data, and may be used as a dimensionality reduction pre-processing step for classification. FDA is a form of supervised dimensionality reduction (feature extraction) that finds the best direction onto which to project the data so that the data points can be easily separated into their respective classes, by considering inter- and intra-class distances (i.e. minimizing intra-class distance and variance, and maximizing inter-class distance and variance). PCA differs from FDA in that PCA is unsupervised, whereas FDA is supervised. Thus, FDA is better at finding directions that separate the classes in a supervised problem.

Supervised PCA (SPCA) is a generalization of PCA. SPCA can use label information for classification tasks and it has some advantages over FDA. For example, FDA projects onto a space of at most [math]\displaystyle{ \ k-1 }[/math] dimensions regardless of the dimensionality of the data, where [math]\displaystyle{ \ k }[/math] is the number of classes. This is not always desirable for dimensionality reduction.

SPCA estimates the sequence of principal components having the maximum dependency on the response variable. It can be solved in closed form, has a dual formulation that reduces the computational complexity when the dimension of the data is significantly greater than the number of data points, and it can be kernelized. <ref>Elnaz Barshan, Ali Ghodsi, Zohreh Azimifar, and Mansoor Zolghadri. Supervised Principal Component Analysis: Visualization, Classification and Regression on Subspaces and Submanifolds , Journal of Pattern Recognition, to appear 2011</ref>

SPCA Problem Statement

Suppose we are given a set of data [math]\displaystyle{ \ \{x_i, y_i\}_{i=1}^n , x_i \in R^{p}, y_i \in R^{l} }[/math]. Note that [math]\displaystyle{ \ y_i }[/math] is not restricted to binary classes. So the assumption of having only discrete values for labels is relaxed here, which means this model can be used for regression as well. Target values ([math]\displaystyle{ \ y }[/math]) don't have to be in a one dimensional space. Just as for PCA, we are looking for a lower dimensional subspace [math]\displaystyle{ \ S = U^T X }[/math], where [math]\displaystyle{ \ U }[/math] is an orthogonal projection. However, instead of finding the direction of maximum variation (as is the case in regular PCA), we are looking for the subspace that contains as much predictive information about [math]\displaystyle{ \ Y }[/math] as the original covariate [math]\displaystyle{ \ X }[/math], i.e. we are trying to determine a projection matrix [math]\displaystyle{ \ U }[/math] such that [math]\displaystyle{ \ P(Y|X)=P(Y|U^TX) }[/math]. We know that the predictive information must exist between the original covariate [math]\displaystyle{ \ X }[/math] and [math]\displaystyle{ \ Y }[/math], which are assumed to be drawn iid from the distribution [math]\displaystyle{ \ \{x_i, y_i\}_{i=1}^n }[/math], because if they are completely independent there is no way of doing classification or regression.

Warning

If we project our data into a high enough dimension, we can fit any data - even noise. In his book "The God gene: how faith is hardwired into our genes", Dean H. Hamer discusses how factor analysis (model which "uses regression modelling techniques to test hypotheses producing error terms" <ref>use regression modelling techniques to test hypotheses producing error terms</ref>) was used to find a correlations between the gene (VMAT2) and a person's belief in God. The full book is available at: <ref>http://books.google.ca/books?id=TmR6uAAHEssC&pg=PA33&lpg=PA33&dq=god+gene+statistics&source=bl&ots=8q-jSwKZ8O&sig=O8OBe2YaPbE0vMp9A6PxEC9DwL0&hl=en&ei=lWO8Tp_nN4H40gGA2uXjBA&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCEQ6AEwAQ#v=onepage&q&f=false </ref>.

It appears as though finding a correlation between seemingly uncorrelated data is sometimes statistically trivial. One study found correlations between people's shopping habits and their genetics. Family members were shown to have far more similar consumer habits than those who did not share DNA. This was then used to explain "fondness for specific products such as chocolate, science-fiction movies, jazz, hybrid cars and mustard." <ref>http://www.businessnewsdaily.com/genetics-incluence-shopping-habits-0593/</ref>.

The main idea is that when we are in a highly dimensional space [math]\displaystyle{ \ \mathbb{R}^d }[/math], if we do not have enough data (i.e. [math]\displaystyle{ n \approx d }[/math]), then it is easy to find a classifier that separates the data across its many dimensions.

Different Techniques for Dimensionality Reduction

  • Classical Fisher's Discriminant Analysis (FDA)

The goal of FDA is to reduce the dimensionality of data in [math]\displaystyle{ \ \mathbb{R}^d }[/math] so that the data points are separable in a lower-dimensional space (of dimension at most [math]\displaystyle{ \ k-1 }[/math] for [math]\displaystyle{ \ k }[/math] classes).

  • Metric Learning (ML)

This is a large family of methods.

  • Sufficient Dimensionality Reduction (SDR)

This is also a family of methods. In recent years SDR has been used to denote a body of new ideas and methods for dimension reduction. Like Fisher's classical notion of a sufficient statistic, SDR strives for reduction without loss of information. But unlike sufficient statistics, sufficient reductions may contain unknown parameters and thus need to be estimated.

  • Supervised Principal Components (BSPC)

A method proposed by Bair et al. This is a different method from the SPCA method discussed in class despite having a similar name.

Metric Learning

First define a new metric as:

[math]\displaystyle{ \ d_A(\mathbf{x}_i, \mathbf{x}_j)=||\mathbf{x}_i -\mathbf{x}_j||_A = \sqrt{(\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j)} }[/math]

This function is a proper metric only if [math]\displaystyle{ \ A }[/math] is a positive definite matrix. This restriction is often relaxed to positive semi-definite; relaxing the condition may be required if we wish to disregard uninformative covariates.

Note 1: [math]\displaystyle{ \ A }[/math] being positive semi-definite ensures that this metric respects non-negativity and the triangle inequality, but allows [math]\displaystyle{ \ d_A(\mathbf{x}_i,\mathbf{x}_j)=0 }[/math] to not imply [math]\displaystyle{ \ \mathbf{x}_i=\mathbf{x}_j }[/math] <ref name="Xing">Xing, EP. Distance metric learning with application to clustering with side-information. [12]</ref>.

Common choices for A

1)[math]\displaystyle{ \ A=I }[/math] This represents Euclidean distance.

2) [math]\displaystyle{ \ A=D }[/math] where [math]\displaystyle{ \ D }[/math] is a diagonal matrix. The diagonal values can be thought of as reweighting the importance of each covariate, and these weights can be learned from training data.

3) [math]\displaystyle{ \ A=D }[/math] where [math]\displaystyle{ \ D }[/math] is a diagonal matrix with [math]\displaystyle{ \ D_{ii} = Var(i^{th} covariate)^{-1} }[/math]. This scales down each covariate so that they all have equal variance and thus equal impact on the distance. This metric works very well for covariates that are independent and normally distributed.

4)[math]\displaystyle{ \ A=\Sigma^{-1} }[/math] where [math]\displaystyle{ \ \Sigma }[/math] is the covariance matrix for your set of covariates. This metric is consistant with and works very well for covariates that are normally distributed. The corresponding metric is called Mahalanobis distance.

When dealing with data that are on different measurement scales using choices 3 or 4 are vastly preferable to Euclidean distance as it prevents covariates with large measurement scales from dominating the metric.
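
As a small illustration, here is a sketch in R (plain R, no packages; the toy data and variable names are our own) computing choices 1, 3 and 4 for a pair of points measured on very different scales:

 # Toy data: 5 points in d = 2 dimensions, measured on very different scales
 set.seed(1)
 X <- cbind(height_cm = rnorm(5, 170, 10), income = rnorm(5, 50000, 8000))
 
 dA <- function(xi, xj, A) sqrt(t(xi - xj) %*% A %*% (xi - xj))   # d_A(x_i, x_j)
 
 xi <- X[1, ]; xj <- X[2, ]
 
 A1 <- diag(2)                      # choice 1: Euclidean distance
 A3 <- diag(1 / apply(X, 2, var))   # choice 3: rescale each covariate by 1/variance
 A4 <- solve(cov(X))                # choice 4: Mahalanobis distance
 
 dA(xi, xj, A1)   # dominated by the income coordinate
 dA(xi, xj, A3)   # both covariates contribute on an equal scale
 dA(xi, xj, A4)   # also accounts for correlation between the covariates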


For metric learning, construct the Mahalanobis distance over the input space and use it instead of the Euclidean distance. This is equivalent to transforming the data points using a linear transformation and then computing the Euclidean distance in the new transformed space. To see that this is true, suppose we project each data point onto a subspace [math]\displaystyle{ \ S }[/math] using [math]\displaystyle{ \ \mathbf{x}' = U^T\mathbf{x} }[/math] and calculate the Euclidean distance:

[math]\displaystyle{ \ ||\mathbf{x}'_i - \mathbf{x}'_j||_2^2= (U^T\mathbf{x}_i -U^T\mathbf{x}_j)^T(U^T\mathbf{x}_i -U^T\mathbf{x}_j) = (\mathbf{x}_i -\mathbf{x}_j)^TUU^T(\mathbf{x}_i -\mathbf{x}_j) }[/math]

This is the same as Mahalanobis distance in the new space for [math]\displaystyle{ \ A=UU^T }[/math].

One way to find [math]\displaystyle{ \ A }[/math] is to consider the set of similar pairs [math]\displaystyle{ \ (\mathbf{x}_i,\mathbf{x}_j) \in S }[/math] and the set of dissimilar pairs [math]\displaystyle{ \ (\mathbf{x}_i,\mathbf{x}_j) \in D }[/math]. Then we can solve the convex optimization problem below <ref name="Xing" />.

[math]\displaystyle{ min_A \sum_{(\mathbf{x}_i,\mathbf{x}_j)\in S} (\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j) }[/math]

s.t. [math]\displaystyle{ \sum_{(\mathbf{x}_i,\mathbf{x}_j)\in D} (\mathbf{x}_i - \mathbf{x}_j)^TA(\mathbf{x}_i - \mathbf{x}_j)\ge 1 }[/math] and [math]\displaystyle{ \ A }[/math] positive semi-definite.


Overall, the metric learning technique will attempt to minimize the squared induced distance between similar points while maximizing the squared induced distance between dissimilar points and search for a metric which allows points from the same class to be near one another and points from different classes to be far from one another.

Sufficient Dimensionality Reduction (SDR)

The goal of dimensionality reduction is to find a function [math]\displaystyle{ \ S(\mathbf{x}) }[/math] that maps [math]\displaystyle{ \ \mathbf{x} }[/math] from [math]\displaystyle{ \ \mathbb{R}^d }[/math] to a proper subspace, which means that the dimension of [math]\displaystyle{ \ \mathbf{x} }[/math] is reduced. An example of [math]\displaystyle{ \ S(\mathbf{x}) }[/math] would be a function that uses several linear combinations of [math]\displaystyle{ \ \mathbf{x} }[/math].

For a dimensionality reduction to be sufficient the following condition must hold:

[math]\displaystyle{ \ P_{Y|X}(y|x) = P_{Y|S(X)}(y|S(x)) }[/math]

This is equivalent to saying that the distribution of [math]\displaystyle{ \ y|S(\mathbf{x}) }[/math] is the same as that of [math]\displaystyle{ \ y |\mathbf{x} }[/math]. [13]

This method aims to find a linear subspace [math]\displaystyle{ \ R }[/math] such that the projection onto this subspace preserves [math]\displaystyle{ \ P_{Y|X}(y|x) }[/math].

Suppose that [math]\displaystyle{ \ S(\mathbf{x}) = U^T\mathbf{x} }[/math] is a sufficient dimensional reduction, then

[math]\displaystyle{ \ P_{Y|X}(y|x) = P_{Y|U^TX}(y|U^T x) }[/math]

for all [math]\displaystyle{ \ x \in X }[/math], and [math]\displaystyle{ \ y \in Y }[/math], where [math]\displaystyle{ \ U^T X }[/math] is the orthogonal projection of [math]\displaystyle{ \ X }[/math] onto [math]\displaystyle{ \ R }[/math].

Graphical Motivation

In a regression setting, it is often useful to summarize the distribution of [math]\displaystyle{ y|\textbf{x} }[/math] graphically. For instance, one may consider a scatter plot of [math]\displaystyle{ y }[/math] versus one or more of the predictors. A scatter plot that contains all available regression information is called a sufficient summary plot.

When [math]\displaystyle{ \textbf{x} }[/math] is high-dimensional, particularly when the number of features exceeds 3, it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatter plots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction [math]\displaystyle{ R(\textbf{x}) }[/math] with small enough dimension, a sufficient summary plot of [math]\displaystyle{ y }[/math] versus [math]\displaystyle{ R(\textbf{x}) }[/math] may be constructed and visually interpreted with relative ease.

Hence sufficient dimension reduction allows for graphical intuition about the distribution of [math]\displaystyle{ y|\textbf{x} }[/math], which might not have otherwise been available for high-dimensional data.

Most graphical methodology focuses primarily on dimension reduction involving linear combinations of [math]\displaystyle{ \textbf{x} }[/math]. The rest of this article deals only with such reductions.[14]

Other Methods for Reduction

Two very common examples of SDR are Sliced Inverse Regression (SIR) and Sliced Average Variance Estimation (SAVE). More information on SIR can be found here [15]. In addition [16] also provides some examples for SIR.

Supervised Principal Components (BSPC)

BSPC algorithm:

1. Compute (univariate) standard regression coefficients for each feature j using the following formula:

[math]\displaystyle{ \ s_j=\frac{{X_j}^TY}{\sqrt{X_j^T X_j}} }[/math]

2. Form the reduced data matrix [math]\displaystyle{ X_\theta }[/math] consisting of the columns of [math]\displaystyle{ X }[/math] for which [math]\displaystyle{ \ |s_j|\gt \theta }[/math]. Find [math]\displaystyle{ \ \theta }[/math] by cross-validation.

3. Compute the first principal component of the reduced data matrix [math]\displaystyle{ X_\theta }[/math].

4. Use the principal component calculated in step (3) in a regression model or a classification algorithm to produce the outcome.


Bair's SPCA is consistent. In ordinary PCA, as the number of data points increases, the first component can take different directions; however, the direction of the first component of Bair's SPCA remains consistent as the number of points increases <ref>Bair E., Prediction by supervised principal components. [17]</ref>.
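
A minimal R sketch of the four steps above, under simplifying assumptions: a numeric outcome, a fixed threshold theta instead of the cross-validated one called for in step 2, and only the first principal component retained. The simulated data and variable names are our own.

 set.seed(2)
 n <- 100; d <- 50
 X <- matrix(rnorm(n * d), n, d)               # observations in rows, features in columns
 Y <- X[, 1] - 2 * X[, 2] + rnorm(n)           # only the first two features matter
 
 # Step 1: univariate regression coefficient s_j for every feature j
 s <- apply(X, 2, function(xj) sum(xj * Y) / sqrt(sum(xj * xj)))
 
 # Step 2: keep the columns with |s_j| > theta (fixed here; choose by cross-validation in practice)
 theta <- 2
 X_reduced <- X[, abs(s) > theta, drop = FALSE]
 
 # Step 3: first principal component of the reduced data matrix
 pc1 <- prcomp(X_reduced, center = TRUE, scale. = FALSE)$x[, 1]
 
 # Step 4: use that component as the predictor in a regression model
 summary(lm(Y ~ pc1))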

Hilbert-Schmidt Independence Criterion (HSIC)

"Hilbert-Schmidt Norm of the Cross-Covariance operator" is proposed as an independence criterion in reproducing kernel Hilbert spaces (RKHSs).

This measure is referred to as the Hilbert-Schmidt Independence Criterion (HSIC).

Let [math]\displaystyle{ \ z=\{(x_1,y_1),...,(x_n,y_n)\} \subset \mathcal{X} \times \mathcal{Y} }[/math] be a series of [math]\displaystyle{ \ n }[/math] independent observations drawn from [math]\displaystyle{ \ P_{(X,Y)}(x,y) }[/math]. An estimator of HSIC is given by

[math]\displaystyle{ HSIC=\frac{1}{(n-1)^2}Tr(KHBH) }[/math]

where [math]\displaystyle{ \ H, K, B \in\mathbb{R}^{n \times n} }[/math],

[math]\displaystyle{ K_{ij} =k(x_i,x_j),B_{ij}=b(y_i,y_j), H=I-\frac{1}{n}\boldsymbol{e} \boldsymbol{e}^{T} }[/math], where [math]\displaystyle{ \ k }[/math] and [math]\displaystyle{ \ b }[/math] are positive semidefinite kernel functions, and [math]\displaystyle{ \ \boldsymbol{e} = [1 1 \ldots 1]^T }[/math].

XH is the centred version of X (the mean of each row is subtracted):

[math]\displaystyle{ XH=X(I- \frac{1}{n}\boldsymbol{e} \boldsymbol{e}^T)=X -\frac{1}{n}X\boldsymbol{e} \boldsymbol{e}^T }[/math], where each entry in row i of [math]\displaystyle{ \frac{1}{n}X\boldsymbol{e}\boldsymbol{e}^T }[/math] is the mean of the [math]\displaystyle{ i^{th} }[/math] row of X.

[math]\displaystyle{ HBH }[/math] is the doubly centred version of B (the mean of each row and each column is subtracted).

We have introduced a way of measuring the dependence between two distributions. The key idea is that good features should maximize this dependence. Feature selection for various supervised learning problems can be unified under HSIC, and the solutions can be approximated using a backward-elimination algorithm. To explain this, we start by asking how to tell whether two distributions are the same. If two distributions have different mean values, then we can say right away that they are different. However, if they share the same mean value, then we need to look at the second moments of the distributions, from which we can derive the variance, and so on. Hence we need to compare higher-order moments, i.e. look at the distributions in a higher-dimensional feature space, to tell whether they are equal.

It can be shown mathematically (although not done in class) that if we define a mapping [math]\displaystyle{ \ \phi }[/math] of the random variable X into a higher-dimensional feature space, then there exists a one-to-one correspondence between [math]\displaystyle{ \ \mu_x }[/math], the mean of [math]\displaystyle{ \ \phi(X) }[/math] in that higher-dimensional space, and the distribution of X. This suggests that [math]\displaystyle{ \ \mu_x }[/math] characterizes the distribution of X.

Hence, to check whether two random variables X and Y have the same distribution, we can take the difference between [math]\displaystyle{ \ E\phi(x) }[/math] and [math]\displaystyle{ \ E\phi(y) }[/math] and compute its norm, [math]\displaystyle{ || E \phi (x) - E\phi(y) ||^2 }[/math]. If this value is equal to 0, then the two variables have the same distribution.

Now, to test the independence of [math]\displaystyle{ \ P_x }[/math] and [math]\displaystyle{ \ P_y }[/math], we can apply the previous formula to [math]\displaystyle{ \ P_{xy} }[/math] and [math]\displaystyle{ \ (P_x)(P_y) }[/math]: if it equals 0, then X and Y are independent. The larger this value is, the stronger the dependence between X and Y.

Utilizing this, we can find the [math]\displaystyle{ \ U^TX }[/math] in [math]\displaystyle{ \ P(Y|X)=P(Y|U^TX) }[/math] that maximizes the HSIC between [math]\displaystyle{ \ U^TX }[/math] and [math]\displaystyle{ \ Y }[/math], i.e. the projection with maximum dependence on [math]\displaystyle{ \ Y }[/math].


This gives the HSIC index

[math]\displaystyle{ \ Tr(KHBH) }[/math]

where X and Y are random variables, K is the kernel matrix over X, and B is the kernel matrix over Y.

Kernel Function

A positive definite kernel can always be written as an inner product of a feature mapping.
To show that a kernel function is valid:
1. define a feature mapping [math]\displaystyle{ \phi(x) }[/math] into some vector space;
2. define a dot product in a strictly positive definite form;
3. show that [math]\displaystyle{ \ k(x, x') = \lt \phi(x),\phi(x')\gt  }[/math] [18].
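
As a quick numerical illustration of step 3 (our own toy example in R), the polynomial kernel with d = 2 on two-dimensional inputs agrees with the inner product of the explicit feature map [math]\displaystyle{ \phi(x) = (x_1^2, x_2^2, \sqrt{2}x_1x_2) }[/math]:

 phi <- function(x) c(x[1]^2, x[2]^2, sqrt(2) * x[1] * x[2])   # explicit feature map
 k   <- function(x, y) (sum(x * y))^2                          # polynomial kernel with d = 2
 
 x <- c(1, 2); y <- c(3, -1)
 k(x, y)                  # 1
 sum(phi(x) * phi(y))     # 1  -> the same value, so k(x, x') = <phi(x), phi(x')>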

The kernel function is used when calculating [math]\displaystyle{ || E\phi(x) - E\phi(y) ||^2 }[/math]. Possible kernel functions we can choose include:

  • Linear kernel: [math]\displaystyle{ \,k(x,y)=x \cdot y }[/math]
  • Polynomial kernel: [math]\displaystyle{ \,k(x,y)=(x \cdot y)^d }[/math]
  • Gaussian kernel: [math]\displaystyle{ e^{-\frac{|x-y|^2}{2\sigma^2}} }[/math]
  • Delta Kernel: [math]\displaystyle{ \,k(x_i,x_j) = \begin{cases} 1 & \text{if }x_i=x_j \\ 0 & \text{if }x_i\ne x_j \end{cases} }[/math]

H is a constant matrix of the form: [math]\displaystyle{ \ H = I - \frac{1}{n}ee^T }[/math]

where [math]\displaystyle{ \ e = \left( \begin{array}{c}1 \\ \vdots \\ 1 \end{array} \right) }[/math] is the vector of ones.

H centres any matrix it is multiplied with, so HBH is the doubly centred version of B.
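
The estimator [math]\displaystyle{ \frac{1}{(n-1)^2}Tr(KHBH) }[/math] is straightforward to compute. The R sketch below (the helper functions and toy data are our own) uses a Gaussian kernel for K and a delta kernel for B:

 set.seed(3)
 n <- 60
 Y <- rep(c(-1, 1), each = n / 2)                   # class labels
 X <- matrix(rnorm(2 * n), n, 2) + cbind(Y, Y)      # X depends on Y
 
 gaussian_kernel <- function(X, sigma = 1) {
   D2 <- as.matrix(dist(X))^2                       # squared Euclidean distances between rows
   exp(-D2 / (2 * sigma^2))
 }
 delta_kernel <- function(y) outer(y, y, FUN = "==") * 1   # 1 if labels match, else 0
 
 K <- gaussian_kernel(X)                            # kernel matrix over X
 B <- delta_kernel(Y)                               # kernel matrix over Y
 H <- diag(n) - matrix(1 / n, n, n)                 # centering matrix I - ee^T/n
 
 HSIC <- sum(diag(K %*% H %*% B %*% H)) / (n - 1)^2
 HSIC                                               # close to 0 only when X and Y are independent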


We want the transformation [math]\displaystyle{ \ U^TX }[/math] that has maximum dependence on Y, so we use the HSIC index to measure the dependence between [math]\displaystyle{ \ U^TX }[/math] and Y and maximize it.

Multiplying by H centres X, i.e. XH corresponds to [math]\displaystyle{ X-\mu }[/math]. The larger the value of the index, the more dependent [math]\displaystyle{ \ U^TX }[/math] and Y are on each other.

So basically we want to maximize [math]\displaystyle{ \ Tr(KHBH) }[/math]:

[math]\displaystyle{ \ \max_U Tr(KHBH) = \max_U Tr(X^TUU^TXHBH) = \max_U Tr(U^TXHBHX^TU) }[/math]

We add a constraint to solve this problem:

[math]\displaystyle{ \ U^TU=I }[/math]

This is identical to PCA if [math]\displaystyle{ \ B=I }[/math].

SPCA: Supervised Principal Component Analysis

We need to find [math]\displaystyle{ \ U }[/math] to maximize [math]\displaystyle{ \ Tr(HKHB) }[/math], where K is a kernel of [math]\displaystyle{ \ U^T X }[/math] (e.g. [math]\displaystyle{ \ K = X^T UU^T X }[/math]) and [math]\displaystyle{ \ B }[/math] is a kernel of [math]\displaystyle{ \ Y }[/math] (e.g. [math]\displaystyle{ \ B = Y^T Y }[/math]):

  • data: [math]\displaystyle{ \ X }[/math], labels: [math]\displaystyle{ \ Y }[/math]
  • projected data: [math]\displaystyle{ \ U^T X }[/math], labels: [math]\displaystyle{ \ Y }[/math]
  • kernel over the projected data: [math]\displaystyle{ \ K = (U^T X)^T (U^T X) = X^T UU^T X }[/math], kernel over the labels: [math]\displaystyle{ \ B }[/math]

   [math]\displaystyle{ \max \; Tr(HKHB) = \max Tr(HX^T UU^T XHB) = \max Tr(U^T XHBHX^T U) }[/math]
   [math]\displaystyle{ \ subject \; to \; U^T U = I  }[/math]

Supervised Principal Component Analysis and Conventional PCA

Dimensionality Reduction of the 0-1-2 Data, Using PCA
Dimensionality Reduction of the 0-1-2 Data, Using Supervised PCA


This is identical to PCA if B = I, since then

    [math]\displaystyle{ XHBHX^T = XHX^T = (X-\mu)(X-\mu)^T \propto cov(X) }[/math]

where [math]\displaystyle{ XH }[/math] is the centred data matrix.

SPCA

Algorithm 1
- Recover basis: Calculate [math]\displaystyle{ Q=XHBHX^T }[/math] and let U = the eigenvectors of Q corresponding to the top d eigenvalues.
- Encode training data: [math]\displaystyle{ Z=U^TXH }[/math], where Z is the d x n matrix of encoded data.
- Reconstruct training data: [math]\displaystyle{ \hat{X}=UZ=UU^TXH }[/math]
- Encode test example: [math]\displaystyle{ y=U^T(x-\mu) }[/math], where y is a d dimensional encoding of x.
- Reconstruct test example: [math]\displaystyle{ \hat{x}=Uy=UU^T(x-\mu) }[/math]

Find U that maximizes [math]\displaystyle{ Tr(HKHB) }[/math], where K is a kernel of [math]\displaystyle{ U^TX }[/math] (e.g. [math]\displaystyle{ K=X^TUU^TX }[/math]) and B is a kernel of Y (e.g. [math]\displaystyle{ B=Y^TY }[/math]).

[math]\displaystyle{ \max_U Tr(KHBH) = \max_U Tr(X^TUU^TXHBH) = \max_U Tr(U^TXHBHX^TU) }[/math], since the trace is invariant under cyclic permutations.
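
A short R sketch of Algorithm 1 under the conventions above: X is d x n with observations in columns, B is a linear kernel over one-hot encoded labels, and k components are kept. The simulated data and variable names are our own.

 set.seed(4)
 n <- 100; d <- 10; k <- 2                      # n points, d features, keep k components
 labels <- rep(c(0, 1), each = n / 2)
 X <- matrix(rnorm(d * n), d, n)                # columns are observations
 X[1, ] <- X[1, ] + 3 * labels                  # feature 1 carries the label information
 
 Yind <- model.matrix(~ factor(labels) - 1)     # n x 2 one-hot indicator matrix for the labels
 B <- Yind %*% t(Yind)                          # linear kernel over the labels (n x n)
 H <- diag(n) - matrix(1 / n, n, n)             # centering matrix
 
 # Recover basis: top-k eigenvectors of Q = X H B H X^T
 Q <- X %*% H %*% B %*% H %*% t(X)
 U <- eigen(Q, symmetric = TRUE)$vectors[, 1:k]
 
 Z    <- t(U) %*% X %*% H                       # encode training data (k x n)
 Xhat <- U %*% Z                                # reconstruct the (centred) training data
 
 xnew <- rnorm(d); mu <- rowMeans(X)
 ynew <- t(U) %*% (xnew - mu)                   # encode a test example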

Dual Supervised Principle Component Analysis

Let [math]\displaystyle{ Q = XHBHX^T }[/math]. Since Q and B are both positive semi-definite, we can write

      [math]\displaystyle{ Q = \psi\psi^T }[/math]
      [math]\displaystyle{ B = \Delta\Delta^T }[/math]
      [math]\displaystyle{ \psi = XH\Delta^T }[/math]

The solution for U can be expressed as singular value decomposition (SVD) of [math]\displaystyle{ \psi }[/math]:

      [math]\displaystyle{ \psi = U \Sigma V^T }[/math]
  [math]\displaystyle{ \rightarrow \psi V = U \Sigma }[/math]
  [math]\displaystyle{ \rightarrow U = \psi V \Sigma^{-1} }[/math]
  [math]\displaystyle{ \rightarrow U^T XH = \Sigma^{-1} V^T \psi^T XH }[/math]
  [math]\displaystyle{ \rightarrow U^T XH = \Sigma^{-1} V^T \Delta H X^T XH }[/math]

This gives a relationship between V and U. You can substitute it into the algorithm above and define everything based on V instead of U. By doing this you do not need to find the eigenvectors of Q, which can be very high-dimensional.


Algorithm 2
Recover basis: calculate [math]\displaystyle{ \psi^T \psi }[/math] and let V = the eigenvectors of [math]\displaystyle{ \psi^T \psi }[/math] corresponding to the top d eigenvalues. Let [math]\displaystyle{ \Sigma }[/math] = the diagonal matrix of square roots of the top d eigenvalues.

Reconstruct training data: [math]\displaystyle{ \hat{X}=UZ=XH\Delta^T V \Sigma^{-2}V^T\Delta H(X^T X)H }[/math]

Encode test examples: [math]\displaystyle{ y=U^T(x-\mu)=\Sigma^{-1}V^T \Delta H[X^T(x-\mu)] }[/math] where y is a d dimensional encoding of x.

Towards a Unified Network

Each special case is obtained by a choice of B and a constraint on the projection [math]\displaystyle{ \omega }[/math]:

  • PCA: [math]\displaystyle{ B = I }[/math], constraint [math]\displaystyle{ \omega^T \omega = I }[/math]
  • FDA[math]\displaystyle{ ^{(1)} }[/math]: [math]\displaystyle{ B = B_0 }[/math], constraint [math]\displaystyle{ \omega^T S_\omega \omega = I }[/math], with [math]\displaystyle{ S_\omega = X B_s X^T }[/math]
  • CFML I[math]\displaystyle{ ^{(2)} }[/math]: [math]\displaystyle{ B = B_0 - B_s }[/math], constraint [math]\displaystyle{ \omega^T \omega = I }[/math]
  • CFML II[math]\displaystyle{ ^{(2)} }[/math]: [math]\displaystyle{ B = B_0 }[/math], constraint [math]\displaystyle{ \omega^T S_\omega \omega = I }[/math], with [math]\displaystyle{ S_\omega = X B_s X^T }[/math]

(1) [math]\displaystyle{ B_s=F(F^{T}F)^{-1}F^T }[/math]; (2) [math]\displaystyle{ B_s=\tfrac{1}{n}FF^{T} }[/math], [math]\displaystyle{ B_D=H-B_s }[/math]; [math]\displaystyle{ n }[/math] is the number of data points, [math]\displaystyle{ F }[/math] is the indicator matrix of the clusters, and [math]\displaystyle{ H }[/math] is the centering matrix.

Dual Supervised PCA

Each special case is again obtained by a choice of B, a constraint on U, and the component used:

  • Kernel PCA: [math]\displaystyle{ B = I }[/math], constraint [math]\displaystyle{ UU^T = I }[/math], component: arbitrary
  • K-means: [math]\displaystyle{ B = I }[/math], constraint [math]\displaystyle{ UU^T = I, U\ge 0 }[/math], component: linear

Boosting (Lecture: Nov. 10, 2011)

Boosting is a meta-algorithm that starts with a simple classifier and improves it by refitting the data, giving higher weight to misclassified samples.


Suppose that [math]\displaystyle{ \mathcal{H} }[/math] is a collection of classifiers. Assume that [math]\displaystyle{ \ y_i \in \{-1, 1\} }[/math] and that each [math]\displaystyle{ \ h(x)\in \{-1, 1\} }[/math]. Start with [math]\displaystyle{ \ h_1(x) }[/math]. Based on how well [math]\displaystyle{ \ h_1 (x) }[/math] classifies points, adjust the weights of each input and reclassify. Misclassified points are given higher weight to ensure the classifier "pays more attention" to them and fits them better in the next iteration. The idea behind boosting is to obtain a classification rule from each classifier [math]\displaystyle{ h_i(x)\in\mathcal{H} }[/math], regardless of how well it classifies the data on its own (with the proviso that its performance be better than chance), and combine all of these rules to obtain a final classifier that performs well.

File:boosting1.jpg


An intuitive way to look at boosting and the concept of weighting is to think about extreme weightings. Suppose you are doing classification on a data set and some points are misclassified. Now suppose that any points that have been classified correctly are removed from the data, so that the weak classifier is retrained only on the points it got wrong and can concentrate on them. This is how early versions of boosting worked, by resampling instead of re-weighting.

AdaBoost

Adaptive Boosting (AdaBoost) was formulated by Yoav Freund and Robert Schapire. AdaBoost is defined as an algorithm for constructing a “strong” classifier as a linear combination [math]\displaystyle{ f(\mathbf{x}) = \sum_{t=1}^T \alpha_t h_t(\mathbf{x}) }[/math] of simple “weak” classifiers [math]\displaystyle{ \ h_t(\mathbf{x}) }[/math]. It is very popular and widely known as the first algorithm that could adapt to weak learners <ref>http://www.cs.ubbcluj.ro/~csatol/mach_learn/bemutato/BenkKelemen_Boosting.pdf </ref>.

It has the following properties:

  • It is a linear classifier with all its desirable properties
  • It has good generalization properties
  • It is a feature selector with a principled strategy (minimisation of upper bound on empirical error)
  • It is close to sequential decision making

Algorithm Version 1

The AdaBoost algorithm presented in the lecture is as follows (for more info see [19]):

1 Set the weights [math]\displaystyle{ \ w_i=\frac{1}{n}, i = 1,...,n. }[/math]

2 For [math]\displaystyle{ \ j =1,...,J }[/math], do the following steps:

a) Find the classifier [math]\displaystyle{ \ h_j: \mathbf{x} \rightarrow \{-1,1\} }[/math] that minimizes the weighted error [math]\displaystyle{ \ L_j }[/math]:
[math]\displaystyle{ \ h_j= arg \underset{h_j\in \mathcal{H}}{\mbox{min}} L_j }[/math]
where [math]\displaystyle{ \ L_j = \frac{\sum_{i=1}^{n}w_iI[y_i\ne h_j(x_i)]}{\sum_{i=1}^{n} w_i} }[/math]
where [math]\displaystyle{ \ \mathcal{H} }[/math] is the set of candidate weak classifiers and [math]\displaystyle{ \ I }[/math] is the indicator function
[math]\displaystyle{ \, I= \left\{\begin{matrix} 1 & for \quad y_i\neq h_j(\mathbf{x}_i) \\ 0 & for \quad y_i = h_j(\mathbf{x}_i) \end{matrix}\right. }[/math]
b) Let [math]\displaystyle{ \alpha_j= log(\frac{1-L_j}{L_j}) }[/math]
Note that [math]\displaystyle{ \ \alpha }[/math] indicates the "goodness" of the classifier, where a larger [math]\displaystyle{ \ \alpha }[/math] value indicates a better classifier. Also, [math]\displaystyle{ \ \alpha }[/math] is always 0 or positive as long as the classification accuracy is 0.5 or higher. For example, if working with coin flips, then [math]\displaystyle{ \ L_j=0.5 }[/math] and [math]\displaystyle{ \ \alpha=0 }[/math].
c) Update the weights:
[math]\displaystyle{ \ w_i \leftarrow w_i e^{\alpha_j I[y_i\ne h_j(\mathbf{x}_i)]} }[/math]
Note that the weights are only increased for points that have been misclassified by a good classifier.

3 The final classifier is: [math]\displaystyle{ \ h(\mathbf{x}) = sign (\sum_{j=1}^{J}\alpha_j h_j(\mathbf{x})) }[/math].

Note that this is basically an aggregation of all the classifiers found and the classification outcomes of better classifiers are weighted more using [math]\displaystyle{ \ \alpha }[/math].
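
Below is a rough R sketch of this procedure using decision stumps (single-feature threshold classifiers) as the weak learners. The stump search, the simulated data and all variable names are our own simplifications, not a reference implementation.

 set.seed(5)
 n <- 200
 x <- matrix(rnorm(2 * n), n, 2)
 y <- ifelse(x[, 1] + x[, 2] > 0, 1, -1)                 # labels in {-1, +1}
 
 # A decision stump: predict sign(dir * (x[, d] - t)) for one coordinate d
 stump_predict <- function(s, x) ifelse(s$dir * (x[, s$d] - s$t) > 0, 1, -1)
 
 fit_stump <- function(x, y, w) {                        # weak learner minimizing the weighted error L_j
   best <- list(err = Inf)
   for (d in 1:ncol(x)) for (t in unique(x[, d])) for (dir in c(-1, 1)) {
     pred <- ifelse(dir * (x[, d] - t) > 0, 1, -1)
     err  <- sum(w * (pred != y)) / sum(w)
     if (err < best$err) best <- list(d = d, t = t, dir = dir, err = err)
   }
   best
 }
 
 J <- 10
 w <- rep(1 / n, n)                                      # step 1: equal weights
 stumps <- list(); alpha <- numeric(J)
 for (j in 1:J) {
   stumps[[j]] <- fit_stump(x, y, w)                     # step 2a: best weak classifier
   L <- stumps[[j]]$err
   alpha[j] <- log((1 - L) / L)                          # step 2b: "goodness" of the classifier
   miss <- stump_predict(stumps[[j]], x) != y
   w <- w * exp(alpha[j] * miss)                         # step 2c: up-weight misclassified points
 }
 
 # Step 3: the final classifier is the sign of the weighted vote
 score <- rowSums(sapply(1:J, function(j) alpha[j] * stump_predict(stumps[[j]], x)))
 mean(sign(score) == y)                                  # training accuracy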

Algorithm Version 2 <ref>http://www.cs.ubbcluj.ro/~csatol/mach_learn/bemutato/BenkKelemen_Boosting.pdf</ref>

One of the main ideas of this algorithm is to maintain a distribution or set of weights over the training set. Initially, all weights are set equally, but on each round, the weights of incorrectly classified examples are increased so that the weak learner is forced to focus on the hard examples in the training set.

  • Given [math]\displaystyle{ \left(\mathbf{x}_1,y_1\right),\dots,\left(\mathbf{x}_m,y_m\right) }[/math] where [math]\displaystyle{ {\mathbf{x}_i \in X} }[/math], [math]\displaystyle{ {y_i \in \{-1,+1\}} }[/math].
  • Initialize weights [math]\displaystyle{ D_1(i) = \frac{1}{m} }[/math]
  • Iterate [math]\displaystyle{ t=1,\dots, T }[/math]
    • Train weak learner using distribution [math]\displaystyle{ \ D_t }[/math]
    • Get weak classifier: [math]\displaystyle{ h_t:X\rightarrow R }[/math]
    • Choose [math]\displaystyle{ {\alpha_t \in R} }[/math]
    • Update the weights: [math]\displaystyle{ D_{t+1}(i) = \frac {D_t(i)\, e^{-\alpha_t y_i h_t(\mathbf{x}_i)}} {Z_t} }[/math]
where [math]\displaystyle{ \ Z_t }[/math] is a normalization factor (chosen so that [math]\displaystyle{ \ D_{t+1} }[/math] will be a distribution)
  • The final classifier is:
[math]\displaystyle{ H(\mathbf{x})=\mbox{sign}\left(\sum_{t=1}^T \alpha_t h_t(\mathbf{x})\right) }[/math]

Example

In R, we can run boosting with the "ada" package. Suppose we are working with the built-in R dataset "iris". These data consist of petal length, sepal length, petal width, and sepal width of three different species of iris. Below, adaptive boosting is applied to these data.

> library(rpart) #the "ada" package builds its weak learners from rpart trees
> library(ada)   #load the package providing the ada() function
> crop1 <- iris[1:100,1] #the function "ada" will only handle two classes
> crop2 <- iris[1:100,2] #and the iris dataset has 3. So crop the third off.
> crop3 <- iris[1:100,3]
> crop4 <- iris[1:100,4]
> crop5 <- iris[1:100,5] #This is the response variable, indicating species of iris
> x <- cbind(crop1, crop2, crop3, crop4, crop5) #combine all the columns
> fr1 <- as.data.frame(x, row.names=NULL) #and coerce into a data frame
> 
> a = 2 #number of iterations
> AdaBoostDiscrete <- ada(crop5~., data=fr1, iter=a, loss="e", type = "discrete", control = rpart.control())
> AdaBoostDiscrete 
Call:
ada(crop5 ~ ., data = fr1, iter = a, loss = "e", type = "discrete", 
    control = rpart.control())

Loss: exponential Method: discrete   Iteration: 2 

Final Confusion Matrix for Data:
          Final Prediction
True value  1  2
         1 50  0
         2  0 50

Train Error: 0 

Out-Of-Bag Error:  0  iteration= 1 

Additional Estimates of number of iterations:

train.err1 train.kap1 
         1          1 

> #Since this yields "perfect" results, we may not need boosting here after all.
> #This was just an illustration of the ada function in R.

Advantages and Disadvantages

The advantages and disadvantages of AdaBoost are listed below.

Advantages :

  • Very simple to implement
  • Fairly good generalization
  • The prior error need not be known ahead of time

Disadvantages:

  • Suboptimal solution
  • Can overfit in the presence of noise

Other boosters

There are many other, more recent boosters such as LPBoost, TotalBoost, BrownBoost, MadaBoost, LogitBoost, stochastic boosting, etc. The main difference between many of them is the way they weigh the points in the training data set at each iteration. Some of these boosters, such as AdaBoost, MadaBoost and LogitBoost, can be interpreted as performing gradient descent to minimize a convex cost function (they fit into the AnyBoost framework). However, a recent research study showed that this class of boosters is vulnerable to random classification noise, thereby questioning their applicability to real-world noisy classification problems. <ref>Philip M. Long, Rocco A. Servedio, "Random Classification Noise Defeats All Convex Potential Boosters", 2000</ref>

Relation to SVM

SVM and Boosting are very similar except for the way they measure the margin and the way they optimize their weight vector. SVMs use the [math]\displaystyle{ l_2 }[/math] norm for both the instance vector and the weight vector, while Boosting uses the [math]\displaystyle{ l_1 }[/math] norm for the weight vector; i.e., SVMs need the [math]\displaystyle{ l_2 }[/math] norm to implicitly compute scalar products in feature space with the help of the kernel trick, since no other norm can be expressed in terms of scalar products.

Although SVM and AdaBoost share some similarities, there are several important differences:

  • Different norms can result in very different margins: in boosting and in SVM the dimension of the feature space is usually very high, so the difference between the [math]\displaystyle{ l_1 }[/math] and [math]\displaystyle{ l_2 }[/math] norms can have a significant effect on the margin values.

For example, suppose the weak hypotheses all have range {-1,1} and that the label y on all examples can be computed by a majority vote of k of the weak hypotheses. In this case, it can be shown that if the number of relevant weak hypotheses is a small fraction of the total number of weak hypotheses, then the margin associated with AdaBoost will be much larger than the one associated with support vector machines.

  • The computational requirements are different: SVM training corresponds to quadratic programming, while AdaBoost corresponds only to linear programming.

Bagging

In bagging, we resample the data, train separate classifiers on the resampled sets, and then combine them into a final classifier.

Bagging (Bootstrap aggregating) was proposed by Leo Breiman in 1994. Bagging is another meta-algorithm for improving classification results by combining the classification of randomly generated training sets. [20][21]


The idea behind bagging is very similar to that behind boosting. However, instead of using multiple classifiers on essentially the same dataset (but with adaptive weights), we sample from the original dataset containing m items B times with replacement, obtaining B samples each with m items. This is called bootstrapping. Then, we train the classifier on each of the bootstrapped samples. Taking a majority vote of a combination of all the classifiers, we arrive at a final classifier for the original dataset. [22]

Bagging is an effective, computationally intensive procedure that can improve unstable classifiers. It is most useful for highly nonlinear classifiers, such as trees.

As noted above, the idea of boosting is to incorporate unequal weights when learning h, giving higher weight to misclassified points. Bagging, in contrast, is a method for reducing the variability of a classifier. The idea is to train classifiers [math]\displaystyle{ \ h_{1}(x) }[/math] to [math]\displaystyle{ \ h_{B}(x) }[/math] using B bootstrap samples from the data set. The final classification is obtained using an average or 'plurality vote' of the B classifiers as follows:


[math]\displaystyle{ \, h(x)= \left\{\begin{matrix} 1 & \frac{1}{B} \sum_{i=1}^{B} h_{b}(x) \geq \frac{1}{2} \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]
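
A small R sketch of this procedure, using rpart trees on the two-class subset of the iris data used earlier; the number of bootstrap samples B and the helper names are our own choices:

 library(rpart)
 set.seed(6)
 data2 <- iris[1:100, ]                           # two classes only, as in the ada example above
 data2$Species <- factor(data2$Species)           # drop the unused third factor level
 
 B <- 25                                          # number of bootstrap samples
 trees <- vector("list", B)
 for (b in 1:B) {
   boot <- data2[sample(nrow(data2), replace = TRUE), ]   # bootstrap sample of size m
   trees[[b]] <- rpart(Species ~ ., data = boot, method = "class")
 }
 
 # Majority vote of the B classifiers
 votes  <- sapply(trees, function(tr) as.character(predict(tr, data2, type = "class")))
 bagged <- apply(votes, 1, function(v) names(which.max(table(v))))
 mean(bagged == as.character(data2$Species))      # training accuracy of the bagged classifier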

Boosting vs. Bagging

• Boosting can still help with stable models, but bagging does not work well for stable models.

• Bagging is easier to parallelize and, in practice, almost always helps.

• Many classifiers, such as trees, already have underlying functions that estimate the class probabilities at x. An alternative strategy is to average these class probabilities instead of the final classifications. This approach can produce bagged estimates with lower variance and usually better performance.

• Boosting might hurt performance on noisy datasets; bagging doesn't have this problem.

• On average, boosting usually helps more than bagging, but it is also more common for boosting to hurt performance.

• In boosting, the weights grow exponentially.

Decision Trees

File:simple decision tree.jpg
A basic example of a decision tree, iteratively ask questions to navigate the tree until we reach a decision node.

Decision tree learning is a method commonly used in statistics, data mining and machine learning. The goal is to create a model that predicts the value of a target variable based on several input variables. It is a very flexible classifier, can classify non-linear data and it can be used for classification, regression, or both. A tree is usually used as a visual and analytical decision support tool, where the expected values of competing alternatives are calculated.


It uses the principle of divide and conquer for classification. Trees have traditionally been created manually. Trees map features of a decision problem onto a conclusion, or label. We fit a tree model by minimizing some measure of impurity. For a single covariate [math]\displaystyle{ \ X_1 }[/math] we choose a point t on the real line that splits it into two sets [math]\displaystyle{ \ R_1 = (-\infty, t] , R_2 = (t, \infty) }[/math] in a way that minimizes impurity.

File:p.jpg
Node impurity for two-class classification, as a function of the proportion p in class 2. Cross-entropy has been scaled to pass through (0.5,0.5).

Let [math]\displaystyle{ \hat{p_s}(j) }[/math] be the proportion of observations in [math]\displaystyle{ \boldsymbol R_s }[/math] such that [math]\displaystyle{ \ Y_i = j }[/math]

[math]\displaystyle{ \hat{p_s}(j) = \frac {\sum_{i=1}^n I(Y_i = j, X_i \in \boldsymbol R_s)}{\sum_{i=1}^n I(X_i \in \boldsymbol R_s)} }[/math]


Node impurity measures (see figure to the right):

Misclassification error: [math]\displaystyle{ \ 1 - \max_j \hat{p_s}(j) }[/math]
Gini index: [math]\displaystyle{ \sum_{j \neq k} \hat{p_s}(j)\hat{p_s}(k) = \sum_{j} \hat{p_s}(j)(1-\hat{p_s}(j)) }[/math]
Cross-entropy: [math]\displaystyle{ -\sum_{j} \hat{p_s}(j)\log \hat{p_s}(j) }[/math]
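
For concreteness, the R sketch below (our own helper functions and toy data) evaluates the weighted impurity of a candidate split point t on a single covariate and picks the split minimizing the Gini index:

 # Impurity of one region, given the labels of the points that fall in it
 misclass <- function(y) if (length(y) == 0) 0 else 1 - max(table(y) / length(y))
 gini     <- function(y) { if (length(y) == 0) return(0); p <- table(y) / length(y); sum(p * (1 - p)) }
 
 set.seed(7)
 x <- rnorm(100)
 y <- ifelse(x + rnorm(100, sd = 0.5) > 0, 1, 2)          # two classes
 
 # Weighted impurity of the split R1 = (-inf, t], R2 = (t, inf)
 split_impurity <- function(t, impurity) {
   left <- y[x <= t]; right <- y[x > t]
   (length(left) * impurity(left) + length(right) * impurity(right)) / length(y)
 }
 
 ts <- seq(-2, 2, by = 0.1)
 ts[which.min(sapply(ts, split_impurity, impurity = gini))]   # best split point under the Gini index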

Limitations of Decision Trees

1. Overfitting problem: Decision Trees are extremely flexible models; this flexibility means that they can easily perfectly match any training set. This makes overfitting a prime consideration when training a decision tree. There is no robust way to avoid fitting noise in the data but two common approaches include:

  • do not grow the full tree; stop splitting early, before the tree fits the training set perfectly
  • fully grow the tree and then prune the resulting tree. Pruning algorithms include cost complexity pruning, minimum description length pruning and pessimistic pruning. This results in a tree with less branches, which can generalize better. <ref>J. R. Quinlan, Decision Trees and Decision Making, IEEE Transactions on Systems, Man and Cybernetics, vol 20, no 2, March/April 1990, pg 339-346.</ref>


2. Time-consuming and complex: compared to other decision-making models, a decision tree is a relatively easy tool to use; however, if the tree contains a large number of branches, it becomes complex and takes time to solve the problem. Moreover, decision trees only examine a single field at a time, which leads to rectangular classification boxes. The complexity also adds costs, since people need extensive knowledge to complete a decision tree analysis. <ref> http://www.brighthub.com/office/project-management/articles/106005.aspx </ref>


Some specific decision-tree algorithms:

  • ID3 algorithm [23]
  • C4.5 algorithm [24]
  • C5 algorithm

A comparison of bagging and boosting methods using the decision trees classifiers: [25]

CART (Classification and Regression Tree)

The Classification and Regression Tree (CART) is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively (Wikipedia). CART handles outliers well during the tree-building process: it will isolate the outliers in a separate node.

Advantages<ref>http://www.statsoft.com/textbook/classification-and-regression-trees/</ref>:

  • Simplicity of results. In most cases the results are summarized in a very simple tree. This is important for fast classification and for creating a simple model for explaining the observations.
  • Tree methods are nonparametric and nonlinear. There is no implicit assumption that the underlying relationships between the predictor variables and the dependent variable are linear or monotonic. Thus tree methods are well suited to data mining tasks where there is little a priori knowledge of any related variables.

Advantages and Disadvantages

Decision Tree Advantages

1. Easy to understand

2. Map nicely to a set of business rules

3. Applied to real problems

4. Make no prior assumptions about the data

5. Able to process both numerical and categorical data

Decision Tree Disadvantages

1. Output attribute must be categorical

2. Limited to one output attribute

3. Decision tree algorithms are unstable

4. Trees created from numeric datasets can be complex

Read more: http://wiki.answers.com/Q/List_the_advantages_and_disadvantages_for_both_decision_table_and_decision_tree#ixzz1dNGFaOpi

Ranking Features

When implementing a tree model, it is important how the features are ranked (i.e. in what order the features appear in the tree). The general approach is to choose the feature with the highest dependence on Y as the first feature in the tree, and then to use features with progressively lower dependence further down the tree.

Feature ranking strategies

1. Fisher score (F-score)

  • simple in nature
  • efficient in measuring the discrimination between a feature and the label.
  • independent of the classifier.

2. Linear SVM Weight

The following is an algorithm based on linear SVM weights (a rough R sketch is given after this list):

  • input the training sets: [math]\displaystyle{ (x_i, y_i), i = 1, \dots l }[/math]
  • obtain the sorted feature ranking list as output:
    • Use grid search to find the best parameter C.
    • Train an L2-loss linear SVM model using the best C found.
    • Sort the features according to the absolute values of the weights.

3. Change of AUC with/without Removing Each Feature

4. Change of Accuracy with/without Removing Each Feature

5. Normalized Information Gain (difference in entropy)

note: for details, please read <ref> http://jmlr.csail.mit.edu/proceedings/papers/v3/chang08a/chang08a.pdf </ref>
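
A rough R sketch of strategy 2 using the e1071 package, assuming a binary problem with a numeric feature matrix; the grid of C values, the simulated data and the variable names are our own, and the weight-vector extraction assumes a linear kernel:

 library(e1071)
 set.seed(8)
 n <- 100; d <- 6
 x <- matrix(rnorm(n * d), n, d, dimnames = list(NULL, paste0("f", 1:d)))
 y <- factor(ifelse(2 * x[, 1] - x[, 3] + rnorm(n) > 0, 1, -1))
 
 # Grid search for the cost parameter C using 5-fold cross-validation accuracy
 grid   <- 10^(-2:2)
 acc    <- sapply(grid, function(C) svm(x, y, kernel = "linear", cost = C, cross = 5)$tot.accuracy)
 C_best <- grid[which.max(acc)]
 
 # Train a linear SVM with the chosen C and recover the primal weight vector w
 fit <- svm(x, y, kernel = "linear", cost = C_best, scale = TRUE)
 w   <- t(fit$coefs) %*% fit$SV          # w = sum_i alpha_i y_i x_i (in the scaled feature space)
 
 # Rank the features by the absolute value of their weights
 sort(abs(drop(w)), decreasing = TRUE)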

Random Forest

Decision trees are unstable. An application of bagging is to combine trees into a random forest. A random forest is a classifier consisting of a collection of tree-structured classifiers [math]\displaystyle{ \left \lbrace \ h(x, \Theta_k ), k = 1, . . . \right \rbrace }[/math] where the [math]\displaystyle{ {\Theta_k } }[/math] are independent identically distributed random vectors and each tree casts a unit vote for the most popular class at input [math]\displaystyle{ x }[/math] <ref>Breiman L., Random Forests. Machine Learning [26]</ref>.

In a random forest, the trees are grown quite similarly to the standard classification tree. However, no pruning is done in the random forest technique.

Compared with other methods, random forests have some positive characteristics:

  • runs faster than bagging or boosting
  • has similar accuracy as Adaboost, and sometimes even better than Adaboost
  • relatively robust to noise
  • delivers useful estimates of error, correlation

For larger data sets, more accuracy can be obtained by combining random features with boosting.

This is how a single tree is grown:

First, suppose the number of elements in the training set is K. We then sample K elements with replacement. Second, if there are a total of N inputs to the tree, choose an integer n << N such that at each node of the tree, n variables are randomly selected from the N, and the best split on these n variables is used to split the node (hence a "decision tree"). Third, grow the tree as large as possible.

Each tree contributes one classification. That is, each tree gets one "vote" to classify an element. The beauty of random forest is that all of these votes are added up, similar to boosting, and the final decision is the result of the vote. This is an extremely robust algorithm.

There are two things that can contribute to error in random forest:

1. correlation between trees
2. the ability of an individual tree to classify well.

This is seen intuitively, since if many trees are very similar to one another, then it is likely they will all classify the elements in the same way. If a single tree is not a very good classifier, it does not matter in the long run because the other trees will compensate for its error. However, if many trees are bad classifiers, the result will be garbage.

To avoid both of the above problems, there is an algorithm to optimize n, the number of variables to use in each decision tree. Unfortunately, an optimal value is not found on its own; instead, an optimal range is found. Thus, to properly program a random forest, there is a parameter that must be "tuned". Looking at various types of error rate, this is easily found (we want to minimize error, as characterized by the Gini index, or the misclassification rate, or the entropy). [27]

An algorithm for the random forest can be described as follows: let [math]\displaystyle{ N_{trees} }[/math] be the number of trees to build. For each of the [math]\displaystyle{ N_{trees} }[/math] iterations, select a new bootstrap sample from the training set and grow an un-pruned tree on this bootstrap sample; at each internal node, randomly select m predictors and determine the best split using only these predictors. Finally, do not perform cost complexity pruning; save the tree as is, alongside those built thus far. <ref> Albert A. Montillo, Guest lecture: Statistical Foundations of Data Analysis "Random Forests", April 2009. <http://www.dabi.temple.edu/~hbling/8590.002/Montillo_RandomForests_4-2-2009.pdf> </ref>
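
For illustration, the randomForest package in R implements essentially this procedure; a minimal example on the full iris data (the parameter settings here are illustrative, not tuned):

 library(randomForest)
 set.seed(9)
 # ntree = number of trees to grow; mtry = n, the number of variables tried at each split
 rf <- randomForest(Species ~ ., data = iris, ntree = 500, mtry = 2, importance = TRUE)
 print(rf)        # includes the out-of-bag (OOB) estimate of the error rate
 importance(rf)   # variable importance (mean decrease in accuracy / Gini)
 varImpPlot(rf)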

Further Reading

Boosting: <ref>Chunhua Shen; Zhihui Hao. “A direct formulation for totally-corrective multi-class boosting”. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2011.</ref>

Bagging: <ref>Xiaoyuan Su; Khoshgoftarr, T.M.; Xingquan Zhu. “VoB predictors: Voting on bagging classifications”. 19th IEEE International Conference on Pattern Recognition. 2008.</ref>

Decision Tree: <ref> Zhuowen Tu. “Probabilistic boosting-tree: learning discriminative models for classification, recognition, and clustering”. Tenth IEEE International Conference on Computer Vision. 2005.</ref>

Graphical Models

A graphical model is a probabilistic model for which a graph denotes the conditional independence structure between random variables. They are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning.(Wikipedia)

Graphical models provide a compact representation of the joint distribution, where the vertices (nodes) V represent random variables and the edges E represent dependencies between the variables. There are two forms of graphical models (directed and undirected). Directed graphical models consist of arcs and nodes, where an arc indicates that the parent is an explanatory variable for the child. Undirected graphical models are based on the assumption that two nodes or two sets of nodes are conditionally independent given their neighbours.

Similar types of analysis predate the area of probabilistic graphical models and its terminology. Bayesian network and belief network are earlier terms used to describe a directed acyclic graphical model. Similarly, Markov random field (MRF) and Markov network are earlier terms used to describe an undirected graphical model. Probabilistic graphical models have united some of the theory from these older approaches and allow for more general distributions than were possible with the previous methods.

File:directed.png
Fig.1 A directed graph.
File:undirected.png
Fig.2 An undirected graph.

In the case of directed graphs, the direction of the arrow indicates "causation". This assumption makes these networks useful for the cases that we want to model causality. So these models are more useful for applications such as computational biology and bioinformatics, where we study effect (cause) of some variables on another variable. For example:
[math]\displaystyle{ A \longrightarrow B }[/math]: [math]\displaystyle{ A\,\! }[/math] "causes" [math]\displaystyle{ B\,\! }[/math].


(Diagram from the lecture: in a generative model the arrow points from [math]\displaystyle{ Y }[/math] down to the data, as in LDA, while in a discriminative model the arrow points from the data up to [math]\displaystyle{ Y }[/math], as in linear discrimination.)

Probabilistic Discriminative Models: Model the posterior probability P(Y|X) directly (for example, logistic regression).

Advantages of discriminative models

  • Obtain desired posterior probability directly
  • Fewer parameters

Generative Model: Compute posterior probabilities using Bayes' rule from class-conditional densities and class priors. <ref>https://liqiangguo.wordpress.com/2011/05/26/discriminative-model-vs-generative-model/</ref>

Advantages of generative models:

  • Can generate (sample) new data points from the model

For an introduction to graphical models, see: [28]

Boltzmann Machines

Introduction

Reference: [2]

Boltzmann machines are networks of connected nodes which, using a stochastic decision-making process, decide to be on or off. These connections need not be directed; that is, they can go back and forth between layers. This formulation leads the reader to think immediately of a binomial distribution, with some probability p of each node being on or off. In a classification problem, a Boltzmann Machine is presented with a set of binary vectors, each entry of which is called a “unit”, with the goal of learning to generate these vectors. [1]

Similar to the neural networks already discussed in class, a Boltzmann Machine must assign weights to inputs, compute some combination of the weights times contributing node values, and optimize the weights such that a certain cost function (such as the relative entropy, as discussed later) is minimized. The cost function depends on the complexity of the model and the “correctness” of the classification. The main idea is to make small updates in the connection weights iteratively.

Boltzmann Machines are often used in generative models. That is, we start with some process seen in real life and try to reproduce it, with a goal of predicting future behaviour of the system by generating from the probability distribution created by the Boltzmann Machine.

How a Boltzmann Machine Works

Suppose we start with a pattern ɣ that represents some real life dynamical system. The true probability distribution function of this system is f_ɣ. For each element in the vectors associated with this system, we create a visible unit in the Boltzmann Machine whose function is directly related to the value of that element. Then, usually, to capture higher order regularities in the pattern, we create hidden units (similar to Feed-Forward Neural Networks). Sometimes researchers choose not to use hidden units, but this leads to a lack of ability to learn high order regularity [5]. There are two possible values for each node in the Boltzmann Machine: “on” or “off”. There is a difference in energy between these states. Each node must then compute the difference in energy to see which state would be more favourable. This difference is called the “energy gap”.

Each node of the Boltzmann Machine is presented an opportunity to update its status. When a set of input vectors is shown to the layer, a computation takes place within each node to decide to convert to “on” or to remain “off”. The computation is as follows:

[math]\displaystyle{ \Delta E_i = E_{-1} - E_{+1} = \sum_j w_{ij}S_j }[/math]

Where [math]\displaystyle{ w_{ij} }[/math] represents the weight between nodes i and j, and [math]\displaystyle{ S_j }[/math] is the state of the jth component.

Then the probability that the node will adopt the “on” state is:

[math]\displaystyle{ P(+1) = \frac{1}{1 + \exp\left( -\frac{\Delta E_i}{T}\right)} }[/math]

where T is the temperature of the system. The probability of any vector v being an output of the system is proportional to [math]\displaystyle{ e^{-E(v)} }[/math], normalized over all possible state vectors of the system:

[math]\displaystyle{ P(v) = \frac{e^{-E(v)}}{\sum_{u} e^{-E(u)}} }[/math]

And the energy of a vector is defined as:

[math]\displaystyle{ E({v}) = -\sum_i s^{v}_i b_i -\sum_{i\lt j} s^{v}_i s^{v}_j w_{ij} }[/math] [1]
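
To make the update rule concrete, here is a toy R sketch of repeated stochastic updates for a small, fully connected network of binary 0/1 units; the weights, biases and temperature are invented for illustration and no learning is performed:

 set.seed(10)
 p <- 5
 W <- matrix(rnorm(p * p), p, p)
 W <- (W + t(W)) / 2; diag(W) <- 0          # symmetric weights, no self-connections
 b <- rnorm(p)                              # biases
 s <- sample(0:1, p, replace = TRUE)        # initial binary states ("off"/"on")
 Temp <- 1                                  # temperature
 
 energy <- function(s) -sum(s * b) - 0.5 * sum(W * outer(s, s))   # E(v) as defined above
 
 for (step in 1:1000) {
   i    <- sample(p, 1)                     # pick a unit at random
   dE   <- b[i] + sum(W[i, ] * s)           # energy gap: E(unit off) - E(unit on)
   p_on <- 1 / (1 + exp(-dE / Temp))        # logistic probability of adopting the "on" state
   s[i] <- rbinom(1, 1, p_on)               # stochastic decision
 }
 s
 energy(s)                                  # state and energy after (approximate) equilibration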

Simulated annealing, a method to improve the search for a global minimum, is used here. It may not succeed in finding the global minimum on its own [3]. This may be a foreign concept to statisticians; for more information, consult [6] and [7]. The state is then set stochastically according to the logistic probability above, so updates that decrease the energy are more likely.

Eventually, through learning, the Boltzmann Machine will reach an equilibrium state, much like a Markov Chain. This equilibrium state will have a low temperature. Once equilibrium has been reached, we can estimate the probability distribution across the nodes of the Boltzmann Machine. Using this information, we can model how the dynamical system will behave in the long run.

Since the system is in equilibrium, we can use the mean value of each visible unit to build a probability model. We wouldn’t want to do these calculations before reaching equilibrium, because they would not be representative of the long-term behaviour of the system. Let this measured distribution be denoted f_δ. Then we are interested in measuring the difference between the true distribution and this measured distribution.

There are several different methods that can be used to compare distributions. One that is commonly used is the relative entropy:

[math]\displaystyle{ G(f_\gamma ||f_\delta) = \sum_\gamma f_\gamma ln(\frac{f_\gamma}{f_\delta}) }[/math] [5]

We want to minimize this distance, since we want the measured distribution to be as close as possible to the true distribution.

Learning in Boltzmann Machines

The Two-Phase Method

Boltzmann Machines using hidden units are very robust tools. Visible units are coupled, leading to a problem when trying to capture the effects of higher-dimensional regularities. When hidden units are introduced, the system has the ability to define and use these regularities.

One approach to learning Boltzmann Machines is discussed thoroughly in [5]. To summarize, this approach makes use of two phases.

Phase 1: Fix all visible units. Allow the hidden units to change as necessary to obtain equilibrium. Then, look at pairs of units. If two elements of a pair are both “on”, then increment the weight associated with them. So this phase consists entirely of “learning”. There is no control for spurious data.

Phase 2: No units are fixed. Allow all units to change as necessary to obtain equilibrium. Then sample the final equilibrium distribution to find reliable averages of the term [math]\displaystyle{ s_i s_j }[/math]. Then, as before, look for pairs of units that are both “on”, and decrement the weight associated with them. So this is the phase in which spurious data are eliminated.

Alternate between these two phases. Eventually, the equilibrium distribution will be reached and we see that [math]\displaystyle{ \frac{\partial {G}}{\partial {w_{ij}}} = \frac{-1}{T} (\lt s_i s_j\gt ^{+} - \lt s_i s_j\gt ^{-}) }[/math], where [math]\displaystyle{ \lt s_i s_j\gt ^{+} }[/math] and [math]\displaystyle{ \lt s_i s_j\gt ^{-} }[/math] are the probabilities of finding units i and j both “on” when the network is ‘fixed’ and ‘free-running’, respectively [5]. Another method, for learning Deep Boltzmann Machines, is presented in [2].

Pros and Cons of using Boltzmann Machines

Pros

  • More accurate than backpropagation [5]
  • Bayesian interpretation of how good a model is [5]

Cons

  • Very slow, because of nested loops necessary to perform phases [5]

There are many topics on which this discussion could be expanded. For example, we could get into a more in-depth discussion of simulated annealing, or look at Restricted Boltzmann Machines (RBMs) for deep learning, or different methods of learning and different measures of error. Another interesting topic would be a discussion on mean field approximation of Boltzmann Machines, which supposedly runs faster.

  • The time the machine must be run in order to collect equilibrium statistics grows exponentially with the machine's size, and with the magnitude of the connection strengths
  • Connection strengths are more plastic when the units being connected have activation probabilities intermediate between zero and one, leading to a so-called variance trap. The net effect is that noise causes the connection strengths to random walk until the activities saturate.

References:
[1] http://www.scholarpedia.org/article/Boltzmann_machine
[2] http://www.mit.edu/~rsalakhu/papers/dbm.pdf
[3] http://mathworld.wolfram.com/SimulatedAnnealing.html
[4] http://waldron.stanford.edu/~jlm/papers/PDP/Volume%201/Chap7_PDP86.pdf
[5] http://cs.nyu.edu/~roweis/notes/boltz.pdf
[6] http://neuron.eng.wayne.edu/tarek/MITbook/chap8/8_3.html
[7] Bertsimas and Tsitsiklis. Simulated Annealing. Statistical Science. 1993. Vol. 8, No. 1, 10 – 15.

References

<references />