=== Error rate ===


: The '''true error rate''' <math>\,L(h)</math> of a classifier having classification rule <math>\,h</math> is defined as the probability that <math>\,h</math> does not correctly classify a new data input, i.e.,
::<math>\, L(h)=P(h(X) \neq Y)</math>
: where <math>\,X \in \mathcal{X}</math> and <math>\,Y \in \mathcal{Y}</math> are the feature values and the true class of that input, respectively.


: The '''empirical error rate''' (or '''training error rate''') of a classifier having classification rule <math>\,h</math> is defined as the frequency at which <math>\,h</math> does not correctly classify the data inputs in the training set, i.e.,
::<math>\,\hat{L}_{n}(h) = \frac{1}{n} \sum_{i=1}^{n} I(h(X_{i}) \neq Y_{i})</math>, where <math>\,I</math> is the indicator variable <math>\,I = \left\{\begin{matrix} 1 &\text{if } h(X_i) \neq Y_i  \\ 0 &\text{if } h(X_i) = Y_i  \end{matrix}\right.</math>
: Here, <math>\,X_{i} \in \mathcal{X}</math> and <math>\,Y_{i} \in \mathcal{Y}</math> are the feature values and the true class of the <math>\,i</math>-th training input, respectively.
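To make the definition concrete, here is a minimal Matlab sketch (with made-up labels, not from the lecture) that computes the empirical error rate of a classification rule on a small training set:
<pre>
% Empirical (training) error rate: fraction of training points misclassified by h.
y_true = [1 0 1 1 0 1];      % true classes Y_i
y_hat  = [1 0 0 1 0 1];      % predicted classes h(X_i)
n      = length(y_true);
L_hat  = sum(y_hat ~= y_true) / n   % empirical error rate
</pre>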


=== Bayes Classifier ===


The principle of the Bayes classifier is to calculate the posterior probability of a given object from its prior probability via Bayes' formula, and then place the object in the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841f11#Bayes_Classifier </ref>. Intuitively speaking, to classify <math>\,x\in \mathcal{X}</math> we find <math>y \in \mathcal{Y}</math> such that <math>\,P(Y=y|X=x)</math> is maximum over all the members of <math>\mathcal{Y}</math>.


Mathematically, for <math>\,k</math> classes and given object <math>\,X=x</math>, we find <math>\,y\in \mathcal{Y}</math> which  
1) Empirical Risk Minimization: Choose a set of classifiers <math>\mathcal{H}</math> and find <math>\,h^*\in \mathcal{H}</math> that minimizes some estimate of <math>\,L(h)</math>


2) Regression: Find an estimate <math>\hat r</math> of the function <math> r </math> and define
:<math>\, h(X)= \left\{\begin{matrix} 
1 &  \hat r(x)>\frac{1}{2}  \\ 
0 &  \mathrm{otherwise}  \end{matrix}\right.</math>
* <math>\,\pi_k</math> is called the [http://en.wikipedia.org/wiki/Prior_probability '''prior probability''']. This is a probability distribution that represents what we know (or believe we know) about a population.
* <math>\,\Sigma_k</math> is the sum with respect to all <math>\,k</math> classes.
''Theorem'': Suppose that <math>\,Y \in \mathcal{Y}= \{1,\dots,k\}</math>; then the optimal rule is <math>\,h^*(X) = \arg\max_{k}{P(Y = k|X = x)}</math>.
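As a quick illustration of this rule, here is a minimal Matlab sketch (the priors and Gaussian class-conditional densities are made up, not from the lecture) that classifies a single 1-D point by maximizing the posterior:
<pre>
% Bayes rule for one point x with known priors and class-conditional densities.
x       = 0.7;
pi_k    = [0.3 0.7];                 % prior probabilities of the two classes
mu_k    = [0 1];  sigma_k = [1 1];   % class-conditional N(mu_k, sigma_k^2)
f_k     = exp(-(x-mu_k).^2 ./ (2*sigma_k.^2)) ./ (sqrt(2*pi)*sigma_k);
post    = f_k .* pi_k / sum(f_k .* pi_k);   % posterior P(Y=k|X=x) via Bayes formula
[pmax, h] = max(post)                % assign x to the class with the largest posterior
</pre>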


====Approaches====


Although the Bayes classifier is the optimal method, it cannot be used in most practical situations, since the prior probability is usually unknown. Fortunately, other methods of classification have evolved. These methods fall into three general categories.


1 [http://en.wikipedia.org/wiki/Supervised_learning Empirical Risk Minimization]: Choose a set of classifiers <math>\mathcal{H}</math> and find <math>\,h^*\in \mathcal{H}</math> that minimizes some estimate of <math>\,L(h)</math>.


2 Regression: Find an estimate <math>\hat r</math> of the function <math>\ r </math> and define
:<math>\, h(X)= \left\{\begin{matrix} 
1 &  \hat r(x)>\frac{1}{2}  \\ 
0 &  \mathrm{otherwise}  \end{matrix}\right.</math>


3 [http://en.wikipedia.org/wiki/Density_estimation Density estimation]: Estimate <math>\ P(X = x|Y = 0)</math> and <math>\ P(X = x|Y = 1)</math>


Note:<br />
The third approach, in this form, is not popular because density estimation doesn't work very well in more than two dimensions. However, this approach becomes the simplest if we assume a parametric model for the densities.
Linear Discriminant Analysis and Quadratic Discriminant Analysis are examples of the third approach, density estimation.


====History====
The name Linear Discriminant Analysis comes from the fact that these simplifications produce a linear model, which is used to discriminate between classes.  In many cases, this simple model is sufficient to provide a near optimal classification - for example, the Z-Score credit risk model, designed by Edward Altman in 1968, which is essentially a weighted LDA, [http://pages.stern.nyu.edu/~ealtman/Zscores.pdf revisited in 2000], has shown an 85-90% success rate predicting bankruptcy, and is still in use today.

'''Purpose'''
1 feature selection
2 determining which classification rule best separates the classes


====Definition====
To perform [http://en.wikipedia.org/wiki/Linear_discriminant_analysis LDA] we make two assumptions.

* The clusters belonging to all classes each follow a multivariate normal distribution. <br /><math>x \in \mathbb{R}^d</math> <math>f_k(x)=\frac{1}{ (2\pi)^{d/2}|\Sigma_k|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma_k^{-1} [x - \mu_k] \right)</math><br />where <math>\ f_k(x)</math> is the class conditional density.

* Simplification assumption: each cluster has the same covariance matrix <math>\,\Sigma</math>, equal to the average of the <math>\Sigma_k</math> over all <math>\,k</math>.




We wish to solve for the [http://en.wikipedia.org/wiki/Decision_boundary decision boundary] where the error rates for classifying a point are equal, where one side of the boundary gives a lower error rate for one class and the other side gives a lower error rate for the other class.


So we solve <math>\,r_k(x)=r_l(x)</math> for all the pairwise combinations of classes.
<math>\,\Rightarrow  \log(\frac{\pi_k}{\pi_l})-\frac{1}{2}\left(  \mu_k^\top\Sigma^{-1}\mu_k-\mu_l^\top\Sigma^{-1}\mu_l - 2x^\top\Sigma^{-1}(\mu_k-\mu_l)  \right)=0</math> after canceling out like terms and factoring.


We can see that this is a linear function in <math>\ x </math> with general form <math>\,ax+b=0</math>.


Actually, this linear log function shows that the decision boundary between class <math>\ k </math> and class <math>\ l </math>, i.e. <math>\ P(G=k|X=x)=P(G=l|X=x) </math>, is linear in <math>\ x</math>. Given any pair of classes, decision boundaries are always linear. In <math>\ d</math> dimensions, we separate regions by hyperplanes.


In the special case where the number of samples from each class are equal (<math>\,\pi_k=\pi_l</math>), the boundary surface or line lies halfway between <math>\,\mu_l</math> and <math>\,\mu_k</math>.
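A minimal Matlab sketch (made-up means, covariance and priors, not from the lecture) of computing the coefficients of this linear boundary between two classes:
<pre>
% Two-class LDA boundary a'*x + b = 0, following the derivation above.
mu_k  = [1; 1];   mu_l = [-1; 0];        % class means
Sigma = [2 0.3; 0.3 1];                  % shared (pooled) covariance matrix
pi_k  = 0.5;      pi_l = 0.5;            % prior probabilities
a = Sigma \ (mu_k - mu_l);               % Sigma^{-1}(mu_k - mu_l)
b = log(pi_k/pi_l) - 0.5*(mu_k'*(Sigma\mu_k) - mu_l'*(Sigma\mu_l));
% a point x is assigned to class k when a'*x + b > 0, and to class l otherwise
</pre>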
====Limitation====
* LDA implicitly assumes Gaussian distribution of data.
* LDA implicitly assumes that the mean is the discriminating factor, not variance.
* LDA may overfit the data.


===QDA===
The concept uses the same idea as LDA, of finding a boundary where the error rates for classification between classes are equal, except that the assumption that each cluster has the same covariance matrix <math>\,\Sigma</math> (equal to the mean of the <math>\Sigma_k</math> over all <math>\,k</math>) is removed. Whether that assumption holds can be checked with a hypothesis test with <math>\ H_0 </math>: <math>\Sigma_k = \Sigma</math> for all <math>\ k</math>; a standard choice is the likelihood ratio test.




== '''Linear and Quadratic Discriminant Analysis cont'd - October 5, 2009''' ==


Linear discriminant analysis[http://en.wikipedia.org/wiki/Linear_discriminant_analysis] is a statistical method used to find the ''linear combination'' of features which best separate two or more classes of objects or events. It is widely applied in classifying diseases, positioning, product management, and marketing research.
Quadratic Discriminant Analysis[http://en.wikipedia.org/wiki/Quadratic_classifier], on the other hand, aims to find the ''quadratic combination'' of features. It is more general than Linear discriminant analysis. Unlike LDA however, in QDA there is no assumption that the covariance of each of the classes is identical. When the assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the [http://en.wikipedia.org/wiki/Likelihood-ratio_test likelihood ratio test]. Suppose the means of each class are known to be <math> \mu_{y=0},\mu_{y=1} </math> and the covariances <math> \Sigma_{y=0}, \Sigma_{y=1} </math>. Then the likelihood ratio will be given by
:Likelihood ratio = <math> \frac{ \sqrt{2 \pi |\Sigma_{y=1}|}^{-1} \exp \left( -\frac{1}{2}(x-\mu_{y=1})^T \Sigma_{y=1}^{-1} (x-\mu_{y=1}) \right) }{ \sqrt{2 \pi |\Sigma_{y=0}|}^{-1} \exp \left( -\frac{1}{2}(x-\mu_{y=0})^T \Sigma_{y=0}^{-1} (x-\mu_{y=0}) \right)} < t </math>
for some threshold t. After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic.
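A minimal Matlab sketch (with made-up means and covariances, not from the lecture) of evaluating this log likelihood ratio for a single point:
<pre>
% QDA log likelihood ratio for a 2-D point x, comparing class y=1 against y=0.
x    = [0.5; -0.2];
mu1  = [1; 0];    S1 = [1 0.2; 0.2 0.5];   % mean and covariance of class y=1
mu0  = [-1; 0];   S0 = [0.7 0; 0 2];       % mean and covariance of class y=0
logratio = (-0.5*log(det(S1)) - 0.5*(x-mu1)'*(S1\(x-mu1))) ...
         - (-0.5*log(det(S0)) - 0.5*(x-mu0)'*(S0\(x-mu0)));
% classify as y=1 when logratio exceeds log(t); the decision surface is quadratic in x
</pre>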


===Summarizing LDA and QDA===


We can summarize what we have learned on [http://academicearth.org/lectures/advice-for-applying-machine-learning LDA and QDA] so far into the following theorem.


'''Theorem''':  
<math>\,\Sigma=\frac{\sum_{r=1}^{k}(n_r\Sigma_r)}{\sum_{l=1}^{k}(n_l)} </math>


This is a Maximum Likelihood estimate.


===Computation===




'''Case 1: (Example) <math>\, \Sigma_k = I </math>'''
 
[[File:case1.jpg|300px|thumb|right]]


This means that the data is distributed symmetrically around the center <math>\mu</math>, i.e. the isocontours are all circles.
<math> \,\delta_k  = - \frac{1}{2}log(|I|) - \frac{1}{2}(x-\mu_k)^\top I(x-\mu_k) + log (\pi_k) </math>


We see that the first term in the above equation, <math>\,\frac{1}{2}log(|I|)</math>, is zero since <math>\ |I| </math> is the determinant of the identity matrix and <math>\ |I|=1 </math>. The second term contains <math>\, (x-\mu_k)^\top I(x-\mu_k) = (x-\mu_k)^\top(x-\mu_k) </math>, which is the [http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/Clustering_Parameters/Euclidean_and_Euclidean_Squared_Distance_Metrics.htm squared Euclidean distance] between <math>\,x</math> and <math>\,\mu_k</math>. Therefore we can find the distance between a point and each center and adjust it with the log of the prior, <math>\,log(\pi_k)</math>. The class that has the minimum distance will maximise <math>\,\delta_k</math>. According to the theorem, we can then classify the point to a specific class <math>\,k</math>.  In addition, <math>\, \Sigma_k = I </math> implies that our data is spherical.
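A minimal Matlab sketch (with made-up data) of this special case, where classification reduces to the squared Euclidean distance to each class mean adjusted by the log prior:
<pre>
% Case 1 (Sigma_k = I): delta_k = -0.5*||x - mu_k||^2 + log(pi_k).
x     = [0.2; 1.5];
mus   = [0 0; 3 1]';                 % columns are the class means mu_1, mu_2
pis   = [0.6 0.4];                   % prior probabilities
delta = zeros(1,2);
for k = 1:2
    delta(k) = -0.5*sum((x - mus(:,k)).^2) + log(pis(k));
end
[dmax, khat] = max(delta)            % assign x to the class maximizing delta_k
</pre>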






The answer is NO. Consider that you have two classes with different shapes, then consider transforming them to the same shape. Given a data point, justify which class this point belongs to. The question is, which transformation can you use? For example, if you use the transformation of class A, then you have assumed that this data point belongs to class A.
[http://portal.acm.org/citation.cfm?id=1340851 Kernel QDA]
In practice, QDA often fits the data better than LDA because QDA does not make LDA's assumption that the covariance matrix of each class is identical. However, QDA still assumes that the class conditional distribution is Gaussian, which may not be the case in practice. Another method, kernel QDA, does not make the Gaussian distribution assumption and can work better.


===The Number of Parameters in LDA and QDA===


[[File:Lda-qda-parameters.png|frame|center|A plot of the number of parameters that must be estimated, in terms of (K-1). The x-axis represents the number of dimensions in the data. As is easy to see, QDA is far less robust than LDA for high-dimensional data sets.]]
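The exact counts behind this plot are not reproduced here, but presumably follow the standard ones (e.g. Hastie et al.): for <math>\,d</math>-dimensional data and <math>\,K</math> classes, LDA requires estimating <math>\,(K-1)(d+1)</math> parameters for its decision boundaries, while QDA requires <math>\,(K-1)\left(\tfrac{d(d+3)}{2}+1\right)</math>, which grows quadratically in <math>\,d</math>.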
Related links:
LDA:[http://www.stat.psu.edu/~jiali/course/stat597e/notes2/lda.pdf]
[http://www.dtreg.com/lda.htm]
[http://biostatistics.oxfordjournals.org/cgi/reprint/kxj035v1.pdf Regularized linear discriminant analysis and its application in microarrays]
[http://www.isip.piconepress.com/publications/reports/isip_internal/1998/linear_discrim_analysis/lda_theory.pdf MATHEMATICAL OPERATIONS OF LDA]
[http://psychology.wikia.com/wiki/Linear_discriminant_analysis Application in face recognition and in market]
QDA:[http://portal.acm.org/citation.cfm?id=1314542]
[http://jmlr.csail.mit.edu/papers/volume8/srivastava07a/srivastava07a.pdf Bayes QDA]
[http://www.uni-leipzig.de/~strimmer/lab/courses/ss06/seminar/slides/daniela-2x4.pdf LDA & QDA]


== LDA and QDA in Matlab - October 7, 2009 ==


Then we can see that y=score, v=U.
'''Useful resources:'''
LDA and QDA in Matlab[http://www.mathworks.com/products/statistics/demos.html?file=/products/demos/shipping/stats/classdemo.html],[http://www.mathworks.com/matlabcentral/fileexchange/189],[http://seed.ucsd.edu/~cse190/media07/MatlabClassificationDemo.pdf]


== Trick: Using LDA to do QDA - October 7, 2009 ==
<math>y = \underline{w}^Tx</math>


where <math>\underline{w}</math> is a d-dimensional column vector, and <math style="vertical-align:0%;">x \in \mathbb{R}^d</math> (a vector in d dimensions).


We also have a non-linear function <math>g(x) = y = x^Tvx + \underline{w}^Tx</math> that we cannot estimate.
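One common way to realize the trick is to augment each input with its quadratic terms, so that <math>g(x)</math> becomes linear in the augmented features and the LDA machinery can be applied to them. A minimal Matlab sketch (made-up data; the augmentation map is an assumption, not the lecture's code):
<pre>
% Augment 2-D inputs with squares and cross terms; quadratic in x = linear in Z.
X   = [1 2; -0.5 0.3; 2 -1];                          % n x d data matrix (rows are points)
aug = @(X) [X, X(:,1).^2, X(:,2).^2, X(:,1).*X(:,2)]; % x -> (x, x1^2, x2^2, x1*x2)
Z   = aug(X);                                         % n x (d + d(d+1)/2) augmented matrix
% running LDA on Z gives boundaries that are linear in Z but quadratic in the original x
</pre>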
: Labeling the lines directly on the graph makes it easier to interpret.


== Fisher's Discriminant Analysis (FDA) ==
 
=== Distance Metric Learning VS FDA ===
In many fundamental machine learning problems, the Euclidean distances between data points do not represent the desired topology that we are trying to capture. Kernel methods address this problem by mapping the points into new spaces where Euclidean distances may be more useful. An alternative approach is to construct a Mahalanobis distance (quadratic Gaussian metric) over the input space and use it in place of Euclidean distances. This approach can be equivalently interpreted as a linear transformation of the original inputs, followed by Euclidean distance in the projected space. This approach has attracted a lot of recent interest.

Some of the proposed algorithms are iterative and computationally expensive. In the paper "[http://www.aaai.org/Papers/AAAI/2008/AAAI08-095.pdf Distance Metric Learning VS FDA]", written by our instructor, the authors propose a closed-form solution to one algorithm that previously required expensive semidefinite optimization. They provide a new problem setup in which the algorithm performs better or as well as some standard methods, but without the computational complexity. Furthermore, they show a strong relationship between these methods and Fisher Discriminant Analysis (FDA). They also extend the approach by kernelizing it, allowing for non-linear transformations of the metric.
 
== Fisher's Discriminant Analysis (FDA) - October 9, 2009 ==


The goal of FDA is to reduce the dimensionality of data in order to have separable data points in a new space.
* multi-class problem


=== Two-class problem ===
[[File:graph.jpg|500px|thumb|right| PCA vs FDA]]
In the two-class problem, we have the pre-knowledge that data points belong to two classes. Intuitively speaking, points of each class form a cloud around the mean of the class, with each class having a possibly different size. To be able to separate the two classes we must determine the class whose mean is closest to a given point while also accounting for the different size of each class, which is represented by the covariance of each class.


<br/>
<br/>
Original points are <math>\underline{x_{i}} \in \mathbb{R}^{d}</math>, <math>\ \{ \underline x_1, \underline x_2, \dots, \underline x_n \} </math>.<br />
Projected points are <math>\underline{z_{i}} \in \mathbb{R}^{1}</math> with <math>\underline{z_{i}} = \underline{w}^T \cdot\underline{x_{i}}</math>; each <math>\ z_i </math> is a scalar.


==== Between class covariance ====
&= \underline{w}^T(\underline{\mu_{1}}-\underline{\mu_{2}})(\underline{\mu_{1}}-\underline{\mu_{2}})^T\underline{w}
\end{align}
</math>, which is a scalar.




:<math>\underset{\underline{w}}{max}\ \frac{\underline{w}^T S_{B} \underline{w}}{\underline{w}^T S_{W} \underline{w}}</math>


This maximization problem is equivalent to <math>\underset{\underline{w}}{max}\ \underline{w}^T S_{B} \underline{w}</math> subject to the constraint <math>\underline{w}^T S_{W} \underline{w} = 1</math>, since by rescaling <math>\underline{w}</math> the quantity <math>\ \underline w^T S_B \underline w</math> has no upper bound and <math>\ \underline w^T S_W \underline w</math> has no lower bound.

We can use the Lagrange multiplier method to solve it:

:<math>L(\underline{w},\lambda) = \underline{w}^T S_{B} \underline{w} - \lambda(\underline{w}^T S_{W} \underline{w} - 1)</math>, where <math>\ \lambda </math> is the Lagrange multiplier.
<br /><br />
With <math>\frac{\part L}{\part \underline{w}} = 0</math> we get:
:<math>
\begin{align}
2 S_{B} \underline{w} - 2\lambda S_{W} \underline{w} &= 0 \\
\Rightarrow S_{B} \underline{w} &= \lambda S_{W} \underline{w} \\
\Rightarrow S_{W}^{-1} S_{B} \underline{w} &= \lambda \underline{w}
\end{align}
</math>
Note that <math>\, S_{W}=\Sigma_1+\Sigma_2</math> is the sum of two positive definite matrices and so it has an inverse.


Here <math>\underline{w}</math> is the eigenvector of <math>S_{w}^{-1}\ S_{B}</math> corresponding to the largest eigenvalue <math>\ \lambda </math>.


In fact, this expression can be simplified even more.<br>


[[File:PCA-VS-FDA.png|frame|center|PCA and FDA primary dimension for normal multivariate data, using matlab]]
From the graph: under the PCA projection there is a large overlap between the two classes, so PCA does not separate them well. Under the FDA projection the two classes do not overlap and are separated nicely. Thus, FDA performs better than PCA here.
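A minimal Matlab sketch (randomly generated two-class data, not from the lecture) of computing the two-class FDA direction; it uses the standard closed form <math>\underline{w} \propto S_W^{-1}(\underline{\mu_1}-\underline{\mu_2})</math>, which is the simplification referred to above:
<pre>
% Two-class FDA direction and projection.
X1  = randn(50,2) + 2;   X2 = randn(50,2);   % two classes, 50 points each, in 2-D
mu1 = mean(X1)';         mu2 = mean(X2)';
Sw  = cov(X1) + cov(X2);                     % within-class scatter
w   = Sw \ (mu1 - mu2);                      % FDA projection direction
z1  = X1 * w;   z2 = X2 * w;                 % projected scalar points z_i = w' * x_i
</pre>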


==== Practical example of 2_3 ====


[[File:fda2-3.jpg|frame|center|FDA projection of data 2_3, using [http://www.mathwork.com Matlab].]]
Mapping the data onto a line, the two classes are separated almost perfectly here.
==== An extension of Fisher's discriminant analysis for stochastic processes ====
A general notion of Fisher's linear discriminant analysis can extend the classical multivariate concept to situations that allow for function-valued random elements. The development uses a bijective mapping that connects a second order process to the reproducing kernel Hilbert space generated by its within class covariance kernel. This approach provides a seamless transition between Fisher's original development and infinite dimensional settings that lends itself well to computation via smoothing and regularization.
An introduction to the algorithm can be found here: [http://statgen.ncsu.edu/icsa2007/talks/HyejinShin.pdf]


== FDA for Multi-class Problems - October 14, 2009  ==
===FDA method for Multi-class Problems ===


For the <math>k</math>-class problem, we need to find a projection from
\end{align}
</math>
===Generalization of Fisher's Linear Discriminant ===
Fisher's linear discriminant (Fisher, 1936) is very popular among users of discriminant analysis. Some of the reasons for this are its simplicity and the fact that it does not require strict assumptions. However, it has optimality properties only if the underlying distributions of the groups are multivariate normal. It is also easy to verify that the discriminant rule obtained can be badly affected by only a small number of outlying observations. Outliers are very hard to detect in multivariate data sets, and even when they are detected, simply discarding them is not the most efficient way of handling the situation. Hence the need for robust procedures that can accommodate the outliers and are not strongly affected by them. A generalization of Fisher's linear discriminant algorithm [http://www.math.ist.utl.pt/~apires/PDFs/APJB_RP96.pdf] has been developed that leads easily to a very robust procedure.


== Linear Regression Models - October 14, 2009  ==


[http://en.wikipedia.org/wiki/Regression_analysis Regression analysis] is a general statistical technique for modelling and analyzing how a dependent variable changes according to changes in independent variables. In classification, we are interested in how a label, <math>\,y</math>, changes according to changes in <math>\,X</math>.


We will start by considering a very simple regression model, the linear regression model.


General information on [http://en.wikipedia.org/wiki/Linear_regression linear regression] can be found at the [http://numericalmethods.eng.usf.edu/topics/linear_regression.html University of South Florida] and [http://academicearth.org/lectures/applications-to-linear-estimation-least-squares this MIT lecture].


For the purpose of classification, the linear regression model assumes that the regression function <math>\,E(Y|X)</math> is linear in the inputs <math>\,\mathbf{x}_{1}, ..., \mathbf{x}_{p}</math>.


The linear regression model has the general form:


:<math>
\begin{align}
y_i = \beta_0 + \mathbf{\beta} \mathbf{x}_{i}
\end{align}
</math>
where <math>\,\beta</math> is a <math>1 \times d</math> vector and <math>\ x_i </math> is a <math>d \times 1</math> vector.


Given input data <math>\,\mathbf{x}_{1}, ..., \mathbf{x}_{p}</math> and <math>\,y_{1}, ..., y_{p}</math> our goal is to find <math>\,\beta</math> and <math>\,\beta_0</math> such that the linear model fits the data while minimizing sum of squared errors using the [http://en.wikipedia.org/wiki/Least_squares Least Squares method].


Note that vectors <math>\mathbf{x}_{i}</math> could be numerical inputs,


where <math>\mathbf{H} = \mathbf{X}
(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}</math> is called the [http://en.wikipedia.org/wiki/Hat_matrix hat matrix].
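A minimal Matlab sketch (made-up 1-D data; the convention that the design matrix has one row per observation with a leading column of ones is an assumption) of the least-squares fit and the hat matrix:
<pre>
% Least-squares fit and hat matrix.
x = (1:10)';   y = 2*x + 1 + 0.5*randn(10,1);   % noisy linear response
X = [ones(10,1) x];                             % n x (d+1) design matrix
beta_hat = (X'*X) \ (X'*y);                     % least-squares estimate [beta_0; beta]
H     = X * ((X'*X) \ X');                      % hat matrix, so that y_hat = H*y
y_hat = H * y;                                  % fitted values
</pre>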


<br/>
<math>r(x)= P( Y=k | X=x )= \frac{f_{k}(x)\pi_{k}}{\Sigma_{k}f_{k}(x)\pi_{k}}</math><br/>
It is clear that to make sense mathematically, <math>\displaystyle r(x)</math> must be a value between 0 and 1.  If this is estimated with the regression function <math>\displaystyle r(x)=E(Y|X=x)</math> and <math>\mathbf{\hat\beta} </math> is learned as above, then there is nothing that would restrict <math>\displaystyle r(x)</math> to taking values between 0 and 1. This is a more direct approach to classification, since it does not need to estimate <math>\ f_k(x) </math> and <math>\ \pi_k </math>. Note that for labels coded as 0 and 1,
<math>\ 1 \times P(Y=1|X=x)+0 \times P(Y=0|X=x)=E(Y|X) </math>.
Because the model does not constrain this estimate to lie between 0 and 1 it is not ideal, but it can sometimes still lead to a decent classifier (for example, with the responses coded as <math>\ y_i=\frac{1}{n_1} </math> for one class and <math>\ y_i=\frac{-1}{n_2} </math> for the other).
====A linear regression example in Matlab====
[[File: linearregression.png|center|frame| the figure shows that the classification of the data points in 2_3.m by the linear regression model]]
====Comments about Linear regression model====
The linear regression model is perhaps the easiest and most popular way to analyze the relationship between different variables in a data set. However, it has disadvantages as well as advantages, and we should be clear about them before we apply the model.


''Advantages'': Linear least squares regression has earned its place as the primary tool for process modeling because of its effectiveness and completeness. Though there are types of data that are better described by functions that are nonlinear in the parameters, many processes in science and engineering are well-described by linear models. This is because either the processes are inherently linear or because, over short ranges, any process can be well-approximated by a linear model. The estimates of the unknown parameters obtained from linear least squares regression are the optimal estimates from a broad class of possible parameter estimates under the usual assumptions used for process modeling. Practically speaking, linear least squares regression makes very efficient use of the data. Good results can be obtained with relatively small data sets. Finally, the theory associated with linear regression is well-understood and allows for construction of different types of easily-interpretable statistical intervals for predictions, calibrations, and optimizations. These statistical intervals can then be used to give clear answers to scientific and engineering questions.  


''Disadvantages'': The main disadvantages of linear least squares are limitations in the shapes that linear models can assume over long ranges, possibly poor extrapolation properties, and sensitivity to outliers. Linear models with nonlinear terms in the predictor variables curve relatively slowly, so for inherently nonlinear processes it becomes increasingly difficult to find a linear model that fits the data well as the range of the data increases. As the explanatory variables become extreme, the output of the linear model will also always be more extreme. This means that linear models may not be effective for extrapolating the results of a process for which data cannot be collected in the region of interest. Of course extrapolation is potentially dangerous regardless of the model type. Finally, while the method of least squares often gives optimal estimates of the unknown parameters, it is very sensitive to the presence of unusual data points in the data used to fit a model. One or two outliers can sometimes seriously skew the results of a least squares analysis. This makes model validation, especially with respect to outliers, critical to obtaining sound answers to the questions motivating the construction of the model.


'''useful link''':[http://www.uco.es/dptos/prod-animal/p-animales/cerdo-iberico/Bibliografia/p253.pdf]
[http://www.cs.au.dk/~cstorm/courses/ML/slides/linear-regression-and-classification.pdf]


==Logistic Regression- October 16, 2009==


The [http://en.wikipedia.org/wiki/Logistic_regression logistic regression] model arises from the desire to model the posterior probabilities of the <math>\displaystyle K</math> classes via linear functions in <math>\displaystyle x</math>, while at the same time ensuring that they sum to one and remain in [0,1]. Logistic regression models are usually fit by maximum likelihood, using the conditional likelihood <math>\displaystyle Pr(Y|X)</math>. Since <math>\displaystyle Pr(Y|X)</math> completely specifies the conditional distribution, the multinomial distribution is appropriate. This model is widely used in biostatistical applications with two classes. For instance: people survive or die, have a disease or not, have a risk factor or not.
=== logistic function ===
A [http://en.wikipedia.org/wiki/Logistic_function logistic function] or logistic curve is the most common sigmoid curve.  
:<math>y = \frac{1}{1+e^{-x}}</math>
1. <math>\frac{dy}{dx} = y(1-y)=\frac{e^{x}}{(1+e^{x})^{2}}</math>


2. <math>y(0) = \frac{1}{2}</math>


3. <math> \int y\, dx = \ln(1 + e^{x})</math>


4. <math> y(x) = \frac{1}{2} + \frac{1}{4}x - \frac{1}{48}x^{3} + \frac{1}{480}x^{5} \cdots </math>


5. The logistic curve shows early exponential growth for negative <math>x</math>, which slows to linear growth of slope 1/4 near <math>x = 0</math>, then approaches <math>y = 1</math> with an exponentially decaying gap. (A quick numerical check of properties 1 and 2 is sketched below.)
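A minimal Matlab sketch (purely a numerical check, not from the lecture) of properties 1 and 2 above:
<pre>
% Numerical check that dy/dx = y(1-y) and y(0) = 1/2 for the logistic curve.
x  = linspace(-6, 6, 1000);
y  = 1 ./ (1 + exp(-x));              % logistic function
dy = gradient(y, x);                  % numerical derivative dy/dx
max(abs(dy - y.*(1-y)))               % close to 0, so dy/dx = y(1-y)
1 / (1 + exp(0))                      % equals 1/2, so y(0) = 1/2
</pre>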


===Intuition behind Logistic Regression===
Recall that, for classification purposes, the linear regression model presented in the above section is not correct because it does not force <math>\,r(x)</math> to be between 0 and 1 and sum to 1. Consider the following [http://en.wikipedia.org/wiki/Logit log odds] model (for two classes):


:<math>\log\left(\frac{P(Y=1|X=x)}{P(Y=0|X=x)}\right)=\beta^Tx</math>


Calculating <math>\,P(Y=1|X=x)</math> leads us to the logistic regression model, which as opposed to the linear regression model, allows the modelling of the posterior probabilities of the classes through linear methods and at the same time ensures that they sum to one and are between 0 and 1. It is a type of [http://en.wikipedia.org/wiki/Generalized_linear_model Generalized Linear Model (GLM)].


===The Logistic Regression Model===
The logistic regression model for the two class case is defined as


'''Class 1'''
[[File:Picture1.png‎|150px|thumb|right|<math>P(Y=1 | X=x)</math>]]
:<math>P(Y=1 | X=x) =\frac{\exp(\underline{\beta}^T \underline{x})}{1+\exp(\underline{\beta}^T \underline{x})}=P(x;\underline{\beta})</math>  


Then we have that
'''Class 0'''
[[File:Picture2.png‎ |150px|thumb|right|<math>P(Y=0 | X=x)</math>]]
:<math>P(Y=0 | X=x) = 1-P(Y=1 | X=x)=1-\frac{\exp(\underline{\beta}^T \underline{x})}{1+\exp(\underline{\beta}^T \underline{x})}=\frac{1}{1+\exp(\underline{\beta}^T \underline{x})}</math>


===Fitting a Logistic Regression===
Logistic regression tries to fit a distribution.  The fitting of logistic regression models is usually accomplished by [http://en.wikipedia.org/wiki/Maximum_likelihood maximum likelihood], using Pr(Y|X). The maximum likelihood of <math>\underline\beta</math> maximizes the probability of obtaining the data <math>\displaystyle{x_{1},...,x_{n}}</math> from the known distribution.  Combining <math>\displaystyle P(Y=1 | X=x)</math> and <math>\displaystyle P(Y=0 | X=x)</math> as follows, we can consider the two classes at the same time:


:<math>p(\underline{x_{i}};\underline{\beta}) = \left(\frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)^{y_i} \left(\frac{1}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)^{1-y_i}</math>


Assuming the data <math>\displaystyle {x_{1},...,x_{n}}</math> is drawn independently, the likelihood function is


:<math>
 
\begin{align}
\mathcal{L}(\theta)&=p({x_{1},...,x_{n}};\theta)\\
&=\displaystyle p(x_{1};\theta) p(x_{2};\theta)... p(x_{n};\theta)  \quad    \mbox{(by independence)}\\
&= \prod_{i=1}^n p(x_{i};\theta)
\end{align}
</math>


Since it is more convenient to work with the log-likelihood function, we take the log of both sides and get
:<math>\displaystyle l(\theta)=\displaystyle \sum_{i=1}^n \log p(x_{i};\theta)</math>


So,
:<math>
\begin{align}
l(\underline\beta)&=\displaystyle\sum_{i=1}^n y_{i}\log\left(\frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)+(1-y_{i})\log\left(\frac{1}{1+\exp(\underline{\beta}^T\underline{x_i})}\right)\\
&= \displaystyle\sum_{i=1}^n y_{i}(\underline{\beta}^T\underline{x_i}-\log(1+\exp(\underline{\beta}^T\underline{x_i}))+(1-y_{i})(-\log(1+\exp(\underline{\beta}^T\underline{x_i}))\\
&= \displaystyle\sum_{i=1}^n y_{i}\underline{\beta}^T\underline{x_i}-y_{i} \log(1+\exp(\underline{\beta}^T\underline{x_i}))- \log(1+\exp(\underline{\beta}^T\underline{x_i}))+y_{i} \log(1+\exp(\underline{\beta}^T\underline{x_i}))\\
&=\displaystyle\sum_{i=1}^n y_{i}\underline{\beta}^T\underline{x_i}- \log(1+\exp(\underline{\beta}^T\underline{x_i}))\\
\end{align}
</math>




To maximize the log-likelihood, set its derivative to 0.
:<math>
\begin{align}
\frac{\partial l}{\partial \underline{\beta}} &= \sum_{i=1}^n \left[{y_i} \underline{x}_i- \frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x}_i)}\underline{x}_i\right]\\
&=\sum_{i=1}^n \left[{y_i} \underline{x}_i - p(\underline{x}_i;\underline{\beta})\underline{x}_i\right]
\end{align}
</math>  


There are <math>\ d+1 </math> nonlinear equations in <math>\ \beta </math>. Assuming the first column of the input matrix is the vector of ones, the first equation gives <math>\ \sum_{i=1}^n {y_i} =\sum_{i=1}^n p(\underline{x}_i;\underline{\beta}) </math>, i.e. the expected number of class ones matches the observed number.


To solve this equation, the [http://numericalmethods.eng.usf.edu/topics/newton_raphson.html Newton-Raphson algorithm] is used which requires the second derivative in addition to the first derivative. This is demonstrated in the next section.


====Advantages and Disadvantages====
Logistic regression has several advantages over discriminant analysis:
* it is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group
* It does not assume a linear relationship between the IV and DV
* It may handle nonlinear effects
* You can add explicit interaction and power terms
* The DV need not be normally distributed.
* There is no homogeneity of variance assumption.
* Normally distributed error terms are not assumed.
* It does not require that the independents be interval.
* It does not require that the independents be unbounded.
With all this flexibility, you might wonder why anyone would ever use discriminant analysis or any other method of analysis. Unfortunately, the advantages of logistic regression come at a cost: it requires much more data to achieve stable, meaningful results. With standard regression, and DA, typically 20 data points per predictor is considered the lower bound. For logistic regression, at least 50 data points per predictor is necessary to achieve stable results.


Some resources: [http://www.statgun.com/tutorials/logistic-regression.html], [http://etd.library.pitt.edu/ETD/available/etd-04122006-102254/unrestricted/realfinalplus_ETD2006.pdf]


====Extension====


* When we are dealing with a problem with more than two classes, we need to generalize our logistic regression to a [http://en.wikipedia.org/wiki/Multinomial_logit Multinomial Logit model].
 
* Limitations of Logistic Regression:
:1. No assumptions are made about the distributions of the features of the data (i.e. the explanatory variables). However, the features should not be highly correlated with one another, because this could cause problems with estimation.
:2. A large number of data points (i.e. a large sample size) is required for logistic regression to provide sufficient numbers in both classes. The more features/dimensions the data has, the larger the sample size required.
 
== Logistic Regression(2) - October 19, 2009  ==
 
===Logistic Regression Model===
 
Recall that in the last lecture, we learned the logistic regression model.


* <math>P(Y=1 | X=x)=P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}</math>
* <math>P(Y=0 | X=x)=1-P(\underline{x}_i;\underline{\beta})=\frac{1}{1+exp(\underline{\beta}^T \underline{x})}</math>


===Find <math>\underline{\beta}</math> ===


'''Criteria''': find a <math>\underline{\beta}</math> that maximizes the conditional likelihood of Y given X using the training data.


From above, we have the first derivative of the log-likelihood:


<math>\frac{\partial l}{\partial \underline{\beta}} = \sum_{i=1}^n \left[{y_i} \underline{x}_i- \frac{exp(\underline{\beta}^T \underline{x_i})}{1+exp(\underline{\beta}^T \underline{x}_i)}\underline{x}_i\right] </math>
<math>=\sum_{i=1}^n \left[{y_i} \underline{x}_i - P(\underline{x}_i;\underline{\beta})\underline{x}_i\right]</math>


Newton-Raphson algorithm:<br />
If we want to find <math>\ x^* </math> such that <math>\ f(x^*)=0</math>


<math>\ x^{new} \leftarrow x^{old}-\frac {f(x^{old})}{f'(x^{old})} </math>


<math>\ x^{new} \rightarrow x^* </math>


If we want to maximize or minimize <math>\ f(x) </math>, then we solve <math>\ f'(x)=0 </math>:


<math>\ x^{new} \leftarrow x^{old}-\frac {f'(x^{old})}{f''(x^{old})} </math>
 
 
The [http://en.wikipedia.org/wiki/Newton%27s_method Newton-Raphson algorithm] requires the second-derivative or [http://en.wikipedia.org/wiki/Hessian_matrix Hessian matrix].


<math>\frac{\partial^{2} l}{\partial \underline{\beta} \partial \underline{\beta}^T }=
\sum_{i=1}^n - \underline{x}_i \frac{exp(\underline{\beta}^T\underline{x}_i)\underline{x}_i^T(1+exp(\underline{\beta}^T \underline{x}_i))-exp(\underline{\beta}^T\underline{x}_i)exp(\underline{\beta}^T\underline{x}_i)\underline{x}_i^T}{(1+exp(\underline{\beta}^T \underline{x}_i))^2}</math>

('''note''': <math>\frac{\partial\underline{\beta}^T\underline{x}_i}{\partial \underline{\beta}^T}=\underline{x}_i^T</math>; you can check it [http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html here], a very useful website including a Matrix Reference Manual where you can find information about linear algebra and the properties of real and complex matrices.)

::<math>=\sum_{i=1}^n - \underline{x}_i \frac{exp(\underline{\beta}^T\underline{x}_i)\, \underline{x}_i^T}{(1+exp(\underline{\beta}^T \underline{x}_i))(1+exp(\underline{\beta}^T \underline{x}_i))}</math> (by cancellation)

::<math>=\sum_{i=1}^n - \underline{x}_i \underline{x}_i^T P(\underline{x}_i;\underline{\beta})[1-P(\underline{x}_i;\underline{\beta})]</math> (since <math>P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x}_i)}{1+exp(\underline{\beta}^T \underline{x}_i)}</math> and <math>1-P(\underline{x}_i;\underline{\beta})=\frac{1}{1+exp(\underline{\beta}^T \underline{x}_i)}</math>)

The same second derivative can be achieved if we reduce the occurrences of beta to 1 by the identity <math>\frac{a}{1+a}=1-\frac{1}{1+a}</math>
and solving <math>\frac{\partial}{\partial \underline{\beta}^T}\sum_{i=1}^n \left[{y_i} \underline{x}_i-\left[1-\frac{1}{1+exp(\underline{\beta}^T \underline{x}_i)}\right]\underline{x}_i\right] </math>.

Starting with <math>\,\underline{\beta}^{old}</math>, the Newton-Raphson update is

<math>\,\underline{\beta}^{new}\leftarrow \,\underline{\beta}^{old}- (\frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T})^{-1}(\frac{\partial l}{\partial \underline{\beta}})</math> where the derivatives are evaluated at <math>\,\underline{\beta}^{old}</math>

The iteration will terminate when <math>\underline{\beta}^{new}</math> is very close to <math>\underline{\beta}^{old}</math>.

The iteration can be described in matrix form.

* Let <math>\,\underline{Y}</math> be the column vector of <math>\,y_i</math>. (<math>n\times1</math>)
* Let <math>\,X</math> be the <math>{d}\times{n}</math> input matrix.
* Let <math>\,\underline{P}</math> be the <math>{n}\times{1}</math> vector with <math>i</math>th element <math>P(\underline{x}_i;\underline{\beta}^{old})</math>.
* Let <math>\,W</math> be an <math>{n}\times{n}</math> diagonal matrix with <math>i</math>th element <math>P(\underline{x}_i;\underline{\beta}^{old})[1-P(\underline{x}_i;\underline{\beta}^{old})]</math>

then

<math>\frac{\partial l}{\partial \underline{\beta}} = X(\underline{Y}-\underline{P})</math>

<math>\frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T} = -XWX^T</math>

The Newton-Raphson step is

<math>\underline{\beta}^{new} \leftarrow \underline{\beta}^{old}+(XWX^T)^{-1}X(\underline{Y}-\underline{P})</math>

This equation is sufficient for computation of the logistic regression model. However, we can simplify further to uncover an interesting feature of this equation.

<math>
\begin{align}
\underline{\beta}^{new} &= (XWX^T)^{-1}(XWX^T)\underline{\beta}^{old}+(XWX^T)^{-1}XWW^{-1}(\underline{Y}-\underline{P})\\
&=(XWX^T)^{-1}XW[X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P})]\\
&=(XWX^T)^{-1}XWZ
\end{align}</math>

where <math>Z=X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P})</math>

Recall that linear regression by least squares finds the following minimum: <math>\min_{\underline{\beta}}(\underline{y}-X^T\underline{\beta})^T(\underline{y}-X^T\underline{\beta})</math>,

for which we have <math>\hat{\underline{\beta}}=(XX^T)^{-1}X\underline{y}</math>

Similarly, we can say that <math>\underline{\beta}^{new}</math> is the solution of a weighted least squares problem:

<math>\underline{\beta}^{new} \leftarrow \min_{\underline{\beta}}(Z-X^T\underline{\beta})^TW(Z-X^T\underline{\beta})</math>

====WLS====
Actually, the weighted least squares estimator minimizes the weighted sum of squared errors
<math>
S(\beta) = \sum_{i=1}^{n}w_{i}[y_{i}-\mathbf{x}_{i}^{T}\beta]^{2}
</math>
where <math>\displaystyle w_{i}>0</math>.
Hence the WLS estimator is given by
<math>
\hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}y_{i}\right]
</math>

A weighted linear regression of the iteratively computed response
<math>
\mathbf{z}=\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p})
</math>
gives the next estimate. Therefore, we obtain
:<math>
\begin{align}
& \hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}z_{i}\right]
\\&
= \left[ \mathbf{XWX}^{T}\right]^{-1}\left[ \mathbf{XWz}\right]
\\&
= \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{XW}(\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p})) \\&
= \beta^{old}+ \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{X}(\mathbf{y}-\mathbf{p})
\end{align}
</math>

'''note:''' Here we obtain <math>\underline{\beta}</math>, which is a <math>d\times{1}</math> vector, because we construct the model like <math>\underline{\beta}^T\underline{x}</math>. If we construct the model like <math>\underline{\beta}_0+ \underline{\beta}^T\underline{x}</math>, then similar to linear regression, <math>\underline{\beta}</math> will be a <math>(d+1)\times{1}</math> vector.
<br/>
:Choosing <math>\displaystyle\beta=0</math> seems to be a suitable starting value for the Newton-Raphson iteration procedure in this case. However, this does not guarantee convergence. The procedure will usually converge, since the log-likelihood function is concave. In the case that it does not, we can still prove local convergence of the method, meaning the iteration converges provided the initial point is close enough to the exact solution; in practice it is rare for the starting value to be so far from the solution that the iteration fails. <ref>C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, chapter 5 </ref> Besides, step-size halving will solve this problem. <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), 121.</ref>

====Pseudo Code====
#<math>\underline{\beta} \leftarrow 0</math>
#Set <math>\,\underline{Y}</math>, the label associated with each observation <math>\,i=1...n</math>.
#Compute <math>\,\underline{P}</math> according to the equation <math>P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}</math> for all <math>\,i=1...n</math>.
#Compute the diagonal matrix <math>\,W</math> by setting <math>\,w_i,i</math> to <math>P(\underline{x}_i;\underline{\beta}))[1-P(\underline{x}_i;\underline{\beta})]</math> for all <math>\,i=1...n</math>.
#<math>Z \leftarrow X^T\underline{\beta}+W^{-1}(\underline{Y}-\underline{P})</math>.
#<math>\underline{\beta} \leftarrow (XWX^T)^{-1}XWZ</math>.
#If the new <math>\underline{\beta}</math> value is sufficiently close to the old value, stop; otherwise go back to step 3.


===Comparison with Linear Regression===
::<math>=\sum_{i=1}^n - \underline{x}_i \underline{x}_i^T P(\underline{x}_i;\underline{\beta}))[1-P(\underline{x}_i;\underline{\beta})])</math>(since <math>P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}</math> and <math>1-P(\underline{x}_i;\underline{\beta})=\frac{1}{1+exp(\underline{\beta}^T \underline{x})}</math>)
*'''Similarities'''
#They are both to attempt to estimate <math>\,P(Y=k|X=x)</math> (For logistic regression, we just mentioned about the case that <math>\,k=0</math> or <math>\,k=1</math> now).
#They are both have linear boundaris.
:'''note:'''For linear regression, we assume the model is linear. The boundary is <math>P(Y=k|X=x)=\underline{\beta}^T\underline{x}_i+\underline{\beta}_0=0.5</math> (linear)


::For logistic regression, the boundary is <math>P(Y=k|X=x)=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}=0.5 \Rightarrow exp(\underline{\beta}^T \underline{x})=1\Rightarrow \underline{\beta}^T \underline{x}=0</math> (linear)
The same second derivative can be achieved if we reduce the occurrences of beta to 1 by the identity<math>\frac{a}{1+a}=1-\frac{1}{1+a}</math>


*'''Differences'''
And solving <math>\frac{\partial}{\partial \underline{\beta}^T}\sum_{i=1}^n \left[{y_i} \underline{x}_i-\left[1-\frac{1}{1+exp(\underline{\beta}^T \underline{x}_i)}\right]\underline{x}_i\right] </math>
#Linear regression: <math>\,P(Y=k|X=x)</math> is linear function of <math>\,x</math>, <math>\,P(Y=k|X=x)</math> is not guaranteed to fall between 0 and 1 and to sum up to 1.
#Logistic regression: <math>\,P(Y=k|X=x)</math> is a nonlinear function of <math>\,x</math>, and it is guaranteed to range from 0 to 1 and to sum  up to 1.


===Comparison with LDA===
#The linear logistic model only consider the conditional distribution <math>\,P(Y=k|X=x)</math>. No assumption is made about  <math>\,P(X=x)</math>.
#The LDA model specifies the joint distribution of <math>\,X</math> and <math>\,Y</math>.
#Logistic regression maximizes the conditional likelihood of <math>\,Y</math> given <math>\,X</math>: <math>\,P(Y=k|X=x)</math>
#LDA maximizes the joint likelihood of <math>\,Y</math> and <math>\,X</math>: <math>\,P(Y=k,X=x)</math>.
#If <math>\,\underline{x}</math> is d-dimensional,the number of adjustable parameter in logistic regression is <math>\,d</math>. The number of parameters grows linearly w.r.t dimension.
#If <math>\,\underline{x}</math> is d-dimensional,the number of adjustable parameter in LDA is <math>\,(2d)+d(d+1)/2+2=(d^2+5d+4)/2</math>. The number of parameters grows quardratically w.r.t dimension.
#As logistic regression relies on fewer assumptions, it seems to be more robust.
#In practice, Logistic regression and LDA often give the similar results.


====By example====
Starting with <math>\,\underline{\beta}^{old}</math>, the Newton-Raphson update is


Now we compare LDA and Logistic regression by an example. Again, we use them on the 2_3 data.
<math>\,\underline{\beta}^{new}\leftarrow \,\underline{\beta}^{old}- (\frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T})^{-1}(\frac{\partial l}{\partial \underline{\beta}})</math> where the derivatives are evaluated at <math>\,\underline{\beta}^{old}</math>
The iteration will terminate when <math>\underline{\beta}^{new}</math> is very close to <math>\underline{\beta}^{old}</math>.


The iteration can be described in matrix form.

* Let <math>\,\underline{Y}</math> be the column vector of <math>\,y_i</math>.  (<math>n\times1</math>)
* Let <math>\,X</math> be the <math>{d}\times{n}</math> input matrix.
* Let <math>\,\underline{P}</math> be the <math>{n}\times{1}</math> vector with <math>i</math>th element <math>P(\underline{x}_i;\underline{\beta}^{old})</math>.
* Let <math>\,W</math> be an <math>{n}\times{n}</math> diagonal matrix with <math>i</math>th element <math>P(\underline{x}_i;\underline{\beta}^{old})[1-P(\underline{x}_i;\underline{\beta}^{old})]</math>.


Then


<math>\frac{\partial l}{\partial \underline{\beta}} = X(\underline{Y}-\underline{P})</math>


<math>\frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T} = -XWX^T</math>


The Newton-Raphson step is


<math>\underline{\beta}^{new} \leftarrow \underline{\beta}^{old}+(XWX^T)^{-1}X(\underline{Y}-\underline{P})</math>


This equation is sufficient for computation of the logistic regression model. However, we can simplify further to uncover an interesting feature of this equation.


<math>
\begin{align}
\underline{\beta}^{new} &= (XWX^T)^{-1}(XWX^T)\underline{\beta}^{old}+(XWX^T)^{-1}XWW^{-1}(\underline{Y}-\underline{P})\\
&=(XWX^T)^{-1}XW[X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P})]\\
&=(XWX^T)^{-1}XWZ
\end{align}</math>


where <math>Z=X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P})</math>


This is an adjusted response, and it is solved repeatedly as <math>\ \underline{P} </math>, <math>\ W </math>, and <math>\ Z </math> change. This algorithm is called [http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares iteratively reweighted least squares] because it solves the weighted least squares problem repeatedly.


Recall that linear regression by least squares finds the following minimum: <math>\min_{\underline{\beta}}(\underline{y}-X^T\underline{\beta})^T(\underline{y}-X^T\underline{\beta})</math>, which gives <math>\hat{\underline{\beta}}=(XX^T)^{-1}X\underline{y}</math>.


Similarly, we can say that <math>\underline{\beta}^{new}</math> is the solution of a weighted least squares problem:


<math>\underline{\beta}^{new} \leftarrow arg \min_{\underline{\beta}}(Z-X^T\underline{\beta})^T W(Z-X^T\underline{\beta})</math>


====WLS====
Actually, the weighted least squares estimator minimizes the weighted sum of squared errors
<math>
S(\beta) = \sum_{i=1}^{n}w_{i}[y_{i}-\mathbf{x}_{i}^{T}\beta]^{2}
</math>
where <math>\displaystyle w_{i}>0</math>.
Hence the WLS estimator is given by
<math>
\hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}y_{i}\right]
</math>


A weighted linear regression of the iteratively computed response
<math>
\mathbf{z}=\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p})
</math>


Therefore, we obtain
:<math>
\begin{align}
& \hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}z_{i}\right]
\\&
= \left[ \mathbf{XWX}^{T}\right]^{-1}\left[ \mathbf{XWz}\right]
\\&
= \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{XW}(\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p})) \\&
= \beta^{old}+ \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{X}(\mathbf{y}-\mathbf{p})
\end{align}
</math>
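As a small numerical check (our addition, with made-up data and the d-by-n convention used in these notes), the closed-form WLS estimator above can be compared against MATLAB's backslash solution of the equivalent re-weighted least squares problem:

 d = 3; n = 50;
 X = randn(d, n);                           % each column is one observation x_i
 z = randn(n, 1);                           % responses (e.g. the adjusted response Z)
 w = rand(n, 1) + 0.1;                      % positive weights w_i
 W = diag(w);
 beta_wls = (X*W*X') \ (X*W*z);             % closed-form WLS estimate
 beta_chk = (diag(sqrt(w))*X') \ (sqrt(w).*z);   % same problem solved by backslash; matches beta_wls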


'''note:''' Here we obtain <math>\underline{\beta}</math>, which is a <math>d\times{1}</math> vector, because we construct the model like <math>\underline{\beta}^T\underline{x}</math>. If we construct the model like <math>\beta_0+ \underline{\beta}^T\underline{x}</math>, then similar to linear regression, <math>\underline{\beta}</math> will be a <math>(d+1)\times{1}</math> vector.


:Choosing <math>\displaystyle\beta=0</math> seems to be a suitable starting value for the Newton-Raphson iteration procedure in this case. However, this does not guarantee convergence. The procedure will usually converge, since the log-likelihood function is concave, but overshooting can occur. In the rare cases that the log-likelihood decreases, cutting the step size by half restores convergence. Otherwise, we can only prove local convergence of the method, meaning the iteration converges only if the initial point is close enough to the exact solution. In practice, choosing an appropriate initial value is rarely a problem: an initial point is seldom so far from the exact solution that the iteration fails. <ref>C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, chapter 5 </ref> Besides, step-size halving will solve this problem. <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), 121.</ref>


For multiclass cases, the Newton algorithm can also be expressed as an iteratively reweighted least squares algorithm, but with a vector of <math>\ k-1 </math> responses and a nondiagonal weight matrix per observation. A coordinate-descent method can then be used to maximize the log-likelihood efficiently.
====Pseudo Code====
#<math>\underline{\beta} \leftarrow 0</math>
#Set <math>\,\underline{Y}</math>, the label associated with each observation <math>\,i=1...n</math>.
#Compute <math>\,\underline{P}</math> according to the equation <math>P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x}_i)}{1+exp(\underline{\beta}^T \underline{x}_i)}</math> for all <math>\,i=1...n</math>.
#Compute the diagonal matrix <math>\,W</math> by setting <math>\,w_{i,i}</math> to <math>P(\underline{x}_i;\underline{\beta})[1-P(\underline{x}_i;\underline{\beta})]</math> for all <math>\,i=1...n</math>.
#<math>Z \leftarrow X^T\underline{\beta}+W^{-1}(\underline{Y}-\underline{P})</math>.
#<math>\underline{\beta} \leftarrow (XWX^T)^{-1}XWZ</math>.
#If the new <math>\underline{\beta}</math> value is sufficiently close to the old value, stop; otherwise go back to step 3 (a worked MATLAB sketch of these steps follows this list).
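Below is a minimal MATLAB sketch of this pseudo code on synthetic data (our illustration: the data, dimensions and tolerance are made up; <math>X</math> is stored d-by-n as above and the model has no intercept term):

 n = 200; d = 2;
 X = randn(d, n);                          % columns are the observations x_i
 beta_true = [2; -1];
 Y = double(rand(n,1) < 1./(1 + exp(-X'*beta_true)));   % simulated 0/1 labels
 beta = zeros(d, 1);                       % step 1: beta <- 0
 for iter = 1:25
     P = 1./(1 + exp(-X'*beta));           % step 3: P(x_i; beta) for all i
     W = diag(P.*(1 - P));                 % step 4: diagonal weight matrix
     Z = X'*beta + W\(Y - P);              % step 5: adjusted response
     beta_new = (X*W*X') \ (X*W*Z);        % step 6: weighted least squares
     if norm(beta_new - beta) < 1e-8       % step 7: check convergence
         beta = beta_new;
         break;
     end
     beta = beta_new;
 end
 beta                                      % estimated coefficients, close to beta_true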


===Comparison with Linear Regression===
*'''Similarities'''
#They both attempt to estimate <math>\,P(Y=k|X=x)</math> (for logistic regression, we have so far only discussed the case where <math>\,k=0</math> or <math>\,k=1</math>).
#They both have linear boundaries.
:'''note:''' For linear regression, we assume the model is linear. The boundary is <math>P(Y=k|X=x)=\underline{\beta}^T\underline{x}_i+\beta_0=0.5</math> (linear)


::For logistic regression, the boundary is <math>P(Y=k|X=x)=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}=0.5 \Rightarrow exp(\underline{\beta}^T \underline{x})=1\Rightarrow \underline{\beta}^T \underline{x}=0</math> (linear)


*'''Differences'''
#Linear regression: <math>\,P(Y=k|X=x)</math> is a linear function of <math>\,x</math>; <math>\,P(Y=k|X=x)</math> is not guaranteed to fall between 0 and 1 or to sum up to 1.
#Logistic regression: <math>\,P(Y=k|X=x)</math> is a nonlinear function of <math>\,x</math>, and it is guaranteed to range from 0 to 1 and to sum up to 1.
===Comparison with LDA===
#The linear logistic model only considers the conditional distribution <math>\,P(Y=k|X=x)</math>. No assumption is made about <math>\,P(X=x)</math>.
#The LDA model specifies the joint distribution of <math>\,X</math> and <math>\,Y</math>.
#Logistic regression maximizes the conditional likelihood of <math>\,Y</math> given <math>\,X</math>: <math>\,P(Y=k|X=x)</math>.
#LDA maximizes the joint likelihood of <math>\,Y</math> and <math>\,X</math>: <math>\,P(Y=k,X=x)</math>.
#If <math>\,\underline{x}</math> is d-dimensional, the number of adjustable parameters in logistic regression is <math>\,d</math>. The number of parameters grows linearly w.r.t. dimension.
#If <math>\,\underline{x}</math> is d-dimensional, the number of adjustable parameters in LDA is <math>\,(2d)+d(d+1)/2+2=(d^2+5d+4)/2</math>. The number of parameters grows quadratically w.r.t. dimension (see the quick check below).
#LDA estimates parameters more efficiently by using more information about the data, and samples without class labels can also be used in LDA.
#As logistic regression relies on fewer assumptions, it seems to be more robust.
#In practice, logistic regression and LDA often give similar results.
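As a quick arithmetic check of the two parameter counts above (this snippet is ours, not from the lecture):

 d = 2;                               % number of features
 logistic_params = d                  % grows linearly in d
 lda_params = 2*d + d*(d+1)/2 + 2     % = (d^2+5d+4)/2, e.g. 9 when d = 2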


====By example====


Now we compare LDA and Logistic regression by an example. Again, we use them on the 2_3 data.
  >>load 2_3;
  >>[U, sample] = princomp(X');
  >>sample = sample(:,1:2);
  >>plot (sample(1:200,1), sample(1:200,2), '.');
  >>hold on;
  >>plot (sample(201:400,1), sample(201:400,2), 'r.');
:First, we do PCA on the data and plot the data points that represent 2 or 3 in different colors. See the previous example for more details.


  >>group = ones(400,1);
  >>group(201:400) = 2;
:Group the data points.


  >>[B,dev,stats] = mnrfit(sample,group);
  >>x=[ones(1,400); sample'];
:Now we use [http://www.mathworks.com/access/helpdesk/help/toolbox/stats/index.html?/access/helpdesk/help/toolbox/stats/mnrfit.html mnrfit] to apply logistic regression to classify the data. This function returns B, which is a <math>(d+1)\times{(k-1)}</math> matrix of estimates, where each column corresponds to the estimated intercept term and predictor coefficients. In this case, B is a <math>3\times{1}</math> matrix.


  >> B
  B =0.1861
    -5.5917
    -3.0547


:This is our <math>\underline{\beta}</math>. So the posterior probabilities are:
:<math>P(Y=1 | X=x)=\frac{exp(0.1861-5.5917X_1-3.0547X_2)}{1+exp(0.1861-5.5917X_1-3.0547X_2)}</math>.
:<math>P(Y=2 | X=x)=\frac{1}{1+exp(0.1861-5.5917X_1-3.0547X_2)}</math>


:The classification rule is:
:<math>\hat Y = 1</math>,    if <math>\,0.1861-5.5917X_1-3.0547X_2>=0</math>
:<math>\hat Y = 2</math>,    if <math>\,0.1861-5.5917X_1-3.0547X_2<0</math>


  >>f = sprintf('0 = %g+%g*x+%g*y', B(1), B(2), B(3));
  >>ezplot(f,[min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))])
:Plot the decision boundary by logistic regression.
[[File:Boundary-lr.png‎|frame|center|This is a decision boundary by logistic regression. The line shows how the two classes split.]]
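As a quick sanity check (our addition; it assumes the variables sample, group and B from the code above are still in the workspace), the fitted rule can be applied directly to the training points to obtain its empirical error rate:

  >>eta = [ones(400,1) sample]*B;    % 0.1861 - 5.5917*X1 - 3.0547*X2 for each point
  >>yhat = 2 - (eta >= 0);           % label 1 if eta >= 0, label 2 otherwise
  >>err = mean(yhat ~= group)        % empirical (training) error rate of the rule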


  >>[class, error, POSTERIOR, logp, coeff] = classify(sample, sample, group, 'linear');
  >>k = coeff(1,2).const;
  >>l = coeff(1,2).linear;
  >>f = sprintf('0 = %g+%g*x+%g*y', k, l(1), l(2));
  >>h=ezplot(f, [min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))]);
:Plot the decision boundary by LDA. See the previous example for more information about LDA in matlab.


[[File:Boundary-lda.png‎|frame|center| From this figure, we can see that the results of Logistic Regression and LDA are very similar.]]


==The Perceptron (Lecture October 23, 2009)==
== ''' 2009.10.21''' ==
[[File:misclass.png|300px|thumb|right|Figure 2: This figure shows a misclassified point and the movement of the decision boundary.]]
A Perceptron can be modeled as shown in Figure 1 of the previous lecture where<math>\,x_0</math> is the model intercept and <math>x_{1},\ldots,x_{d}</math> represent the feature data, <math>\sum_{i=0}^d \beta_{j}x_{j}</math> is a linear combination of some weights of these inputs, and <math>I(\sum_{i=1}^d \beta_{j}x_{j})</math>, where <math>\,I</math> indicates the sign of the expression and returns the label of the data point.


=== Multi-Class Logistic Regression ===


The Perceptron algorithm seeks a linear boundary between two classes.  A linear decision boundary can be represented by<math> \underline{\beta}^T\underline{x}+\beta_{0}. </math> The algorithm begins with an arbitrary hyperplane <math>\underline{\beta}^T\underline{x}+\beta_{0} </math> (initial guess). Its goal is to minimize the distance between the decision boundary and the misclassified data points. This is illustrated in Figure 2. It attempts to find the optimal  <math>\underline\beta</math> by iteratively adjusting the decision boundary until all points are on the correct side of the boundary.  It terminates when there are no misclassified points. 
Our earlier goal with logistic regression was to model the posteriors for a 2 class classification problem with a linear function bounded by the interval [0,1]. In that case our model was,<br /><br />
<br/>
<br/>
[[File:distance2.jpg|300px|thumb|right|Figure 3: This figure illustrates the derivation of the distance between the decision boundary and misclassified points]]
'''Derivation''':'' The distance between the decision boundary and misclassified points''. <br /><br />


If <math>\underline{x_{1}}</math> and <math>\underline{x_{2}}</math>both lie on the decision boundary then,<br /><br />
<math>\log\left(\frac{P(Y=1|X=x)}{P(Y=0|X=x)}\right)= \log\left(\frac{\frac{\exp(\beta^T x)}{1+\exp(\beta^T x)}}{\frac{1}{1+\exp(\beta^T x)}}\right) =\beta^Tx</math><br /><br />


:<math>
We can extend this idea to the more general case with K-classes. This model is specified with K - 1 terms where the Kth class in the denominator can be chosen arbitrarily.<br /><br />
\begin{align}
\underline{\beta}^T\underline{x_{1}}+\beta_{0} &=  \underline{\beta}^T\underline{x_{2}}+\beta_{0} \\
\underline{\beta}^T (x_{1}-x_{2})&=0
\end{align}
</math>


<math>\underline{\beta}^T (x_{1}-x_{2})</math> denotes an inner product. Since the inner product is 0 and <math>(\underline{x_{1}}-\underline{x_{2}})</math> is a vector lying on the decision boundary, <math>\underline{\beta}</math> is orthogonal to the decision boundary. <br /><br />
<math>\log\left(\frac{P(Y=i|X=x)}{P(Y=K|X=x)}\right)=\beta_i^Tx,\quad i \in \{1,\dots,K-1\} </math><br /><br />
Let <math>\underline{x_{i}}</math> be a misclassified point. <br /><br />  


Then the projection of the vector <math> \underline{x_{i}}</math> on the direction that is orthogonal to the decision boundary is <math>\underline{\beta}^T\underline{x_{i}}</math>.
The posteriors for each class are given by,<br /><br />
Now, if <math>\underline{x_{0}}</math> is also on the decision boundary, then <math>\underline{\beta}^T\underline{x_{0}}+\beta_{0}=0</math> and so  <math>\underline{\beta}^T\underline{x_{0}}= -\beta_{0}</math>. Looking at Figure 3, it can be seen that the distance between <math>\underline{x_{i}}</math> and the decision boundary is the absolute value of <math>\underline{\beta}^T\underline{x_{i}}+\beta_{0}. </math>
<br/>
<br/>
Consider <math>y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}).</math>
:Notice that if <math>\underline{x_{i}}</math> is classified ''correctly'' then this product is positive. This is because if it is classified correctly, then either both (<math>\underline{\beta}^T\underline{x_{i}}+\beta_{0})</math> and<math>\displaystyle y_{i}</math> are positive or they are both negative.  However, if <math>\underline{x_{i}}</math> is classified ''incorrectly'' then one of <math>(\underline{\beta}^T\underline{x_{i}}+\beta_{0})</math> and <math>\displaystyle y_{i}</math> is positive and the other is negative.  The result is that the above product is negative for a point that is misclassified. 
<br/>


For the algorithm, we need only consider the distance between the misclassified points and the decision boundary.


:Consider <math>\phi(\underline{\beta},\beta_{0})= -\displaystyle\sum_{i\in M} y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}) </math>  
<math>P(Y=i|X=x) = \frac{\exp(\beta_i^T x)}{1+\sum_{k=1}^{K-1}\exp(\beta_k^T x)}, \quad i \in \{1,\dots,K-1\}</math><br /><br />
which is a summation of positive numbers and where <math>\displaystyle M</math> is the set of all misclassified points. 
<br/>
The goal now becomes to <math>\min_{\underline{\beta},\beta_{0}} \phi(\underline{\beta},\beta_{0}). </math>  


This can be done using a [http://en.wikipedia.org/wiki/Gradient_descent gradient descent approach], which is a numerical method that takes one predetermined step in the direction of the gradient, getting closer to a minimum at each step, until the gradient is zero. A problem with this algorithm is the possibility of getting stuck in a local minimum.  To continue, the following derivatives are needed:
<math>P(Y=K|X=x) = \frac{1}{1+\sum_{k=1}^{K-1}\exp(\beta_k^T x)}</math><br /><br />


:<math>\frac{\partial \phi}{\partial \underline{\beta}}= -\displaystyle\sum_{i \in M}y_{i}\underline{x_{i}}
Seeing these equations as a weighted least squares problem makes them easier to derive.
\ \ \ \ \ \ \ \ \ \ \ \frac{\partial \phi}{\partial \beta_{0}}= -\displaystyle\sum_{i \in M}y_{i}</math>
<br/>


Then the gradient descent type algorithm (Perceptron Algorithm) is
Note that we still retain the property that the sum of the posteriors is 1. In general the posteriors are no longer complements of each other, as was true in the 2-class problem where we could express <math>\displaystyle P(Y=1|X=x)=1-P(Y=0|X=x)</math>. Fitting a logistic model for the K>2 class problem isn't as 'nice' as in the 2-class problem since we don't have the same simplification.
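The following minimal MATLAB sketch (our illustration; the coefficient values and dimensions are made up) evaluates these K-class posteriors at a single point and confirms that they sum to 1:

 % Illustrative only: K = 3 classes, d = 2 features, class K as the reference class.
 Beta = [ 1.0 -0.5;            % beta_1'
         -2.0  0.3];           % beta_2'   (K-1 rows, d columns)
 x = [0.7; -1.2];              % a query point
 e = exp(Beta*x);              % exp(beta_i' * x) for i = 1..K-1
 denom = 1 + sum(e);
 post = [e; 1]/denom;          % posteriors for classes 1..K-1 and for class K
 sum(post)                     % equals 1, as noted above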
:<math>
 
\begin{pmatrix}
===Multi-class kernel logistic regression===
  \underline{\beta}^{\mathrm{new}}\\
 
  \underline{\beta_0}^{\mathrm{new}}
Logistic regression (LR) and kernel logistic regression (KLR) have already proven their value in the statistical and machine learning community. Opposed to an empirically risk minimization approach such as employed by Support Vector Machines (SVMs), LR and KLR yield probabilistic outcomes based on a maximum likelihood argument. It seems that this framework provides a natural extension to multiclass classification tasks, which must be contrasted to the
\end{pmatrix}
commonly used coding approach.
=
 
\begin{pmatrix}
A paper uses the LS-SVM framework to solve the KLR problem. In that paper, the authors show that minimizing the negative penalized log-likelihood criterion is equivalent to solving, in each iteration, a weighted version of least squares support vector machines (wLS-SVMs). In the derivation it turns out that the global regularization term is reflected as usual in each step. A similar iterative weighting of wLS-SVMs with different weighting factors is reported to converge to an SVM solution.
  \underline{\beta}^{\mathrm{old}}\\
 
  \underline{\beta_0}^{\mathrm{old}}
Unlike SVMs, KLR by its nature is not sparse and needs all training samples in its final model. Different adaptations to the original algorithm were proposed to obtain sparseness. The second one uses a sequential minimization optimization (SMO) approach and in the last case, the binary KLR problem is reformulated into a geometric programming system which can be efficiently solved by an interior-point algorithm. In the LS-SVM framework, fixed-size LS-SVM has shown its value on large data sets. It approximates the feature map using a spectral decomposition, which leads to a sparse representation of the model when estimating in the primal space.  They use this technique as a practical implementation of KLR with estimation in the primal space. To reduce the size of the Hessian, an alternating descent version of Newton’s method is used which has the extra advantage that it can be easily used in a distributed computing environment. The proposed algorithm is compared to existing algorithms using small size to large scale benchmark data sets.
\end{pmatrix}
 
+\rho
Paper's Link: [[ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/karsmakers/20070424IJCNN_pk.pdf]]
\begin{pmatrix}
 
  y_i \underline{x_i}\\
=== Perceptron (Foundation of Neural Network) ===
  y_i
 
\end{pmatrix}
==== Separating Hyperplane Classifiers ====
</math>
Separating hyperplane classifiers try to separate the data using linear decision boundaries. When the classes overlap, they can be generalized to the support vector machine, which constructs nonlinear boundaries by constructing a linear boundary in an enlarged and transformed feature space.
where <math>\displaystyle\rho</math> is the magnitude of each step called the "learning rate" or the "convergence rate". The algorithm continues until <math>
\begin{pmatrix}
==== Perceptron ====
  \underline{\beta}^{\mathrm{new}}\\
[[Image:Simpleperceptron.jpg|thumb|right|325px|Figure 1: Diagram of a linear perceptron.]]
  \underline{\beta_0}^{\mathrm{new}}
Recall the use of Least Squares regression as a classifier, shown to be identical to LDA. To classify points with least squares we take the sign of a linear combination of data points and assign a label equivalent to +1 or -1.
\end{pmatrix}
 
=
Least Squares returns the sign of a linear combination of data points as the class label
\begin{pmatrix}
 
  \underline{\beta}^{\mathrm{old}}\\
<math>sign(\underline{\beta}^T \underline{x} + {\beta}_0) = sign(\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2})</math>
  \underline{\beta_0}^{\mathrm{old}}
 
\end{pmatrix} </math>
 
or until it has iterated a specified number of times. If the algorithm converges, it has found a linear classifier, ie., there are no misclassified points.  
In the 1950s [http://en.wikipedia.org/wiki/Frank_Rosenblatt Frank Rosenblatt] developed an iterative linear classifier while at Cornell University known as the Perceptron. The concept of a perceptron was fundamental to the later development of the [http://en.wikipedia.org/wiki/Artificial_neural_network Artificial Neural Network] models. The perceptron is a simple type of neural network which models the electrical signals of [http://en.wikipedia.org/wiki/Biological_neural_network biological neurons]. In fact, it was the first neural network to be algorithmically described. <ref>Simon S. Haykin, Neural Networks and Learning Machines, (Prentice Hall 2008). </ref>
<br/>
<br/>
====Problems with the Algorithm and Issues Affecting Convergence====
#If the data is not separable, then the Perceptron algorithm will not converge since it cannot find a linear classifier that classifies all of the points correctly.  
#Convergence rates depend on the size of the gap between classes. If the gap is large, then the algorithm converges quickly. However, if the gap is small, the algorithm converges slowly. This problem can be eliminated by using basis expansions technique. To be specific, we try to find a hyperplane not in the original space, but in the enlarged space obtained by using some basis functions.
#If the classes are separable, there exists infinitely many solutions to Perceptron, all of which are hyperplanes.  
#The speed of convergence of the algorithm is also dependent on the value of <math>\displaystyle\rho</math>, the learning rate.  A larger value of <math>\displaystyle\rho</math> could yield quicker convergence, but if this value is too large, it may also result in “skipping over” the minimum that the algorithm is trying to find and possibly oscillating forever between the last two points, before and after the min.
#A perfect separation is not always possible, or even desirable. If observations from different classes share the same input, a model that separates the training data perfectly is overfitting and will generally have poor predictive performance.
#The [http://annet.eeng.nuim.ie/intro/course/chpt2/convergence.shtml perceptron convergence theorem] states that if there exists an exact solution (in other words, if the training data set is linearly separable), then the perceptron learning algorithm is guaranteed to find an exact solution in a finite number of steps. Proofs of this theorem can be found for example in Rosenblatt (1962), Block (1962), Nilsson (1965), Minsky and Papert (1969), Hertz et al. (1991), and Bishop (1995a). Note, however, that the number of steps required to achieve convergence could still be substantial, and in practice, until convergence is achieved we will not be able to distinguish between a nonseparable problem and one that is simply slow to converge<ref>
Pattern Recognition and Machine Learning,Christopher M. Bishop,194


</ref>.
As in other linear classification methods like Least Squares, Rosenblatt's classifier determines a hyperplane for the decision boundary. Linear methods all determine slightly different decision boundaries, Rosenblatt's algorithm seeks to minimize the distance between the decision boundary and the misclassified points <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009),156.</ref>.  
====Comment on gradient descent algorithm====
Consider yourself on the peak  and you want to get to the land as fast as possible. So which direction should you step? Intuitively it should be the direction in which the height decreases fastest, which is given by the gradient. However, if the mountain has a saddle shape and you initially stand in the middle, then you will finally arrive at the saddle point (local minimum) and get stuck there.


In addition, note that in the final form of our gradient descent algorithm, we get rid of the summation over <math>\,i</math> (all data points). Actually, this is an alternative of the original gradient descent algorithm (sometimes called batch gradient descent) known as Stochastic gradient descent, where we approximate the true gradient by only evaluating on a single training example. This means that <math>\,{\beta}</math> gets improved by computation of only one sample. When there is a large data set, say, population database, it's very time-consuming to do summation over millions of samples. By Stochastic gradient descent, we can treat the problem sample by sample and still get decent result in practice.
Particular to the iterative nature of the solution, the problem has no global mean (not convex). It does not converge to give a unique hyperplane, and the solutions depend on the size of the gap between classes. If the classes are separable then the algorithm is shown to converge to a local mean. The proof of this convergence is known as the ''perceptron convergence theorem''. However, for overlapping classes convergence to a local mean cannot be guaranteed.


<br/>
*A Perceptron applet can be found at http://isl.ira.uka.de/neuralNetCourse/2004/VL_11_5/Perceptron.html .


==Neural Networks (NN) - October 28, 2009 ==
If we find a hyperplane that is not unique between 2 classes, there will be infinitely many solutions obtained from the perceptron algorithm.


A neural network is a parallel, distributed information processing structure consisting of processing elements interconnected together with signal channels called connections. Each processing element has a single output connection with branches that "fan out" onto as many connections as desired each carrying the same signal - the processing element output signal. <ref>
Theory of the Backpropagation Neural Network, R. Necht-Nielsen </ref> It is a multistage regression or classification model represented by a network. Figure 1 is an example of a typical neural network but it can have many different forms.
[[File:NN.png|300px|thumb|right|Figure 1: General Structure of a Neural Network.]]
A regression problem typically has only one unit in the output layer. In a k-class classification problem, there are usually k units in the output layer that each represent the probability of class '''k''' and each <math>\displaystyle y_k</math> is coded (0,1).


===Activation Function===
As seen in Figure 1, after training, the perceptron determines the label of the data by computing the sign of a linear combination of components.
Activation Function is a term that is frequently used in classification by NN.  


In perceptron, we have a "sign" function that takes the sign of a weighted sum of input features.
====A Perceptron Example====


[[File:signfuncperceptron.png|200px|]]
The perceptron network can figure out the decision boundary line even if we don't know how to draw the line. We just have to give it some examples first. For example:
<br>The sign function is of the form [[File:signfunc1.png|30px|]] and is not continuous at 0. Thus, we replace it by a smooth function <math>\displaystyle \sigma </math> of the form [[File:signfunc2.png|30px|]] and call it the '''activation function'''.
{| class="wikitable"
<br>The choice of this function <math>\displaystyle \sigma </math> is determined by the properties of the data and the assumed distribution of target variables, but for multiple binary classification problems <math>\sigma(a)=\frac {1}{1+e^{-a}}</math> (inverse-logit) form is often used.
|-
! Features:x1, x2, x3


'''Note:''' A key difference between the perceptron and NN is that the neural network uses continuous nonlinearities in the units, for the purpose of differentiation, whereas the perceptron often uses a non-differentiable activation function. The neural network function is differentiable with respect to the network parameters so that a gradient descent method can be used in training. Moreover, perceptron is a linear classifier, while NN, by combining layers of perceptrons, is able to classify non-linear problems through proper training.
! Answer
|-
| 1,0,0
| +1
|-
| 1,0,1
| +1
|-
| 1,1,0
| +1
|-
| 0,0,1
| -1
|-
| 0,1,1
| -1
|-
| 1,1,1
| -1
|}
Then the perceptron starts out not knowing how to separate the answers so it guesses. For example we input 1,0,0 and it guesses -1. But the right answer is +1. So the perceptron adjusts its line and we try the next example. Eventually the perceptron will have all the answers right.


By assigning some weights to the connectors in the neural network (see diagram above) we weigh the input that comes into the perceptron, to get an output that in turn acts as an input to the next layer of perceptrons, and so on for each layer. This type of neural network is called [http://en.wikipedia.org/wiki/Feedforward_neural_network Feed-Forward Neural Network]. Applications to Feed-Forward Neural Networks include data reduction, speech recognition, sensor signal processing, and ECG abnormality detection, to name a few. <ref>J. Annema, Feed-Forward Neural Networks, (Springer 1995), pp. 9 </ref>
y=[1;1;1;-1;-1;-1];
 
  x=[1,0,0;1,0,1;,1,1,0;0,0,1;0,1,1;1,1,1]';
===Back-propagation===
b_0=0;
For a while, the Neural Network model was just an idea, since there were no algorithms for training the model until 1986, when Geoffrey Hinton <ref>
b=[1;1;1];
http://www.cs.toronto.edu/~hinton/backprop.html
rho=.5;
</ref> came up with an algorithm called '''back-propagation'''. After that, a number of other training algorithms and various configurations of Neural Networks were implemented.
for j=1:100;
    changed=0;
    for i=1:6
        d=(b'*x(:,i)+b_0)*y(i);
        if d<0
            b=b+rho*x(:,i)*y(i);
            b_0=b_0+rho*y(i);
            changed=1;
        end
    end
    if changed==0
        break;
    end
end
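A quick check of the result (our addition; it reuses the variables b, b_0, x and y defined in the code above): once the loop exits without making changes, no training point is misclassified in the sense used by the loop, i.e. <math>y_{i}(\underline{b}^T\underline{x}_{i}+b_0)\ge 0</math> for every point.

 margins = (b'*x + b_0)' .* y;   % y_i * (b'x_i + b_0) for each training point
 disp(all(margins >= 0))         % prints 1 once the perceptron has converged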


When we were talking about perceptrons, we applied gradient descent algorithm for optimizing the weights. Back-propagation uses this idea of gradient descent to train neural network based on the chain rule in calculus.  
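As a concrete (and deliberately tiny) illustration of this idea, the following MATLAB sketch performs one forward pass and one gradient-descent weight update for a network with a single hidden layer of sigmoid units and one linear output unit, using the squared error <math>\|y-\hat y\|^2</math>. The sizes, data and learning rate are made up for illustration; the variable names mirror the notation used in the derivation below.

 sigma  = @(a) 1 ./ (1 + exp(-a));          % smooth activation function
 dsigma = @(a) sigma(a) .* (1 - sigma(a));  % its derivative, sigma'(a)

 d = 3; p = 4;                 % input dimension and number of hidden units
 x1 = randn(d, 1); y = 1;      % one training point and its target
 U  = 0.1*randn(p, d);         % hidden weights u_jl, initialized near zero
 w  = 0.1*randn(p, 1);         % weights w_i applied to the hidden outputs z_i
 rho = 0.1;                    % learning rate

 a = U*x1;                     % a_j: weighted sums entering the hidden units
 z = sigma(a);                 % z_j = sigma(a_j)
 yhat = w'*z;                  % network output

 err = (y - yhat)^2;                       % squared error for this point
 derr_dyhat = -2*(y - yhat);               % d err / d yhat
 grad_w = derr_dyhat * z;                  % d err / d w_i = (d err / d yhat) * z_i
 delta  = dsigma(a) .* (w * derr_dyhat);   % delta_j = sigma'(a_j) * w_j * (d err / d yhat)
 grad_U = delta * x1';                     % d err / d u_jl = delta_j * x_l

 w = w - rho*grad_w;           % gradient-descent update of the weights
 U = U - rho*grad_U;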
==The Perceptron (Lecture October 23, 2009)==
[[File:misclass.png|300px|thumb|right|Figure 2: This figure shows a misclassified point and the movement of the decision boundary.]]
A Perceptron can be modeled as shown in Figure 1 of the previous lecture where<math>\,x_0</math> is the model intercept and <math>x_{1},\ldots,x_{d}</math> represent the feature data, <math>\sum_{i=0}^d \beta_{j}x_{j}</math> is a linear combination of some weights of these inputs, and <math>I(\sum_{i=1}^d \beta_{j}x_{j})</math>, where <math>\,I</math> indicates the sign of the expression and returns the label of the data point.  


Assume that the output layer has only one unit, so we are working with a regression problem. Later we will see how this can be extended to more units in the output layer and thus turned into a classification problem.


For simplicity, there is only 1 unit at the end and assume for the moment we are doing regression.
The Perceptron algorithm seeks a linear boundary between two classes.  A linear decision boundary can be represented by<math> \underline{\beta}^T\underline{x}+\beta_{0}. </math> The algorithm begins with an arbitrary hyperplane <math>\underline{\beta}^T\underline{x}+\beta_{0} </math> (initial guess). Its goal is to minimize the distance between the decision boundary and the misclassified data points. This is illustrated in Figure 2. It attempts to find the optimal  <math>\underline\beta</math> by iteratively adjusting the decision boundary until all points are on the correct side of the boundary.  It terminates when there are no misclassified points. 
<br/>
<br/>
[[File:distance2.jpg|300px|thumb|right|Figure 3: This figure illustrates the derivation of the distance between the decision boundary and misclassified points]]
'''Derivation''':'' The distance between the decision boundary and misclassified points''. <br /><br />


[[File:backpropagation.png|300px|]]
If <math>\underline{x_{1}}</math> and <math>\underline{x_{2}}</math>both lie on the decision boundary then,<br /><br /> 


Note that we make a distinction between the input weights <math>\displaystyle (w_i)</math> and hidden weights <math>\displaystyle (u_i)</math>.
:<math>
<br><br>Within each perceptron we have a function <math>\displaystyle z_i=\sigma(a_i)</math> that takes input <math>\displaystyle a_i</math> and outputs <math>\displaystyle z_i's</math>. The <math>\displaystyle z_i's</math> are the inputs into the final output of the model <math>\Rightarrow \hat y_i=\sum_{i=1}^p w_i z_i</math>
\begin{align}
\underline{\beta}^T\underline{x_{1}}+\beta_{0} &= \underline{\beta}^T\underline{x_{2}}+\beta_{0} \\
\underline{\beta}^T (x_{1}-x_{2})&=0
\end{align}
</math>


We can find the error of the neural network output by evaluating the squared difference between the true classification and the resulting classification output <math>\Rightarrow \displaystyle error=||y-\hat y ||^2  </math>
<math>\underline{\beta}^T (x_{1}-x_{2})</math> denotes an inner product. Since the inner product is 0 and <math>(\underline{x_{1}}-\underline{x_{2}})</math> is a vector lying on the decision boundary, <math>\underline{\beta}</math> is orthogonal to the decision boundary. <br /><br />
   
Let <math>\underline{x_{i}}</math> be a misclassified point. <br /><br />  


<br>'''First find derivative of the model error with respect to output weights <math>\displaystyle w_i</math>'''<br><math>\frac{\partial err}{\partial w_i}=\frac{\partial err}{\partial \hat y} \cdot \frac{\partial \hat y}{\partial w_i}</math>  
Then the projection of the vector <math> \underline{x_{i}}</math> on the direction that is orthogonal to the decision boundary is <math>\underline{\beta}^T\underline{x_{i}}</math>.
<br><math>\frac{\partial err}{\partial w_i}=2(y-\hat y) \cdot z_i</math>
Now, if <math>\underline{x_{0}}</math> is also on the decision boundary, then <math>\underline{\beta}^T\underline{x_{0}}+\beta_{0}=0</math> and so  <math>\underline{\beta}^T\underline{x_{0}}= -\beta_{0}</math>. Looking at Figure 3, it can be seen that the distance between <math>\underline{x_{i}}</math> and the decision boundary is the absolute value of <math>\underline{\beta}^T\underline{x_{i}}+\beta_{0}. </math>
<br/>
<br/>
Consider <math>y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}).</math>
:Notice that if <math>\underline{x_{i}}</math> is classified ''correctly'' then this product is positive. This is because if it is classified correctly, then either both (<math>\underline{\beta}^T\underline{x_{i}}+\beta_{0})</math> and<math>\displaystyle y_{i}</math> are positive or they are both negative.  However, if <math>\underline{x_{i}}</math> is classified ''incorrectly'' then one of <math>(\underline{\beta}^T\underline{x_{i}}+\beta_{0})</math> and <math>\displaystyle y_{i}</math> is positive and the other is negative.  The result is that the above product is negative for a point that is misclassified. 
<br/>


<br>'''Now we need to find the derivative of the model error with respect to hidden weights <math>\displaystyle u_i's</math>'''
For the algorithm, we need only consider the distance between the misclassified points and the decision boundary.
<br>Consider the following diagram that opens up the hidden layers of the neural network:


[[File:propagationhidden.png|300px|]]
:Consider <math>\phi(\underline{\beta},\beta_{0})= -\displaystyle\sum_{i\in M} y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}) </math>
which is a summation of positive numbers and where <math>\displaystyle M</math> is the set of all misclassified points.
<br/>
The goal now becomes to <math>\min_{\underline{\beta},\beta_{0}} \phi(\underline{\beta},\beta_{0}). </math>


''i j are reversed!''
This can be done using a [http://en.wikipedia.org/wiki/Gradient_descent gradient descent approach], which is a numerical method that takes one predetermined step in the direction of the gradient, getting closer to a minimum at each step, until the gradient is zero. A problem with this algorithm is the possibility of getting stuck in a local minimum.  To continue, the following derivatives are needed:


Notice that the weighted sum on the output of the perceptrons at layer <math>\displaystyle l</math> are the inputs into the perceptrons at layer <math>\displaystyle j</math> and so on for all hidden layers.
:<math>\frac{\partial \phi}{\partial \underline{\beta}}= -\displaystyle\sum_{i \in M}y_{i}\underline{x_{i}}
\ \ \ \ \ \ \ \ \ \ \ \frac{\partial \phi}{\partial \beta_{0}}= -\displaystyle\sum_{i \in M}y_{i}</math>
<br/>


So, using the chain rule
Then the gradient descent type algorithm (Perceptron Algorithm) is
<br><math>\frac{\partial err}{\partial u_{jl}}=\frac{\partial err}{\partial a_j} \cdot \frac{\partial a_j}{\partial u_{jl}}</math>
:<math>
<br><math>\frac{\partial err}{\partial u_{jl}}=\delta_j \cdot z_l</math>
\begin{pmatrix}
 
  \underline{\beta}^{\mathrm{new}}\\
Note that a change in <math>\,a_j</math> causes changes in all <math>\,a_i</math> in the next layer, on which the error depends; thus we need to sum over <math>i</math> in the chain rule:
  \underline{\beta_0}^{\mathrm{new}}
<math>\delta_j = \frac{\partial err}{\partial a_j} = \sum_i \frac{\partial err}{\partial a_i} \cdot \frac{\partial a_i}{\partial a_j} =\sum_i \delta_i \cdot \frac{\partial a_i}{\partial a_j}</math>
\end{pmatrix}
<br><math>\,\frac{\partial a_i}{\partial a_j}=\frac{\partial a_i}{\partial z_j} \cdot \frac{\partial z_j}{\partial a_j}=u_{ij} \cdot \sigma'(a_j)</math> Using the activation function <math>\,\sigma(\cdot)</math>
=
 
\begin{pmatrix}
So <math>\delta_j = \sum_i \delta_i \cdot u_{ij} \cdot \sigma'(a_j)</math>
  \underline{\beta}^{\mathrm{old}}\\
<br><math>\delta_j = \sigma'(a_j)\sum_i \delta_i \cdot u_{ij}</math>
  \underline{\beta_0}^{\mathrm{old}}
 
\end{pmatrix}
Having calculated the error that the output creates, we can propagate this error back to the previous layers while adjusting the weights to solve a particular problem.
+\rho
 
\begin{pmatrix}  
==Neural Networks (NN) - October 30, 2009 ==
  y_i \underline{x_i}\\  
 
  y_i
=== Back-propagation ===
\end{pmatrix}
The idea is that we first feed an input from the training set to the Neural Network, then find the error rate at the output and then we propagate the error to previous layers and for each edge of weight <math>\,u_{ij}</math> we find <math>\frac{\partial \mathrm{err}}{\partial u_{ij}}</math>. Having the error rates at hand we adjust the weight of each edge by taking steps proportional to the negative of the gradient to decrease the error at output. The next step is to apply the next input from the training set and go through the described adjustment procedure.
</math>
The overview of Back-propagation algorithm:
where <math>\displaystyle\rho</math> is the magnitude of each step called the "learning rate" or the "convergence rate".  The algorithm continues until <math>
#Feed a point <math>\,x</math> in the training set to the network, and find the output of all the nodes.
\begin{pmatrix}  
#Evaluate <math>\,\delta_k=y_k-\hat{y_k}</math> for all output units, where <math>y_k</math> is the expected output and <math>\hat{y_k}</math> is the real output.
  \underline{\beta}^{\mathrm{new}}\\  
#By propagating to the previous layers evaluate all <math>\,\delta_j</math>s for hidden units: <math>\,\delta_j=\sigma'(a_j)\sum_i \delta_i u_{ij}</math> where <math>i</math> is associated to the previous layer.
  \underline{\beta_0}^{\mathrm{new}}
#Using <math>\frac{\partial \mathrm{err}}{\partial u_{jl}} = \delta_j\cdot z_l</math> find all the derivatives.
\end{pmatrix}
#Adjust each weight by taking steps proportional to the negative of the gradient: <math>u_{jl}^{\mathrm{new}} \leftarrow u_{jl}^{\mathrm{old}} -\rho \frac{\partial \mathrm{err}}{\partial u_{jl}}</math>
=
#Feed the next point in the training set and repeat the above steps.
\begin{pmatrix}  
This still leaves the question of how to initialize the weights <math>\,u_{ij}, w_i</math>.  The method of choosing weights mentioned in class was to randomize the weights before the first step. This is not likely to be near the optimal solution in every case, but is simple to implement. To be more specific, random values near zero are a good choice of initial weights: in this case, the model evolves from a nearly linear one to a nonlinear one, as desired. An alternative is to use an orthogonal least squares method to find the initial weights <ref>http://www.mitpressjournals.org/doi/abs/10.1162/neco.1995.7.5.982</ref>.  Regression is performed on the weights and output by using a linear approximation of <math>\,\sigma(a_i)</math>, and finds optimal weights in the linear model.  Back-propagation is used afterward to find the optimal solution, since the NN is non-linear.
  \underline{\beta}^{\mathrm{old}}\\  
  \underline{\beta_0}^{\mathrm{old}}
\end{pmatrix} </math>  
or until it has iterated a specified number of times. If the algorithm converges, it has found a linear classifier, ie., there are no misclassified points.
<br/>
<br/>
====Problems with the [http://www.cs.cmu.edu/~avrim/ML09/lect0126.pdf Algorithm] and Issues Affecting Convergence====
#The output of a perceptron can take on only one of two values (+1 or -1); that is, it can only be used for two-class classification.
#If the data are not separable, then the Perceptron algorithm will not converge, since it cannot find a linear classifier that classifies all of the points correctly.
#Convergence rates depend on the size of the gap between classes.  If the gap is large, the algorithm converges quickly; if the gap is small, the algorithm converges slowly. This problem can be alleviated by using the basis expansion technique: to be specific, we try to find a separating hyperplane not in the original space, but in the enlarged space obtained by applying some basis functions.
#If the classes are separable, there exist infinitely many solutions for the Perceptron, all of which are separating hyperplanes.
#The speed of convergence of the algorithm also depends on the value of <math>\displaystyle\rho</math>, the learning rate. A larger value of <math>\displaystyle\rho</math> can yield quicker convergence, but if this value is too large, it may also result in "skipping over" the minimum that the algorithm is trying to find, possibly oscillating forever between the two points on either side of the minimum.
#A perfect separation is not always achievable, or even desirable. If observations from different classes share the same input values, a model that separates the training data perfectly is overfitting and will generally have poor predictive performance.
#The [http://annet.eeng.nuim.ie/intro/course/chpt2/convergence.shtml perceptron convergence theorem] states that if there exists an exact solution (in other words, if the training data set is linearly separable), then the perceptron learning algorithm is guaranteed to find an exact solution in a finite number of steps. Proofs of this theorem can be found, for example, in Rosenblatt (1962), Block (1962), Nilsson (1965), Minsky and Papert (1969), Hertz et al. (1991), and Bishop (1995a). Note, however, that the number of steps required to achieve convergence could still be substantial, and in practice, until convergence is achieved we will not be able to distinguish between a nonseparable problem and one that is simply slow to converge.<ref>Pattern Recognition and Machine Learning, Christopher M. Bishop, p. 194</ref>
#The learning rate <math>\,\rho</math> is usually taken to be a constant. Viewed as a stochastic approximation process, however, <math>\,\rho</math> should decrease as the iterations increase; in theory this allows the algorithm to reach the optimal solution.
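To make the update rule discussed above concrete, here is a minimal R sketch of the perceptron algorithm on a toy two-class data set. This is our own illustrative code, not from the lecture; the function and variable names (<code>perceptron</code>, <code>rho</code>, <code>n_iter</code>) are made up.

  # Minimal perceptron sketch (illustrative only).
  # X: n x d matrix of features, y: labels in {-1, +1}, rho: learning rate.
  perceptron <- function(X, y, rho = 0.5, n_iter = 100) {
    beta  <- rep(0, ncol(X))   # weight vector
    beta0 <- 0                 # intercept
    for (iter in 1:n_iter) {
      misclassified <- FALSE
      for (i in 1:nrow(X)) {
        if (y[i] * (sum(beta * X[i, ]) + beta0) <= 0) {   # point i is misclassified
          beta  <- beta  + rho * y[i] * X[i, ]            # step toward classifying point i correctly
          beta0 <- beta0 + rho * y[i]
          misclassified <- TRUE
        }
      }
      if (!misclassified) break   # converged: no misclassified points remain
    }
    list(beta = beta, beta0 = beta0)
  }
  # Toy separable data
  set.seed(1)
  X <- rbind(matrix(rnorm(40, mean = 2), ncol = 2), matrix(rnorm(40, mean = -2), ncol = 2))
  y <- c(rep(1, 20), rep(-1, 20))
  fit <- perceptron(X, y)

On separable data such as this, the loop stops once a separating hyperplane is found; on non-separable data it simply runs out of iterations, which mirrors the convergence issues listed above.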


====Comment on gradient descent algorithm====
Imagine yourself on a mountain peak, wanting to get down to the valley as fast as possible. In which direction should you step? Intuitively, it should be the direction in which the height decreases fastest, which is given by the negative gradient. However, if the mountain has a saddle shape and you initially stand in the middle, you may end up at the saddle point or a local minimum and get stuck there.

In addition, note that in the final form of our gradient descent algorithm, we got rid of the summation over <math>\,i</math> (all data points). This is actually an alternative to the original gradient descent algorithm (sometimes called batch gradient descent), known as stochastic gradient descent, in which we approximate the true gradient by evaluating it on a single training example. This means that <math>\,{\beta}</math> is improved by the computation on only one sample. When there is a large data set, say a population database, it is very time-consuming to sum over millions of samples. With stochastic gradient descent, we can treat the problem sample by sample and still get decent results in practice.

*A Perceptron applet can be found at http://isl.ira.uka.de/neuralNetCourse/2004/VL_11_5/Perceptron.html .
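The following small R sketch contrasts one batch gradient step with one stochastic gradient step for least-squares regression. It is our own illustrative example (data, step size and names are made up), not part of the lecture.

  set.seed(2)
  n <- 1000
  x <- cbind(1, rnorm(n))                 # design matrix with intercept column
  y <- x %*% c(1, 2) + rnorm(n, sd = 0.5) # true coefficients are (1, 2)
  rho  <- 0.01
  beta <- c(0, 0)
  # One batch gradient step: uses the full sum over all n points
  grad_batch <- -2 * t(x) %*% (y - x %*% beta) / n
  beta_batch <- beta - rho * grad_batch
  # One stochastic gradient step: uses a single randomly chosen point i
  i <- sample(n, 1)
  grad_i   <- -2 * x[i, ] * as.numeric(y[i] - sum(x[i, ] * beta))
  beta_sgd <- beta - rho * grad_i

The stochastic step costs a single observation's worth of work instead of a pass over all n points, which is why it scales to very large data sets.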
==Neural Networks (NN) - October 28, 2009 ==
===Introduction===

A [http://en.wikipedia.org/wiki/Neural_network neural network] is a two-stage regression or classification model. It can be represented by a network diagram. It is a parallel, distributed information-processing structure consisting of processing elements interconnected by signal channels called connections. Each processing element has a single output connection with branches that "fan out" onto as many connections as desired, each carrying the same signal - the processing element's output signal.
<ref> Haykin, Simon (2009). Neural Networks and Learning Machines. Pearson Education, Inc. </ref>

A neural network resembles the brain in two respects:

1. Knowledge is acquired by the network from its environment through a learning process.

2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.
<ref>Theory of the Backpropagation Neural Network, R. Hecht-Nielsen</ref>

It is a multistage regression or classification model represented by a network. Figure 1 is an example of a typical neural network, but a neural network can take many different forms. The same network structure applies to both regression and classification.
[[File:NN.png|300px|thumb|right|Figure 1: General Structure of a Neural Network.]]

*A regression problem typically has only one unit <math>\ y_1 </math> in the output layer, but these networks can handle multiple quantitative responses in a seamless fashion.

*In a k-class classification problem, there are usually k target units <math>\ y_1,...,y_k </math> in the output layer, where each <math>\displaystyle y_k</math> represents the probability of class '''k''' and each <math>\displaystyle y_k</math> is coded as 0 or 1.
===Activation Function===
[http://en.wikipedia.org/wiki/Activation_function Activation function] is a term that is frequently used in classification by NN.

In the perceptron, we have a "sign" function that takes the sign of a weighted sum of the input features.

[[File:signfuncperceptron.png|200px|]]
<br>The sign function is of the form [[File:signfunc1.png|30px|]]; it is not continuous at 0 and we cannot take its derivative. Thus, we replace it by a smooth function <math>\displaystyle \sigma </math> of the form [[File:signfunc2.png|30px|]] and call it the '''activation function'''.
<br>The choice of this function <math>\displaystyle \sigma </math> is determined by the properties of the data and the assumed distribution of the target variables, but for multiple binary classification problems the logistic function, also known as the inverse-logit ([http://en.wikipedia.org/wiki/Sigmoid_function sigmoid function]), is often used:
<math>\sigma(a)=\frac {1}{1+e^{-a}}</math>

[[File:AF.jpg|300px|thumb|right|Figure: Graph of <math>\sigma(a)=\frac {1}{1+e^{-a}}</math>]]

Some important properties of the activation function are listed below.
# The activation function is nonlinear. It can be shown that if the activation function of the hidden units is linear, a three-layer neural network is equivalent to a two-layer one.
# The activation function saturates, which means it has a maximum and a minimum output value. This property ensures that the weights are bounded and therefore that the search time is limited.
# The activation function is continuous and smooth.
# The activation function is monotonic. This property is not strictly necessary; for example, RBF networks, which are also popular models, use non-monotonic (radial) activation functions.

'''Note:''' A key difference between a perceptron and a neural network is that a neural network uses continuous nonlinearities in the units, for the purpose of differentiation, whereas the perceptron often uses a non-differentiable activation function. The neural network function is differentiable with respect to the network parameters, so a gradient descent method can be used in training. Moreover, a perceptron is a linear classifier, whereas a neural network, by introducing the nonlinear transformation <math>\ \sigma </math>, greatly enlarges the class of linear models; by combining layers of perceptrons, a neural network is able to classify non-linear problems through proper training.

By assigning weights to the connectors in the neural network (see the diagram above) we weight the input that comes into each perceptron, to get an output that in turn acts as an input to the next layer of perceptrons, and so on for each layer. (There are no cross-connections between units in the same layer and no backward connections from layers downstream. Typically, units in layer k provide input only to units in layer k+1.) This type of neural network is called a [http://en.wikipedia.org/wiki/Feedforward_neural_network Feed-Forward Neural Network]. Applications of Feed-Forward Neural Networks include data reduction, speech recognition, sensor signal processing, and ECG abnormality detection, to name a few. <ref>J. Annema, Feed-Forward Neural Networks, (Springer 1995), pp. 9 </ref>
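A quick R illustration (our own snippet, not from the lecture) of the logistic activation function defined above, together with the derivative identity <math>\sigma'(a)=\sigma(a)(1-\sigma(a))</math> that back-propagation makes heavy use of:

  sigma       <- function(a) 1 / (1 + exp(-a))
  sigma_prime <- function(a) sigma(a) * (1 - sigma(a))  # handy identity used in back-propagation
  a <- seq(-6, 6, by = 0.1)
  plot(a, sigma(a), type = "l", ylab = "sigma(a)")       # saturates at 0 and 1
  lines(a, sigma_prime(a), lty = 2)                      # derivative is largest near a = 0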
===Back-propagation===
Introduction:

For a while, the Neural Network model was just an idea, since there were no algorithms for training the model until 1986, when Geoffrey Hinton <ref>
http://www.cs.toronto.edu/~hinton/backprop.html
</ref> devised an algorithm called '''back-propagation''' [http://en.wikipedia.org/wiki/Backpropagation#Algorithm]. After that, a number of other training algorithms and various configurations of neural networks were implemented.

Work procedure:
Each neuron receives signals from the neurons of the previous layer, and each of these signals is multiplied by a different weight value. The weighted inputs are summed and passed through the activation function, which scales the output to a fixed range of values. The output is then broadcast to all of the neurons in the next layer; i.e., we apply the input values to the inputs of the first layer, allow the signals to propagate through the network, and read the output values.

When we were talking about perceptrons, we applied a gradient descent algorithm for optimizing the weights. Back-propagation uses this idea of gradient descent to train a neural network, based on the chain rule in calculus.

Assume that the last output layer has only one unit, so we are working with a regression problem. Later we will see how this can be extended to more output units and thus turn into a classification problem.

[[File:backpropagation.png|300px|]]

Note that we make a distinction between the output weights <math>\displaystyle (w_i)</math> and the hidden weights <math>\displaystyle (u_i)</math>.
<br><br>Within each unit we have a function <math>\displaystyle z_i=\sigma(a_i)</math> that takes input <math>\displaystyle a_i</math> (a linear sum of the previous level) and outputs <math>\displaystyle z_i</math>. The <math>\displaystyle z_i</math> are the inputs into the final output of the model <math>\Rightarrow \hat y=\sum_{i=1}^p w_i z_i</math>

We can find the error of the neural network output by evaluating the squared difference between the true value and the resulting output <math>\Rightarrow \displaystyle \mathrm{err}=||y-\hat y ||^2 </math>

<br>'''First find the derivative of the model error with respect to the output weights <math>\displaystyle w_i</math>'''<br><math>\frac{\partial \mathrm{err}}{\partial w_i}=\frac{\partial \mathrm{err}}{\partial \hat y} \cdot \frac{\partial \hat y}{\partial w_i}</math>
<br><math>\frac{\partial \mathrm{err}}{\partial w_i}=2(y-\hat y) \cdot z_i</math>

<br>'''Now we need to find the derivative of the model error with respect to the hidden weights <math>\displaystyle u_{jl}</math>'''
<br>Consider the following diagram that opens up the hidden layers of the neural network:

[[File:propagationhidden.png|300px|]]

''(Note: the roles of i and j are reversed in the figure.)''

Notice that the weighted sum of the outputs of the perceptrons at layer <math>\displaystyle l</math> forms the inputs into the perceptrons at layer <math>\displaystyle j</math>, and so on for all hidden layers.

So, using the chain rule
<br><math>\frac{\partial \mathrm{err}}{\partial u_{jl}}=\frac{\partial \mathrm{err}}{\partial a_j} \cdot \frac{\partial a_j}{\partial u_{jl}}</math>
<br><math>\frac{\partial \mathrm{err}}{\partial u_{jl}}=\delta_j \cdot z_l</math>

Note that a change in <math>\,a_j</math> causes changes in all <math>\,a_i</math> in the next layer, on which the error is based, so we need to sum over <math>\,i</math> in the chain:
<math>\delta_j = \frac{\partial \mathrm{err}}{\partial a_j} = \sum_i \frac{\partial \mathrm{err}}{\partial a_i} \cdot \frac{\partial a_i}{\partial a_j} =\sum_i \delta_i \cdot \frac{\partial a_i}{\partial a_j}</math>
<br><math>\,\frac{\partial a_i}{\partial a_j}=\frac{\partial a_i}{\partial z_j} \cdot \frac{\partial z_j}{\partial a_j}=u_{ij} \cdot \sigma'(a_j)</math>, using the activation function <math>\,\sigma(\cdot)</math>.

So <math>\delta_j = \sum_i \delta_i \cdot u_{ij} \cdot \sigma'(a_j)</math>
<br><math>\delta_j = \sigma'(a_j)\sum_i \delta_i \cdot u_{ij}</math>

We can propagate the error calculated at the output back through the previous layers and adjust the weights to minimize the error.

A Back-Propagation neural network is a good method in the following situations:
*The problem is very complex, the number of input or output data points is very large, and we have no idea how to relate the input to the output.
*The solution varies over time within the bounds of the given input and output data, or the output is not easy to measure.
==Neural Networks (NN) - October 30, 2009 ==
=== Back-propagation ===
The idea is that we first feed an input (we can normalize the data before feeding) from the training set to the Neural Network, then find the error rate at the output and then we propagate the error to previous layers and for each edge of weight <math>\,u_{ij}</math> we find <math>\frac{\partial \mathrm{err}}{\partial u_{ij}}</math>. Having the error rates at hand we adjust the weight of each edge by taking steps proportional to the negative of the gradient to decrease the error at output. The next step is to apply the next input from the training set and go through the described adjustment procedure.
The overview of Back-propagation algorithm:
#Feed a point <math>\,x</math> in the training set to the network, and find the output of all the nodes.
#Evaluate <math>\,\delta_k=y_k-\hat{y_k}</math> for all output units, where <math>y_k</math> is the target output and <math>\hat{y_k}</math> is the output actually produced by the network.
#By propagating to the previous layers evaluate all <math>\,\delta_j</math>s for hidden units: <math>\,\delta_j=\sigma'(a_j)\sum_i \delta_i u_{ij}</math> where <math>i</math> is associated to the previous layer.
#Using <math>\frac{\partial \mathrm{err}}{\partial u_{jl}} = \delta_j\cdot z_l</math> find all the derivatives.
#Adjust each weight by taking steps proportional to the negative of the gradient: <math>u_{jl}^{\mathrm{new}} \leftarrow u_{jl}^{\mathrm{old}} -\rho \frac{\partial \mathrm{err}}{\partial u_{jl}}</math>
#Feed the next point in the training set and repeat the above steps.
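The following is a minimal R sketch (our own illustrative code, not from the lecture) of the steps above for a network with one hidden layer of sigmoid units and a single linear output unit, trained with squared error. All names (<code>U</code>, <code>w</code>, <code>rho</code>, etc.) are ours; for convenience we take the gradient of one half of the squared error, so the minus-gradient step appears directly in the updates.

  set.seed(3)
  sigma       <- function(a) 1 / (1 + exp(-a))
  sigma_prime <- function(a) sigma(a) * (1 - sigma(a))
  # Toy regression data: 2 inputs, 1 output
  n <- 100
  X <- matrix(rnorm(2 * n), ncol = 2)
  y <- sin(X[, 1]) + 0.5 * X[, 2]
  p   <- 5                                      # number of hidden units
  U   <- matrix(runif(p * 2, -0.1, 0.1), p, 2)  # hidden weights u_{jl}, small random start
  w   <- runif(p, -0.1, 0.1)                    # output weights w_i
  rho <- 0.05                                   # learning rate
  for (epoch in 1:200) {
    for (i in sample(n)) {
      x    <- X[i, ]
      a    <- U %*% x               # inputs a_j to the hidden units (here z_l = x_l)
      z    <- sigma(a)              # hidden outputs z_j
      yhat <- sum(w * z)            # network output
      delta_out <- yhat - y[i]                       # derivative of (1/2) squared error at the output
      delta_hid <- sigma_prime(a) * (w * delta_out)  # delta_j = sigma'(a_j) * sum_i delta_i u_{ij}
      w <- w - rho * delta_out * as.numeric(z)       # step along the negative gradient
      U <- U - rho * delta_hid %*% t(x)              # d err / d u_{jl} = delta_j * z_l
    }
  }

Each pass over the shuffled training set corresponds to repeating steps 1 through 6 above once per training point.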


Advantages of Back-propagation:<br />
*It reduces the cost of computing derivatives by a factor of the number of derivatives to be calculated when minimizing the error.<br />
*It allows higher degrees of nonlinearity and precision to be applied to problems.

==== How to initialize the weights ====
This still leaves the question of how to initialize the weights <math>\,u_{ij}, w_i</math>.  The method of choosing weights mentioned in class was to randomize the weights before the first step. This is not likely to be near the optimal solution in every case, but it is simple to implement. To be more specific, random values near zero (usually from [-1,1]) are a good choice for the initial weights. In this case, the model evolves from a nearly linear one to a nonlinear one, as desired. An alternative is to use an orthogonal least squares method to find the initial weights <ref>http://www.mitpressjournals.org/doi/abs/10.1162/neco.1995.7.5.982</ref>.  Regression is performed on the weights and output by using a linear approximation of <math>\,\sigma(a_i)</math>, which finds optimal weights in the linear model.  Back-propagation is used afterward to find the optimal solution, since the NN is non-linear.
 
Why should all initial weights be randomized and small?<br />
*Because the error back-propagated through the network is proportional to the values of the weights, if all the weights are the same then the back-propagated errors will be the same as well, causing all of the weights to be updated by the same amount. Thus, identical initial weights prevent the network from learning.<br />
*Since the weight updates in the back-propagation algorithm are proportional to the derivative of the activation function, it is important to consider how the net input affects its value. The derivative is at its maximum when the activation function equals 0.5 and approaches its minimum as the activation function approaches 0 or 1, in which case the associated weights change very little. Thus, if we choose small initial weights, the activation function stays close to the region of maximal weight change.


==== How to set learning rates ====
The learning rate <math>\,\rho</math> is usually a constant.

If we use on-line learning, then as a form of stochastic approximation, <math>\,\rho</math> should decrease as the iterations increase.

In typical feedforward NNs with hidden units, the objective function has many local and global optima, so the optimal learning rate often changes dramatically during the training process.
The larger the learning rate, the larger the weight changes on each epoch, and the quicker the network learns. However, the size of the learning rate also influences whether the network achieves a stable solution. Choosing too large a learning rate may make the system unstable and cause the weights and the objective function to diverge, while too small a learning rate may lead to a very slow convergence rate (a very long learning phase). The advantage of a small learning rate, however, is that it can guarantee convergence. Thus, it is generally better to choose a relatively small learning rate to ensure stability; usually <math>\,\rho</math> is chosen between 0.01 and 0.7.

If the learning rate is appropriate, the algorithm is guaranteed to converge to a local minimum, but not necessarily to a global minimum, which would be better. Furthermore, there can exist many local minima.

==== How to determine the number of hidden units ====

Here we mainly discuss how to estimate the number of hidden units at the very beginning. Obviously, we should then adjust it to be more precise using cross-validation, leave-one-out, or other complexity control methods.

Basically, if the patterns are well separated, a few hidden units are enough. If the patterns are drawn from some highly complicated mixture model, more hidden units are needed.

In fact, the number of hidden units determines the size of the model, and therefore the total number of weights in the model. Typically, the number of weights should not be larger than the number of training data points, say N. Thus, N/10 is sometimes a good choice. In practice, however, many well-performing models use more hidden units.
=== Dimensionality reduction application ===
[[File:NN-bottelneck.png|350px|thumb|right|Figure 1: Bottleneck configuration for applying dimensionality reduction.]]
One possible application of Neural Networks is to perform dimensionality reduction, like other techniques, e.g., PCA, MDS, LLE and Isomap.

Consider the following configuration as shown in figure 1:
As we go forward through the layers of this Neural Network, the number of nodes is reduced until we reach a layer whose number of nodes is the desired dimensionality. (At the very first few layers the number of nodes need not be strictly decreasing, as long as the network eventually reaches a layer with fewer nodes.) From this bottleneck layer onward, the previous layers are mirrored, so at the output layer we have the same number of units as in the input layer. Now note that if we feed the network with a point and get an output approximately equal to that input, then the input has been reconstructed at the output from the middle-layer units alone. So the output of the middle-layer units can represent the input in fewer dimensions.

To train this Neural Network, we feed the network with a training point and, through back-propagation, we adjust the network weights based on the error between the input and the reconstruction at the output layer. Our low-dimensional mapping is the observed output of the middle layer. Data reconstruction consists of putting the low-dimensional data through the second half of the network.
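As a rough sketch of the reconstruction objective (our own example, not from the lecture), the R code below uses the <code>nnet</code> package to fit a single-hidden-layer network that is trained to reproduce its own input; the two hidden (bottleneck) units then play the role of the low-dimensional representation. The data set and all settings are made up for illustration, and a single hidden layer is of course only the simplest version of the bottleneck architecture described above.

  library(nnet)
  set.seed(4)
  t <- runif(200, 0, 2 * pi)
  X <- cbind(cos(t), sin(t), 0.1 * rnorm(200))   # data lying near a 1-d curve embedded in 3-d
  # 3 inputs -> 2 hidden (bottleneck) units -> 3 linear outputs, trained to reproduce X
  fit  <- nnet(X, X, size = 2, linout = TRUE, maxit = 1000, trace = FALSE)
  Xhat <- predict(fit, X)                # reconstructions produced through the bottleneck
  mean((X - Xhat)^2)                     # reconstruction error

The hidden-unit activations, which can be recovered from the fitted weights, give the 2-dimensional codes; deep autoencoders used in practice add several mirrored layers on each side of the bottleneck.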


=== Deep Neural Network ===
Back-propagation in practice may not work well when there are too many hidden layers, since the <math>\,\delta</math> may become negligible and the errors vanish. This is a numerical problem, where it is difficult to estimate the errors. So in practice configuring a Neural Network with Back-propagation involves some subtleties.
Deep Neural Networks became popular two or three years ago. Deep Neural Network training algorithms deal with the training of a Neural Network with a large number of layers.

The approach to training a deep network is to assume at first that the network has only two layers and train these two layers; after that we train the next two layers, and so on.

Although we know the input and we expect a particular output, we do not know the correct output of the hidden layers, and this is the issue that the algorithm mainly deals with.
There are two major techniques to resolve this problem: using a Boltzmann machine to minimize an energy function, which is inspired by the theory in atomic physics concerning the most stable condition; or somehow finding out which output of the second layer is most likely to lead us to the expected output at the output layer.

==== Difficulties of training deep architecture <ref>{{Cite journal | title = Exploring Strategies for Training Deep Neural Networks | url = http://jmlr.csail.mit.edu/papers/volume10/larochelle09a/larochelle09a.pdf | year = 2009 | journal = Journal of Machine Learning Research | page = 1-40 | volume = 10 | last1 = Larochelle | first1 =  H. | last2 =  Bengio | first2 =  Y. | last3 = Louradour | first3 = J. | last4 = Lamblin | first4 = P. }}</ref> ====

Given a particular task, a natural way to train a deep network is to frame it as an optimization problem by specifying a supervised cost function on the output layer with respect to the desired target and use a gradient-based optimization algorithm in order to adjust the weights and biases of the network so that its output has low cost on samples in the training set. Unfortunately, deep networks trained in that manner have generally been found to perform worse than neural networks with one or two hidden layers.

We discuss two hypotheses that may explain this difficulty. The first one is that gradient descent can easily get stuck in poor local minima (Auer et al., 1996) or plateaus of the non-convex training criterion. The number and quality of these local minima and plateaus (Fukumizu and Amari, 2000) clearly also influence the chances for random initialization to be in the basin of attraction (via gradient descent) of a poor solution. It may be that with more layers, the number or the width of such poor basins increases. To reduce the difficulty, it has been suggested to train a neural network in a constructive manner in order to divide the hard optimization problem into several greedy but simpler ones, either by adding one neuron (e.g., see Fahlman and Lebiere, 1990) or one layer (e.g., see Lengellé and Denoeux, 1996) at a time. These two approaches have been demonstrated to be very effective for learning particularly complex functions, such as a very non-linear classification problem in 2 dimensions. However, these are exceptionally hard problems, and for learning tasks usually found in practice, this approach commonly overfits.

This observation leads to a second hypothesis. For high capacity and highly flexible deep networks, there actually exist many basins of attraction in the parameter space (i.e., yielding different solutions with gradient descent) that can give low training error but that can have very different generalization errors. So even when gradient descent is able to find a (possibly local) good minimum in terms of training error, there are no guarantees that the associated parameter configuration will provide good generalization. Of course, model selection (e.g., by cross-validation) will partly correct this issue, but if the number of good generalization configurations is very small in comparison to good training configurations, as seems to be the case in practice, then it is likely that the training procedure will not find any of them. But, as we will see in this paper, it appears that the type of unsupervised initialization discussed here can help to select basins of attraction (for the supervised fine-tuning optimization phase) from which learning good solutions is easier both from the point of view of the training set and of a test set.

===Neural Networks in Practice===
Now that we know so much about Neural Networks, what are suitable real world applications? Neural Networks have already been successfully applied in many industries.

Since neural networks are good at identifying patterns or trends in data, they are well suited for prediction or forecasting needs, such as customer research, sales forecasting, risk management and so on.

Take a specific marketing case for example. A feedforward neural network was trained using back-propagation to assist the marketing control of airline seat allocations. The neural approach was adaptive to the rule. The system is used to monitor and recommend booking advice for each departure.

=== Issues with Neural Network ===
When Neural Networks were first introduced, they were thought to be modeling human brains, hence they were given the fancy name "Neural Network". But now we know that they are just logistic regression layers stacked on top of each other and have nothing to do with the real functioning principles of the brain.

We do not know why deep networks turn out to work quite well in practice. Some people claim that they mimic the human brain, but this is unfounded. As a result of these kinds of claims it is important to keep the right perspective on what this field of study is trying to accomplish. For example, the goal of machine learning may be to mimic the 'learning' function of the brain, but not necessarily the processes the brain uses to learn.

As for the algorithm, since the objective does not have a convex form, we still face the problem of local minima, although people have devised other techniques to avoid this dilemma.

In sum, Neural Networks lack a strong learning theory to back up their "success", so it is hard to apply and adjust them wisely. Partly for this reason, they are no longer a very active research area in machine learning. Nevertheless, NNs still have wide applications in engineering fields such as control.

===BUSINESS APPLICATIONS OF NEURAL NETWORKS===

Neural networks are increasingly being used in real-world business applications and, in some cases, such as fraud detection, they have already become the method of choice. Their use for risk assessment is also growing and they have been employed to visualize complex databases for marketing segmentation. This method covers a wide range of business interests — from finance management, through forecasting, to production. The combination of statistical, neural and fuzzy methods now enables direct quantitative studies to be carried out without the need for rocket-science expertise.

* On the Use of Neural Networks for Analysis of Travel Preference Data
* Extracting Rules Concerning Market Segmentation from Artificial Neural Networks
* Characterization and Segmenting the Business-to-Consumer E-Commerce Market Using Neural Networks
* A Neurofuzzy Model for Predicting Business Bankruptcy
* Neural Networks for Analysis of Financial Statements
* Developments in Accurate Consumer Risk Assessment Technology
* Strategies for Exploiting Neural Networks in Retail Finance
* Novel Techniques for Profiling and Fraud Detection in Mobile Telecommunications
* Detecting Payment Card Fraud with Neural Networks
* Money Laundering Detection with a Neural-Network
* Utilizing Fuzzy Logic and Neurofuzzy for Business Advantage

=== '''Example of under and overfitting in R''' ===

To give further intuition of over- and underfitting, consider this example.  A simple quadratic data set with some random noise is generated, and then polynomials of varying degrees are fitted.  The errors for the training set and a test set are calculated.
[[File:Curvefitting-rex2.png|250px|thumb|right|Polynomial fits to curved data set.]]

  >> x <- rnorm(200,0,1)
  >> y <- x^2-0.5*x+rnorm(200,0,0.3)
  >> xtest <- rnorm(50,1,1)
  >> ytest <- xtest^2-0.5*xtest+rnorm(50,0,0.3)
  >> p1 <- lm(y~x)
  >> p2 <- lm(y ~ poly(x,2))
  >> pn <- lm(y ~ poly(x,10))
  >> psi <- lm(y~I(sin(x))+I(cos(x)))

: <code>x</code> values for the training set are based on a <math>\,N(0,1)</math> distribution, while the test set has a <math>\,N(1,1)</math> distribution. <code>y</code> values are determined by <math>\,y = x^2 - 0.5x + N(0,0.3)</math>, a quadratic function with some random variation. Polynomial least squares fits of degree 1, 2, and 10 are calculated, as well as a fit of <math>\,\sin(x)+\cos(x)</math>.

  >> > # calculate the mean squared error of degree 1 poly
  >> > sum((y-predict(p1,data.frame(x)))^2)/length(y)
  >> [1] 1.576042
  >> > sum((ytest-predict(p1,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 7.727615
: Training and test mean squared errors for the linear fit.  These are both quite high - and since the data is non-linear, the different mean value of the test data increases the error quite a bit.
  >> > # calculate the mean squared error of degree 2 poly
  >> > sum((y-predict(p2,data.frame(x)))^2)/length(y)
  >> [1] 0.08608467
  >> > sum((ytest-predict(p2,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 0.08407432
: This fit is far better - and there is not much difference between the training and test error, either.
  >> > # calculate the mean squared error of degree 10 poly
  >> > sum((y-predict(pn,data.frame(x)))^2)/length(y)
  >> [1] 0.07967558
  >> > sum((ytest-predict(pn,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 156.7139
: With a high-degree polynomial, the training error continues to decrease, but not by much - and the test set error has risen again.  The overfitting makes it a poor predictor.  As the degree of the polynomial rises further, numerical accuracy becomes an issue - and a good fit is not even consistently produced for the training data.
  >> > # calculate mse of sin/cos fit
  >> > sum((y-predict(psi,data.frame(x)))^2)/length(y)
  >> [1] 0.1105446
  >> > sum((ytest-predict(psi,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 1.320404
: Fitting a function of the form sin(x)+cos(x) works pretty well on the training set, but because it is not the real underlying function, it fails on test data which does not lie on the same domain.

== ''' Cross-Validation (CV) - Introduction ''' ==

[[File:Cv.jpg|200px|thumb|right|Figure 1: Illustration of Cross-Validation]]
Cross-Validation is used to estimate the error rate of a classifier with respect to test data rather than the data used to fit the model. Here is a general introduction to CV:

<math>\hookrightarrow</math> We have a set of collected data for which we know the proper labels.

<math>\hookrightarrow</math> We divide it into 2 parts, Training data (T) and Validation data (V).

<math>\hookrightarrow</math> For our calculation, we pretend that we do not know the labels of V and we use the data in T to train the classifier.

<math>\hookrightarrow</math> We then estimate an empirical error rate on V: the model has not seen V, and since we do know the proper labels of all elements in V we can count how many were misclassified.

CV has different implementations which can reduce the variance of the calculated error rate, but sometimes with a tradeoff of a higher calculation time.

== ''' Complexity Control - Nov 4, 2009''' ==

== Cross-validation ==
[[File:Cross-validation.png|350px|thumb|right|Figure 1: Classical/Standard cross-validation]]
Cross-validation is the simplest and most widely used method to estimate the true error. It comes from the observation that although the training error always decreases with increasing model complexity, the test error starts to increase from a certain point, which is noted as overfitting (see [[#prediction-error|figure 2]]). Since the test error estimates the MSE (mean squared error) best, people came up with the idea of randomly separating the data set into a training set and a validation set, which is used to simulate a test set.

Then, we only use the section of our data marked as the "training set" to train our algorithm, while keeping the section of our data marked as the "validation set" untouched. As a result, the validation set will be totally unknown to the trained model. The error rate is then estimated by:

<math>\hat L(h) = \frac{1}{|\nu|}\sum_{X_i \in \nu} I(h(x_i) \neq y_i)</math>, where <math>\,\nu</math> is the validation set and <math>\,|\nu|</math> is its cardinality.

When we change the complexity, the error generated by the validation set will have the same behavior as the test set, so we are able to choose the best parameters to get the lowest error.
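A tiny R sketch (our own example, not from the lecture) of the hold-out estimate <math>\hat L(h)</math> above, using logistic regression as a stand-in classifier; the data and split are made up:

  set.seed(5)
  n <- 200
  x <- matrix(rnorm(2 * n), ncol = 2)
  y <- as.numeric(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0)   # two-class labels
  idx   <- sample(n, n / 2)                                   # random split into T and V
  train <- data.frame(x = x[idx, ],  y = y[idx])
  valid <- data.frame(x = x[-idx, ], y = y[-idx])
  h <- glm(y ~ ., data = train, family = binomial)            # train on T only
  pred <- as.numeric(predict(h, newdata = valid, type = "response") > 0.5)
  mean(pred != valid$y)                                       # empirical error rate on V

Repeating this over several complexities (e.g., different sets of features) and picking the one with the lowest validation error is exactly the model-selection idea described above.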
=== K-fold Cross-validation ===
[[File:k-fold.png|350px|thumb|right|Figure 2: K-fold cross-validation]]
Above is the simplest form of cross-validation. However, in reality, data are hard to collect and we usually suffer from the curse of dimensionality, which calls for an even bigger data set. Consequently, we may not be able to afford to sacrifice part of the limited resources. To address this problem, we divide the data set into <math>\,K</math> subsets of roughly equal size. The usual choice is <math>\,K = 10</math>.

Generally, how to choose <math>\,K</math>:

if <math>\,K=n</math> (leave one out): low bias, high variance;

if <math>\,K</math> is small, say 2-fold or 5-fold: high bias, low variance.

For every <math>\,k</math>th <math>( \,k \in [ 1, K ] )</math> part, we use the other <math>\,K-1</math> parts to fit the model and test on the <math>\,k</math>th part to estimate the prediction error <math>\hat L_k</math>. The overall estimate is then

<math>\hat L(h) = \frac{1}{K}\sum_{k=1}^K\hat L_k</math>

For example, suppose we want to fit a polynomial model to the data set and split the set into four equal subsets as shown in Figure 2. First we choose the degree to be 1, i.e. a linear model. Next we use the first three subsets as the training set and the last as the validation set, then the 1st, 2nd and 4th subsets as the training set and the 3rd as the validation set, and so on, until each subset has been the validation set once (so all observations are used for both training and validation). After we get <math>\hat L_1, \hat L_2, \hat L_3, \hat L_4</math>, we can calculate the average <math>\hat L</math> for the degree-1 model. Similarly, we can estimate the error for a degree-n model and trace out a curve of error versus degree. Now we are able to choose the degree which corresponds to the minimum error. We can also use this method to find the optimal number of hidden units of a neural network: begin with 1 unit, then 2, 3 and so on, and pick the number of hidden units with the lowest average error.

=== Leave-one-out Cross-validation ===
Leave-one-out cross-validation involves using all but one data point of the original training data set to train the model, then using the data point that was initially left out to estimate the true error. By repeating this process for every data point in the original data set, we can obtain a good estimate of the true error.

In other words, leave-one-out cross-validation is k-fold cross-validation in which we set the number of subsets <math>\,K</math> equal to the cardinality of the whole data set.

In the above example, we can see that k-fold cross-validation can be computationally expensive: for every possible value of the parameter, we must train the model <math>\,K</math> times. This cost is even more obvious in leave-one-out cross-validation, where we must train the model <math>\,n</math> times, where <math>\,n</math> is the number of data points in the data set.

Fortunately, when adding data points to the classifier is reversible, calculating the difference between two classifiers is computationally more efficient than calculating the two classifiers separately. So, if the classifier fitted on all the data points is known, we can simply undo the changes due to one data point at a time, <math>\,n</math> times, to calculate the leave-one-out cross-validation error rate.

== Regularization for Neural Network — Weight Decay ==
[[File:figure 2.png|350px|thumb|right|Figure 1: activation function]]
Weight decay training is suggested as an implementation for achieving a robust neural network which is insensitive to noise. Since the number of hidden units in a NN is usually decided by certain domain knowledge, the model may easily run into the problem of overfitting.

It can be seen from Figure 1 that when the weights are in the vicinity of zero, the operative part of the activation function shows linear behavior. The NN then collapses to an approximately linear model. Since a linear model is the simplest model, we can avoid overfitting by constraining the weights to be small. This also gives us a hint to initialize the random weights close to zero.

Formally, we discourage large (highly nonlinear) weights by adding a penalty term to the error function. The regularized error function becomes:

<math>\,REG = \mathrm{err} + \lambda(\sum_{i}|w_i|^2 + \sum_{jk}|u_{jk}|^2)</math>, where <math>\,\mathrm{err}</math> is the original error in back-propagation, <math>\,w_i</math> are the weights of the output layer, and <math>\,u_{jk}</math> are the weights of the hidden layers.

As in back-propagation, we take partial derivatives with respect to the weights:

<math>\frac{\partial REG}{\partial w_i} = \frac{\partial \mathrm{err}}{\partial w_i} + 2\lambda w_i</math>

<math>\frac{\partial REG}{\partial u_{jk}} = \frac{\partial \mathrm{err}}{\partial u_{jk}} + 2\lambda u_{jk}</math>

<math>w^{new} \leftarrow w^{old} - \rho\left(\frac{\partial \mathrm{err}}{\partial w} + 2\lambda w\right)</math>

<math>u^{new} \leftarrow u^{old} - \rho\left(\frac{\partial \mathrm{err}}{\partial u} + 2\lambda u\right)</math>

Note that here <math>\,\lambda</math> serves as a trade-off parameter, tuning between the error rate and the linearity; in practice we may set <math>\,\lambda</math> by cross-validation.  The tuning parameter is important since weights of zero will lead to zero derivatives and the algorithm will not change.  On the other hand, starting with weights that are too large means starting with a nonlinear model, which can often lead to poor solutions. <ref>Trevor Hastie, Robert Tibshirani, Jerome Friedman, Elements of Statistical Learning (Springer 2009) pp.398</ref>

== Complexity Control October 30, 2009 ==

[[File:overfitting-model.png|500px|thumb|right|Figure 2. The overfitting model passes through all the points of the training set, but has poor predictive power for new points. In exchange, the line model has some error on the training points but has extracted the main characteristics of the training points, and has good predictive power.]]
There are [http://academicearth.org/lectures/underfitting-and-overfitting two issues] that we have to avoid in Machine Learning:
#[http://en.wikipedia.org/wiki/Overfitting Overfitting]
#Underfitting

Overfitting occurs when our model is so complex, with so many degrees of freedom, that we can learn every detail of the training set. Such a model will have very high precision on the training set but will show very poor ability to predict outcomes for new instances, especially outside the domain of the training set. Overfitting is dangerous: it easily leads to predictions far beyond the range of the training data and produces wild predictions in multilayer perceptrons, even with noise-free data. The best way to avoid overfitting is to use lots of training data.

In a Neural Network, if the depth is too great, the network will have many degrees of freedom and will learn every characteristic of the training data set. That means it will show a very precise outcome on the training set but will not be able to generalize the commonality of the training set to predict the outcome of new cases.

Underfitting occurs when the model we picked to describe the data is not complex enough and has a high error rate on the training set.
There is always a trade-off. If our model is too simple, underfitting can occur, and if it is too complex, overfitting can occur.

'''Example'''
#Consider the example shown in the figure. We have a training set and we want to find the model which fits it best. We can find a polynomial of high degree which passes through almost all the points in the training set. But, in fact, the training set comes from a line model. The problem now is that although the complex model has less error on the training set, it diverges from the line in regions where we have no training points. Because of that, the high-degree polynomial has very poor predictive power on test cases. This is an example of an overfitting model.
#Now consider a training set which comes from a polynomial of degree two. If we model this training set with a polynomial of degree one, our model will have a high error rate on the training set and is not complex enough to describe the problem.
#Consider a simple classification example.  If our classification rule takes as input only the colour of a fruit and concludes that it is a banana, then it is not a good classifier.  The reason is that just because a fruit is yellow does not mean that it is a banana.  We can add complexity to our model to make it a better classifier by considering more features typical of bananas, such as size and shape.  If we continue to make our model more and more complex in order to improve our classifier, we will eventually reach a point where the quality of our classifier no longer improves, i.e., we have overfit the data.  This occurs when we have considered so many features that we have perfectly described the existing bananas; but if presented with a new banana of a slightly different shape than the existing ones, for example, it cannot be detected.  This is the tradeoff; what is the right level of complexity?

== Complexity Control - Nov 2, 2009 ==

Overfitting occurs when the model becomes too complex and underfitting occurs when it is not complex enough, both of which are undesirable.  To control complexity, it is necessary to make assumptions about the model before fitting the data.  Assumptions that we can make for a model are, for example, that it belongs to a family of polynomials or that it is a neural network. There are other ways as well.
[[File:Family_of_polynomials.jpg|200px|thumb|right|Figure 1: An example of a model with a family of polynomials]]
We do not want a model to get too complex, so we control it by making an assumption on the model. With complexity control, we want a model or a classifier with a low error rate. The lecture will explain the [http://academicearth.org/lectures/bias-variance-tradeoff tradeoff between Bias and variance] for model complexity control.


=== '''How do we choose a good classifier?''' ===

Our goal is to find a classifier that minimizes the true error rate <math>\ L(h)</math>.

<math>\ L(h)=Pr\{h(x)\neq y\}</math>

Recall the empirical error rate

<math>\ \hat L_{h}= \frac{1}{n} \sum_{i=1}^{n} I(h(x_{i}) \neq y_{i})</math>

<math>\,h</math> is a classifier and we want to minimize its error rate. So we apply <math>\displaystyle h</math> to <math>\displaystyle x_1</math> through <math>\displaystyle x_n</math> and take the average of the indicators to get an empirical estimate of the probability that <math>h(x_{i}) \neq y_{i}</math>.

<span id="prediction-error">[[File:Prediction_Error.jpg|200px|thumb|right|Figure 2]]</span>
There is a downward bias to this estimate, meaning that it is typically less than the true error rate.

If we move our complexity from low to high, the training error rate is always decreasing. When we apply our model to test data, the error rate will decrease up to a point, but then it will increase, since the model has not seen this data before.  This can be explained as follows: the training error decreases as we fit the model better by increasing its complexity, but, as we have seen, such a complex model will not generalize well, resulting in a larger test error.

We use our test data (the test sample line shown in Figure 2) to get an empirical error rate.
The right complexity is where the error rate on the test data is at its minimum; this is one idea behind complexity control.
[[File:Bias.jpg|200px|thumb|left|Figure 3]]
 
We assume that we have samples <math>\,X_1, . . . ,X_n</math> that follow some (possibly unknown) distribution. We want to estimate a parameter <math>\,f</math> of the unknown distribution. This parameter may be the mean <math>\,E(X_i)</math>, the variance <math>\,var(X_i)</math> or some other quantity.
 
The unknown parameter <math>\,f</math> is a fixed real number <math>f\in R</math>. To estimate it, we use an estimator which is a
function of our observations, <math>\hat{f}(X_1,...,X_n)</math>.
 
<math>Bias (\hat{f}) = E(\hat{f}) - f</math>
 
<math>MSE (\hat{f}) = E[(\hat{f} - f)^2]=Variance(\hat f)+Bias^2(\hat f)</math>
 
<math>Variance (\hat{f}) = E[(\hat{f} - E(\hat{f}))^2]</math>
 
One property we desire of the estimator is that it is correct on average, that is, it is unbiased. <math>Bias (\hat{f}) = E(\hat{f}) - f=0</math>.
However, unbiasedness is not the only property we care about; the mean squared error often matters more. In statistics, there are problems for which it may be good to use an estimator with a small bias. In some cases, an estimator with a small bias may have a smaller mean squared error, or may be median-unbiased (rather than mean-unbiased, the standard unbiasedness property). The property of median-unbiasedness is invariant under transformations, while the property of mean-unbiasedness may be lost under nonlinear transformations. For example, if we use an unbiased estimator with a large mean squared error to estimate the parameter, we run a high risk of a large error. In contrast, a biased estimator with a small mean squared error can substantially improve the precision of our prediction.
 
Hence, our goal is to minimize <math>MSE (\hat{f})</math>.
 
From figure 3, we can see that the relationship of the three parameters is:
<math>MSE (\hat{f})=Variance (\hat{f})+Bias ^2(\hat{f}) </math>. Thus given the Mean Squared Error (MSE), if we have a low bias, then we will have a high variance and vice versa.
 
Test error is a good estimate of the MSE. We want a reasonable balance between bias and variance (neither too high), even though the resulting estimator will then have some bias.
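
As a small illustrative check of this decomposition (a sketch added here, not part of the lecture; the estimators and sample sizes are chosen only for illustration), we can simulate two estimators of a normal mean, one unbiased and one shrunken, and verify numerically that <math>MSE \approx Variance + Bias^2</math> for each:

 # Simulated check that MSE = Variance + Bias^2 (estimating the mean of N(2,1) from 20 points)
 set.seed(1)
 f <- 2                                                   # true parameter
 est1 <- replicate(10000, mean(rnorm(20, f, 1)))          # unbiased estimator: sample mean
 est2 <- replicate(10000, 0.9 * mean(rnorm(20, f, 1)))    # biased estimator: shrunken mean
 c(bias = mean(est1) - f, var = var(est1), mse = mean((est1 - f)^2))
 c(bias = mean(est2) - f, var = var(est2), mse = mean((est2 - f)^2))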
 
 
Referring to Figure 2, overfitting happens after the point where training data (training sample line) starts to decrease and test data (test sample line) starts to increase. There are 2 main approaches to avoid overfitting:
 
1. Estimating error rate
 
<math>\hookrightarrow</math> Empirical training error is not a good estimation
 
<math>\hookrightarrow</math> Empirical test error is a better estimation
 
<math>\hookrightarrow</math> Cross-Validation is fast
 
<math>\hookrightarrow</math> Computing error bound (analytically) using some probability inequality.
 
We will not discuss computing the error bound in class; however, a popular method for doing this computation is called VC Dimension (short for Vapnik–Chervonenkis Dimension). Information can be found from [http://www.autonlab.org/tutorials/vcdim.html Andrew Moore] and [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.7171&rep=rep1&type=pdf Steve Gunn].
 
2. Regularization
 
<math>\hookrightarrow</math> Use of shrinkage method
 
<math>\hookrightarrow</math> Decrease the chance of overfitting by controlling the weights
 
=== '''Example of under and overfitting in R''' ===
 
To give further intuition of over and underfitting, consider this example.  A simple quadratic data set with some random noise is generated, and then polynomials of varying degrees are fitted.  The errors for the training set and a test set are calculated.
[[File:Curvefitting-rex2.png|250px|thumb|right|Polynomial fits to curved data set.]]
 
  >> x <- rnorm(200,0,1)
  >> y <- x^2-0.5*x+rnorm(200,0,0.3)
  >> xtest <- rnorm(50,1,1)
  >> ytest <- xtest^2-0.5*xtest+rnorm(50,0,0.3)
  >> p1 <- lm(y~x)
  >> p2 <- lm(y ~ poly(x,2))
  >> pn <- lm(y ~ poly(x,10))
  >> psi <- lm(y~I(sin(x))+I(cos(x)))
 
: <code>x</code> values for the training set are drawn from a <math>\,N(0,1)</math> distribution, while the test set uses a <math>\,N(1,1)</math> distribution.  <code>y</code> values are determined by <math>\,y = x^2 - 0.5x + \epsilon</math> with <math>\,\epsilon \sim N(0,0.3^2)</math>, a quadratic function with some random variation.  Polynomial least-squares fits of degree 1, 2, and 10 are calculated, as well as a fit of <math>\,\sin(x)+\cos(x)</math>.
 
  >> > # calculate the mean squared error of degree 1 poly
  >> > sum((y-predict(p1,data.frame(x)))^2)/length(y)
  >> [1] 1.576042
  >> > sum((ytest-predict(p1,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 7.727615
: Training and test mean squared errors for the linear fit.  These are both quite high - and since the data is non-linear, the different mean value of the test data increases the error quite a bit.
  >> > # calculate the mean squared error of degree 2 poly
  >> > sum((y-predict(p2,data.frame(x)))^2)/length(y)
  >> [1] 0.08608467
  >> > sum((ytest-predict(p2,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 0.08407432
: This fit is far better - and there is not much difference between the training and test error, either.
  >> > # calculate the mean squared error of degree 10 poly
  >> > sum((y-predict(pn,data.frame(x)))^2)/length(y)
  >> [1] 0.07967558
  >> > sum((ytest-predict(pn,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 156.7139
: With a high-degree polynomial, the training error continues to decrease, but not by much - and the test set error has risen again.  The overfitting makes it a poor predictor.  As the degree of the polynomial rises further, the accuracy of the computer becomes an issue - and a good fit is not even consistently produced for the training data.
  >> > # calculate mse of sin/cos fit
  >> > sum((y-predict(psi,data.frame(x)))^2)/length(y)
  >> [1] 0.1105446
  >> > sum((ytest-predict(psi,data.frame(x=xtest)))^2)/length(ytest)
  >> [1] 1.320404
: Fitting a function of the form sin(x)+cos(x) works pretty well on the training set, but because it is not the real underlying function, it fails on test data which doesn't lie on the same domain.
 
== ''' Cross-Validation (CV) - Introduction ''' ==
 
[[File:Cv.jpg|200px|thumb|right|Figure 1: Illustration of Cross-Validation]]
[http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29 Cross-Validation] is used to estimate the error rate of a classifier with respect to test data rather than data used in the model. Here is a general introduction to CV:
 
<math>\hookrightarrow</math> We have a set of collected data for which we know the proper labels
 
<math>\hookrightarrow</math> We divide it into 2 parts, Training data (T) and Validation data (V)
 
<math>\hookrightarrow</math> For our calculation, we pretend that we do not know the label of V and we use data in T to train the classifier
 
<math>\hookrightarrow</math> We estimate an empirical error rate on V: since the model has not seen V, and we know the true labels of all elements in V, we can count how many were misclassified.
 
CV has different implementations which can reduce the variance of the calculated error rate, but sometimes with a tradeoff of a higher calculation time.
 
== ''' Complexity Control - Nov 4, 2009''' ==
 
== Cross-validation ==
[[File:Cross-validation.png|350px|thumb|right|Figure 1: Classical/Standard cross-validation]]
[http://en.wikipedia.org/wiki/Cross-validation_(statistics) Cross-validation] is the simplest and most widely used method to estimate the true error. It comes from the observation that although training error always decreases with the increasing complexity of the model, the test error starts to increase from a certain point, which is noted as overfitting (see [[#prediction-error|figure 2]] above). Since test error estimates MSE (mean square error) best, people came up with the idea of dividing the data set into three parts: a training set, a validation set, and a test set. The training set is used to build the model, the validation set is used to decide the parameters and select the optimal model, and the test set is used to estimate the performance of the chosen model. A classical division is 50% for the training set and 25% each for the validation and test sets. All of them are randomly selected from the original data set. <br />
Training set: a set of examples used for learning: to fit the parameters of the classifier.<br />
Validation set: a set of examples used to tune the parameters of a classifier.<br />
Test set: a set of examples used only to assess the performance of a fully trained classifier.
 
Then, we only use the part of our data marked as the "training set" to train our algorithm, while keeping the remaining marked as the "validation set" untouched. As a result, the validation set will be totally unknown to the trained model. The error rate is then estimated by:
 
<math>\hat L(h) = \frac{1}{|\nu|}\sum_{(x_i,y_i) \in \nu} I(h(x_i) \neq y_i)</math>, where <math>\,I</math> is the indicator function and <math>\,|\nu|</math> is the cardinality of the validation set.
 
When we change the complexity, the error generated by the validation set will have the same behavior as the test set, so we are able to choose the best parameters to get the lowest error.
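
As a minimal sketch of this procedure (assuming a simple logistic-regression classifier on simulated data; none of these choices come from the lecture), the empirical error rate on the validation set can be computed as follows:

 # Sketch: estimate a classifier's error rate on a held-out validation set V
 set.seed(1)
 n <- 200
 x <- rnorm(n)
 y <- as.numeric(x + rnorm(n, 0, 0.5) > 0)                  # toy binary labels
 train <- sample(n, n/2); valid <- setdiff(1:n, train)      # random 50/50 split
 fit <- glm(y ~ x, family = binomial, subset = train)       # train on T only
 h <- as.numeric(predict(fit, data.frame(x = x[valid]), type = "response") > 0.5)
 mean(h != y[valid])                                        # empirical error rate on V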
 
 
=== K-fold Cross-validation ===
[[File:k-fold.png|350px|thumb|right|Figure 2: K-fold cross-validation]]
Above is the simplest form of complexity control. However, in reality it may be hard to collect data (and we often also suffer from the curse of dimensionality), so a larger data set may be hard to come by. Consequently, we may not be able to afford to sacrifice part of our limited data for a separate validation set. In this case we use another method that addresses this problem, K-fold cross-validation. The advantage of K-fold cross-validation is that all the examples in the dataset are eventually used for both training and validation. We divide the data set into <math>\,K</math> subsets roughly equal in size. The usual choice is <math>\,K = 10</math>.
 
Generally, how to choose <math>\,K</math>:
 
if <math>\,K=n</math>, leave one out, low bias, high variance.  Each subset contains a single element, so the model is trained with all except one point, and then validated using that point.
 
if <math>\,K</math> is small, say 2-fold or 5-fold, high bias, low variance.  Each subset contains approximately <math>\,\frac{1}{2}</math> or <math>\,\frac{1}{5}</math> of the data.
 
For every <math>\,k</math>th <math>( \,k \in [ 1, K ] )</math> part, we use the other <math>\,K-1</math> parts to fit the model and test on the <math>\,k</math>th part to estimate the prediction error <math>\hat L_k</math>, where
 
<math>\hat L(h) = \frac{1}{K}\sum_{k=1}^K\hat L_k</math>
 
For example, suppose we want to fit a polynomial model to the data set and split the set into four equal subsets as shown in Figure 2. First we choose the degree to be 1, i.e. a linear model. Next we use the first three sets as training sets and the last as validation set, then the 1st, 2nd, 4th subsets as training set and the 3rd as validation set, so on and so forth until all the subsets have been the validation set once (all observations are used for both training and validation). After we get <math>\hat L_1, \hat L_2, \hat L_3, \hat L_4</math>, we can calculate the average <math>\hat L</math> for degree 1 model. Similarly, we can estimate the error for n degree model and generate a simulating curve. Now we are able to choose the right degree which corresponds to the minimum error. Also, we can use this method to find the optimal unit number of hidden layers of neural networks. We can begin with 1 unit number, then 2, 3 and so on and so forth. Then find the unit number of hidden layers with lowest average error.
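
A compact sketch of this procedure in R (a 4-fold split used to pick the polynomial degree on simulated quadratic data, analogous to the earlier curve-fitting example; the data here are simulated only for illustration):

 # Sketch: 4-fold cross-validation to choose the polynomial degree
 set.seed(2)
 x <- rnorm(200); y <- x^2 - 0.5*x + rnorm(200, 0, 0.3)
 K <- 4
 fold <- sample(rep(1:K, length.out = length(x)))           # assign each point to a fold
 cv.err <- sapply(1:6, function(deg)
   mean(sapply(1:K, function(k) {
     fit <- lm(y ~ poly(x, deg), subset = (fold != k))      # train on the other K-1 folds
     mean((y[fold == k] - predict(fit, data.frame(x = x[fold == k])))^2)  # validate on fold k
   })))
 which.min(cv.err)                                          # degree with smallest average error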
 
=== Generalized Cross-validation ===
Let the vector of observed values be denoted by <math>\mathbf{y}</math> and the vector of fitted values by <math>\hat{\mathbf{y}}</math>. Then
 
<math>\hat{\mathbf{y}} = \mathbf{H}\mathbf{y}</math>, 
 
where the hat matrix is given by
 
<math>\mathbf{H} = \mathbf{X}( \mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T}</math>,
 
<math> \frac{1}{N}\sum_{i=1}^{N}[y_{i} - \hat f^{-i}(\mathbf{x}_{i})]^{2}=\frac{1}{N}\sum_{i=1}^{N}[\frac{y_{i}-\hat f(x_{i})}{1-\mathbf{H}_{ii}}]^{2}</math>,
 
Then the GCV approximation is given by
 
<math> GCV(\hat f) = \frac{1}{N}\sum_{i=1}^{N}[\frac{y_{i}-\hat f(x_{i})}{1-trace(\mathbf{H})/N}]^{2}</math>,
 
Thus, one of the biggest advantages of GCV is that the trace of <math>\mathbf{H}</math> is often easier to compute than the individual diagonal entries <math>\mathbf{H}_{ii}</math>.
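
As a rough sketch (reusing the simulated <code>x</code> and <code>y</code> from the K-fold example above; the degree-2 fit is only an illustration), the GCV score of a linear smoother can be computed directly from its hat matrix:

 # Sketch: GCV for a linear smoother y-hat = H y (here a degree-2 polynomial fit)
 X <- model.matrix(~ poly(x, 2))                            # design matrix
 H <- X %*% solve(t(X) %*% X) %*% t(X)                      # hat matrix
 yhat <- H %*% y
 N <- length(y)
 GCV <- mean(((y - yhat) / (1 - sum(diag(H)) / N))^2)       # GCV approximation
 GCV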
 
=== Leave-one-out Cross-validation ===
Leave-one-out cross-validation involves using all but one data point in the original training data set to train our model, then using the data point that we initially left out to estimate the true error. By repeating this process for every data point in our original data set, we can obtain a good estimate of the true error.
 
In other words, leave-one-out cross-validation is k-fold cross-validation in which we set the subset number <math>\,K</math> to be the cardinality of the whole data set.
 
In the above example, we can see that k-fold cross-validation can be computationally expensive: for every possible value of the parameter, we must train the model <math>\,K</math> times. This cost is even more obvious in leave-one-out cross-validation, where we must train the model <math>\,n</math> times, where <math>\,n</math> is the number of data points in the data set.
 
Fortunately, when adding data points to the classifier is reversible, calculating the difference between two classifiers is computationally more efficient than calculating the two classifiers separately.  So, if the classifier on all the data points is known, we simply undo the changes from a data point <math>\,K</math> times to calculate the leave-one-out cross-validation error rate.
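
For linear smoothers such as the polynomial fit above, this shortcut takes a particularly simple form: the leave-one-out residuals follow from a single fit via the <math>1-\mathbf{H}_{ii}</math> identity quoted in the GCV section. A small sketch (again reusing the simulated data above):

 # Sketch: leave-one-out CV error of a linear fit without refitting n times
 fit <- lm(y ~ poly(x, 2))
 loo <- mean((residuals(fit) / (1 - hatvalues(fit)))^2)
 loo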
 
How do we decide the number of folds?
For a large number of folds, the bias of the true-error-rate estimator will be small, but its variance and the computing time will be large. For a small number of folds, the opposite holds. When the dataset is large, 3-fold cross-validation will be enough, but if the dataset is very sparse we prefer to use leave-one-out.
 
== Regularization for Neural Network — Weight Decay ==
[[File:figure 2.png|350px|thumb|right|Figure 1: activation function]]
Weight decay training is suggested as an implementation for achieving a robust neural network which is insensitive to noise. Since the number of hidden layers in NN is usually decided by certain domain knowledge, it may easily get into the problem of overfitting.
 
It can be seen from Figure 1 that when the weight is in the vicinity of zero, the operative part of the activation function shows linear behavior. The NN then collapses to an approximately linear model. Note that a linear model is the simplest model, we can avoid overfitting by constraining the weights to be small. This gives us a hint to initialize the random weights to be close to zero.
 
Formally, we penalize nonlinear weights by adding a penalty term in the error function. Now the regularized error function becomes:
 
<math>\,REG = err + \lambda(\sum_{i}|w_i|^2 + \sum_{jk}|u_{jk}|^2)</math>, where <math>\,err</math> is the original error in back-propagation; <math>\,w_i</math> is the weights of the output layer; <math>\,u_{jk}</math> is the weights of the hidden layers.
 
Usually, a <math>\,\lambda</math> that is too large will make the weights <math>\,w_i</math> and <math>\,u_{jk}</math> too small. We can use cross-validation to estimate <math>\,\lambda</math>. Another approach to choosing <math>\,\lambda</math> is to train several networks with different amounts of decay and estimate the generalization error for each; then choose the <math>\,\lambda</math> that minimizes the estimated generalization error.
 
 
A similar penalty, weight elimination, is given by,
 
<math>\,REG = err + \lambda(\sum_{i}\frac{|w_i|^2}{1 + |w_i|^2} + \sum_{jk}\frac{|u_{jk}|^2}{1+|u_{jk}|^2})</math>.
 
As in back-propagation, we take partial derivative with respect to the weights:
 
<math>\frac{\partial REG}{\partial w_i} = \frac{\partial err}{\partial w_i} + 2\lambda w_i</math>
 
<math>\frac{\partial REG}{\partial u_{jk}} = \frac{\partial err}{\partial u_{jk}} + 2\lambda u_{jk}</math>
 
<math>w^{new} \leftarrow w^{old} - \rho\left(\frac{\partial err}{\partial w} + 2\lambda w\right)</math>
 
<math>u^{new} \leftarrow u^{old} - \rho\left(\frac{\partial err}{\partial u} + 2\lambda u\right)</math>
 
Note:<br />
here <math>\,\lambda</math> serves as a trade-off parameter, tuning between the error rate and the linearity. Actually, we may also set <math>\,\lambda</math> by cross-validation.  The tuning parameter is important since weights of zero will lead to zero derivatives and the algorithm will not change.  On the other hand, starting with weights that are too large means starting with a nonlinear model which can often lead to poor solutions. <ref>Trevor Hastie, Robert Tibshirani, Jerome Friedman, Elements of Statistical Learning (Springer 2009) pp.398</ref><br />
We can standardize or normalize the inputs and targets, or scale the penalty term by the standard deviations of all the inputs and targets, in order to avoid scale-dependent biases and get good results from weight decay.<br />
<math>\,\lambda</math> can also be different for different types of weights in the NN. We can have a different <math>\,\lambda</math> for input-to-hidden, hidden-to-hidden, and hidden-to-output weights.
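
As a rough illustration of the update rules above (not the course implementation; <code>err_grad</code> below is a placeholder for the gradient that back-propagation would supply), a single weight-decay step for one layer's weight matrix could look like:

 # Sketch of one weight-decay update step for a layer's weight matrix
 rho <- 0.1                                  # learning rate
 lambda <- 0.01                              # weight-decay parameter
 w <- matrix(rnorm(6, 0, 0.1), 2, 3)         # weights initialized near zero
 err_grad <- matrix(rnorm(6), 2, 3)          # placeholder for d err / d w from back-propagation
 w <- w - rho * (err_grad + 2 * lambda * w)  # penalized gradient step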
 
== Radial Basis Function (RBF) Networks - November 6, 2009 ==
 
[[File:Rbf_net.png|350px|thumb|right|Figure 1: Radial Basis Function Network]]
 
=== Introduction ===
 
A Radial Basis Function (RBF) network [http://en.wikipedia.org/wiki/Radial_basis_function_network] is a type of artificial neural network with an output layer and a single hidden layer, with  weights from the hidden layer to the output layer, and can be trained without back propagation since it has a closed-form solution. The neurons in the hidden layer contain basis functions. One choice that has been widely used is that of radial basis functions, which have the property that each basis function depends only on the radial distance (typically Euclidean) from a center <math>\displaystyle\mu_{j}</math>, so that <math>\phi_{j}(x)= h({\Vert x - \mu_{j}\Vert})</math>.
 
RBF networks were first used to solve multivariate interpolation problems in numerical analysis. Their use in neural network applications is similar, where the training and query targets are fairly continuous.
RBF networks are artificial neural networks and can be applied to regression, classification, and time series prediction.
 
<math>\ x_1 \cdot \cdot \cdot x_d</math>: input layer of d dimension of training patterns<br />
<math>\ \phi_1 \cdot \cdot \cdot \phi_m </math>: hidden layer of up to m locally tuned neurons centered over receptive fields<br />
<math>\ y_1\cdot \cdot \cdot y_k</math>: output layer that provides the response of the network<br />
 
The output of an RBF network can be expressed as a weighted sum of its radial basis functions as follows:
 
<math>\hat y_{k} = \sum_{j=1}^M\phi_{j}(x) w_{jk}</math>
 
The radial basis function is:
 
<math>\phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math><br />
(Gaussian without a normalization constant)<br /><br />
'''note:''' The hidden layer has a variable number of neurons (the optimal number is determined by the training process). As usual, the more neurons in the hidden layer, the higher the model complexity. Each neuron consists of a radial basis function centered on a point with the same dimensions as the input data. The radii of the RBF functions may be different. The centers and radii can be determined through clustering or an EM algorithm. When the x vector is given from the input layer, each hidden neuron computes the radial distance from the neuron’s center point and then applies the RBF function to this distance. The resulting values are passed to the output layer and weighted together to form the output.
 
<math>\,y_{k}</math> can be expressed in matrix form as:
 
<math>\hat Y = \Phi W </math>
 
where
 
:<math>\hat{Y}_{n,k} = \left[ \begin{matrix}
\hat{y}_{1,1} & \hat{y}_{1,2} & \cdots & \hat{y}_{1,k} \\
\hat{y}_{2,1} & \hat{y}_{2,2} & \cdots & \hat{y}_{2,k} \\
\vdots &\vdots & \ddots & \vdots \\
\hat{y}_{n,1} & \hat{y}_{n,2} & \cdots & \hat{y}_{n,k}
\end{matrix}\right] </math> is the matrix of output variables.
 
:<math>\Phi_{n,m} = \left[ \begin{matrix}
\phi_{1,1} & \phi_{1,2} & \cdots & \phi_{1,m} \\
\phi_{2,1} & \phi_{2,2} & \cdots & \phi_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{n,1} & \phi_{n,2} & \cdots & \phi_{n,m}
\end{matrix}\right] </math> is the matrix of Radial Basis Functions.
 
:<math>W_{m,k} = \left[ \begin{matrix}
w_{1,1} & w_{1,2} & \cdots & w_{1,k} \\
w_{2,1} & w_{2,2} & \cdots & w_{2,k} \\
\vdots & \vdots & \ddots & \vdots \\
w_{m,1} & w_{m,2} & \cdots & w_{m,k}
\end{matrix}\right] </math> is the matrix of weights.
 
Here, <math>k</math> is the number of outputs, <math>n</math> is the number of data points, and <math>m</math> is the number of hidden units.  If <math>k = 1</math>, <math>\hat Y</math> and <math>W</math> are column vectors.
 
''related reading'':
 
Introduction of the Radial Basis Function (RBF) Networks [http://axiom.anu.edu.au/~daa/courses/GSAC6017/rbf.pdf]
 
Paper about the RBFN for multi-task learning [http://books.nips.cc/papers/files/nips18/NIPS2005_0628.pdf]
 
Radial Basis Function (RBF) Networks [http://documents.wolfram.com/applications/neuralnetworks/index6.html] [http://lcn.epfl.ch/tutorial/english/rbf/html/index.html]
[http://www.dtreg.com/rbf.htm]
 
Advantages of RBFN:
:1. It can model any nonlinear function using a single hidden layer, which removes some design decisions about the number of layers.
:2. The simple linear transformation in the output layer can be optimized fully using traditional linear modeling techniques.
 
=== Estimation of weight matrix W ===
 
We minimize the training error, <math>\Vert Y - \hat{Y}\Vert^2</math> in order to find <math>\,W</math>.<br /><br />
From a previous result in linear algebra we know that
 
<math>\Vert A \Vert^2 = Tr(A^{T}A)</math>
 
Thus we have a problem similar to linear regression:
<math>\ err = \Vert Y - \Phi W\Vert^{2} = Tr[(Y - \Phi W)^{T}(Y - \Phi W)]</math>
 
<math>\ err = Tr[Y^{T}Y - Y^{T}\Phi W - W^{T} \Phi^{T} Y + W^{T}\Phi^{T} \Phi W]</math>
 
==== Useful properties of matrix differentiation ====
 
 
<math>\frac{\partial Tr(AX)}{\partial X} = A^{T}</math>
 
<math>\frac{\partial Tr(X^{T}A)}{\partial X} = A</math>
 
<math>\frac{\partial Tr(X^{T}AX)}{\partial X} = (A^{T} + A)X</math>
 
==== Solving for W ====
 
We find the minimum over <math>\,W</math> by setting <math>\frac{\partial err}{\partial W}</math> equal to zero and using the aforementioned properties of matrix differentiation.
 
<math>\frac{\partial err}{\partial W} = 0</math>
 
<math>\ 0 - \Phi^{T}Y - \Phi^{T}Y + 2\Phi^{T}\Phi W = 0</math>
 
<math>\ -2 \Phi^{T}Y + 2\Phi^{T}\Phi W = 0</math>
 
<math>\ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y</math>
 
<math>\hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY</math>
 
where <math>\ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}</math>
 
<math>\,H</math> is the [http://en.wikipedia.org/wiki/Hat_matrix hat matrix] for this model. This gives us a nice result, since the solution has a closed form and we do not have to worry about convexity problems in this case.
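
A minimal sketch of this closed-form fit (one-dimensional inputs, hand-picked Gaussian centers, and a common width; the data are simulated only for illustration):

 # Sketch: fit an RBF network by least squares, W = (Phi' Phi)^{-1} Phi' Y
 set.seed(3)
 x <- runif(100, -3, 3)
 y <- sin(x) + rnorm(100, 0, 0.1)
 mu <- seq(-3, 3, length.out = 8)                                      # basis-function centers
 sigma <- 1
 Phi <- exp(-outer(x, mu, function(a, b) (a - b)^2) / (2 * sigma^2))   # n x m basis matrix
 W <- solve(t(Phi) %*% Phi, t(Phi) %*% y)                              # closed-form weights
 yhat <- Phi %*% W                                                     # fitted values, i.e. H y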
 
=== Including an additional bias  ===
 
<math>\,y_{k}</math> can be expressed in matrix form as:
 
<math>\hat Y = \Phi W </math>
 
where
 
:<math>\hat Y = \left[ \begin{matrix}
y_{11} & y_{12} & \cdots & y_{1k} \\
y_{21} & y_{22} & \cdots & y_{2k} \\
\vdots & & \ddots & \vdots \\
y_{n1} & y_{n2} & \cdots & y_{nk}
\end{matrix}\right] </math> is the matrix(n by k) of output variables.
 
:<math>\Phi = \left[ \begin{matrix}
\phi_{10} &\phi_{11} & \phi_{12} & \cdots & \phi_{1M} \\
\phi_{20} & \phi_{21} & \phi_{22} & \cdots & \phi_{2M} \\
\vdots & & \ddots & \vdots \\
\phi_{n0} &\phi_{n1} & \phi_{n2} & \cdots & \phi_{nM}
\end{matrix}\right] </math> is the matrix(n by M+1) of Radial Basis Functions.
 
:<math>W = \left[ \begin{matrix}
w_{01} & w_{02} & \cdots & w_{0k} \\
w_{11} & w_{12} & \cdots & w_{1k} \\
w_{21} & w_{22} & \cdots & w_{2k} \\
\vdots & & \ddots & \vdots \\
w_{M1} & w_{M2} & \cdots & w_{Mk}
\end{matrix}\right] </math> is the matrix(M+1 by k) of weights.
 
where the extra basis function <math>\Phi_{0}</math> is set to 1.
 
==== Normalized RBF ====
 
In addition to the above unnormalized architecture, the normalized RBF can be represented as:
 
<math>\hat{y}_{k}(X) = \frac{\sum_{j=1}^{M} w_{jk}\Phi_{j}(X)}{\sum_{r=1}^{M}\Phi_{r}(X)}</math><br /><br />
 
 
Actually, <math>\Phi^{\ast}_{j}(X) = \frac{\Phi_{j}(X)}{\sum_{r=1}^{M}\Phi_{r}(X)}</math> is known as a normalized radial basis function. Giving the familiar form,<br />
 
<math>\hat{y}_{k}(X) = \sum_{j=1}^{M} w_{jk}\Phi^{\ast}_{j}(X)</math><br /><br />
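
Continuing the earlier least-squares sketch (reusing <code>Phi</code> and <code>y</code> defined there; this is only an illustration of the normalization step), the normalized basis matrix is obtained by dividing each row of <math>\Phi</math> by its row sum before solving for the weights:

 # Sketch: normalized RBF basis (each row of Phi divided by its sum)
 Phi.norm <- Phi / rowSums(Phi)
 W.norm <- solve(t(Phi.norm) %*% Phi.norm, t(Phi.norm) %*% y)
 yhat.norm <- Phi.norm %*% W.norm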
 
=== Conceptualizing RBF networks ===
 
In the past, we have classified data using models that were explicitly linear, quadratic, or otherwise definite. In RBF networks, like in Neural Networks, we can fit an arbitrary model. How can we do this without changing the equations being used?
 
Recall a [[#Trick:_Using_LDA_to_do_QDA_-_October_7.2C_2009|trick]] that was discussed in the October 7 lecture: if we add new features to our original data set, we can project into higher dimensions, use a linear algorithm, and get a quadratic result by collapsing to a lower dimension afterward. In RBF networks, something similar can happen.
 
Think of <math>\,\Phi</math>, our matrix of radial basis functions, as a feature space of the input. Each hidden unit, then, can be thought to represent a feature; we can see that, if there are more hidden units than input units, we can essentially project to a higher-dimensional space, as we did in our earlier trick. However, this does not mean that an RBF network will actually do this; it is merely a way to convince yourself that RBF networks (and neural networks) can fit arbitrary models. Nevertheless, precisely because of this power, overfitting becomes a more pressing concern: we have to control the complexity so that the network fits a general model rather than an arbitrary one tailored to the training data.
 
=== RBF networks for classification -- a probabilistic paradigm ===
 
[[File:Rbf_graphical_model.png|350px|thumb|left|Figure 1: RBF graphical model]]
 
An RBF network is akin to fitting a Gaussian mixture model to data.  We assume that each class can be modelled by a single function <math>\,\phi</math> and data is generated by a mixture model. According to Bayes Rule,
 
<math>Pr(Y = y_{k} | X = x) = \frac {Pr(x|y_{k})*Pr(y_{k})}{Pr(x)}</math>
 
While all classifiers that we have seen thus far in the course have been in discriminative form, the RBF network is a generative model that can be represented using a directed graph.
 
We can replace the class conditional density in the above conditional probability expression by marginalizing over the hidden variable <math>\,j</math>:
<math>\Pr(x|y_{k}) = \sum_{j} Pr(x|j)*Pr(j|y_{k})</math>
 
 
 
<br/><br/>
*'''Note''' We made the assumption that each class can be modelled by a single function <math>\displaystyle\Phi</math> and that the data was generated by a mixture model.  The Gaussian mixture model has the form:
<math>f(x)=\sum_{m=1}^M \alpha_m \phi(x;\mu_m,\Sigma_m)</math> where <math>\displaystyle\alpha_m</math> are mixing proportions, <math>\displaystyle\sum_m \alpha_m=1</math>, and <math>\displaystyle\mu_m</math> and <math>\displaystyle\Sigma_m</math> are the mean and covariance of each Gaussian density respectively. <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), pp. 214. </ref> The generative model in Figure 1 shows graphically how each Gaussian in the mixture model is chosen to sample from.
 
== '''Radial Basis Function (RBF) Networks - November 9th, 2009''' ==
 
=== RBF Network for classification (A probabilistic point of view) ===
Using an RBF network[http://en.wikipedia.org/wiki/Radial_basis_function_network] to do classification, we usually treat it as a regression problem and set a threshold to decide the class membership of the data. However, to gain some insight into what we are doing in terms of the RBF network when we classify, we often think of mixture models and make certain assumptions. So far we have mainly used deterministic models to describe data, meaning that a given input always generates the same output; now we consider a generative model of the data. In this case, some hidden variables are incorporated and joint probabilities are assigned between the nodes, so that we can derive results through Bayes' rule.
 
[[File:RBF.png|350px|thumb|right|Figure 26.1: RBF Network Classification Demo]]
 
We assume, as we can see in the graph on the right-hand side, that we have three random variables, <math>\displaystyle y_k</math>, <math>\displaystyle j</math>, and <math>\displaystyle x</math>, where <math>\displaystyle y_k</math> denotes class <math>\,k</math>, <math>\displaystyle x</math> is what we observe, and <math>\displaystyle j</math> is a hidden random variable. The generative process is that there are different classes, and each class can trigger a different hidden random variable <math>\displaystyle j</math>. To understand this, assume for instance that this random variable <math>\displaystyle j</math> has a Gaussian distribution (it could have any other distribution as well) and that all the <math>\displaystyle j</math>’s have the same type of distribution (Gaussian), but with different parameters. From each Gaussian distribution triggered by each class, we sample some data points. Therefore, in the end, we get a set of data which is not strictly Gaussian, but is actually a mixture of Gaussians.
 
Again, we look at the posterior distribution from  [http://en.wikipedia.org/wiki/Bayes'_theorem Bayes' Rule].
 
<math>Pr(Y = y_{k} | X = x) = \frac {Pr(X = x | Y = y_{k})*Pr(Y = y_{k})}{Pr(X = x)}</math>
 
Since we made the assumption that the data has been generated from a mixture model, we can estimate this conditional probability by
 
<math>\Pr(X = x | Y = y_{k}) = \sum_{j} Pr(X = x | j)*Pr(j | Y = y_{k})</math>,
 
which is the class conditional distribution (or probability) of the mixture model. Note, here, if we only have a simple model from <math>\displaystyle y_k</math> to <math>\displaystyle x</math>, then we won’t have this summation.
 
We can substitute this class conditional distribution into Bayes' formula.  We can see that the posterior of class <math>\displaystyle k</math> is the summation over <math>\displaystyle j</math> of the probability of <math>\displaystyle x</math> given <math>\displaystyle j</math> times the probability of <math>\displaystyle j</math> given <math>\displaystyle y_k</math>, times the prior distribution of class <math>\displaystyle k</math>, and lastly divided by the marginal probability of <math>\displaystyle x</math>. That is,
 
<math>\Pr(y_k | x) = \frac {\sum_{j} Pr(x | j)*Pr(j | y_{k})*Pr(y_{k})}{Pr(x)}</math>.
 
Since, the prior probability of class <math>\displaystyle k</math>, <math>\displaystyle Pr(y_{k})</math>, does not have an index of <math>\displaystyle j</math>, it can be taken out of the summation.  This yields,
 
<math>\Pr(y_k | x) = \frac {Pr(y_{k})\sum_{j} Pr(x | j)*Pr(j | y_{k})}{Pr(x)}</math>.
 
We multiply this by <math>\displaystyle 1 = \frac {Pr(j)}{Pr(j)}</math>. Then, it becomes,
 
<math>\Pr(y_k | x) = \frac {Pr(y_{k})\sum_{j} Pr(x | j)*Pr(j | y_{k})}{Pr(x)} * \frac {Pr(j)}{Pr(j)}</math>.
 
Next, note that <math>\displaystyle Pr(j | x) = \frac {Pr(x | j)*Pr(j)}{Pr(x)}</math>, and <math>\displaystyle Pr(y_k | j) = \frac {Pr(j | y_k)*Pr(y_k)}{Pr(j)}</math>.  Then rearranging the terms, we finally have the posterior:
 
<math>\displaystyle Pr(y_k | x) = \sum_{j} Pr(j | x)Pr(y_k | j)</math>.
 
where <math>\displaystyle Pr(j | x) </math> is the probability of a feature given the data, and <math>\displaystyle Pr(y_k | j) </math> is the probability of class membership given a feature.
 
Interestingly, the posterior is just the sum over <math>\displaystyle j</math> of the product of these two simpler posteriors.
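
As a toy numerical check of this identity (the probability values are chosen purely for illustration, not taken from the lecture), with two hidden components and two classes the posterior is a matrix product of the two conditional probability tables:

 # Toy check: Pr(y_k | x) = sum_j Pr(j | x) * Pr(y_k | j)
 p.j.given.x <- c(0.7, 0.3)                              # Pr(j | x) for j = 1, 2
 p.y.given.j <- matrix(c(0.9, 0.1,
                         0.2, 0.8), 2, 2, byrow = TRUE)  # rows: j, columns: y_k
 p.j.given.x %*% p.y.given.j                             # class posterior; entries sum to 1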
 
==== Interpretation of RBF Network classification ====
 
[[File:2.png|350px|thumb|right|Figure 26.1.2(2): RBF Nerwork ]]
 
We want to relate the results that we derived above to our RBF Network. In a RBF Network, as we can see on the right hand side, we have a set of data, <math>\displaystyle x_1</math> to <math>\displaystyle x_d</math>, and the hidden basis function, <math>\displaystyle \phi_{1}</math> to <math>\displaystyle \phi_{M}</math>, and then we have some output, <math>\displaystyle y_1</math> to <math>\displaystyle y_k</math>. Also, we have weights from the hidden layer to output layer. The output is just the linear sum of <math>\displaystyle \phi</math>’s.
 
Now consider probability of <math>\displaystyle j</math> given <math>\displaystyle x</math> to be <math>\displaystyle \phi</math>, and the probability of <math>\displaystyle y_k</math> given <math>\displaystyle j</math> to be the weights <math>\displaystyle w_{jk}</math>, then the posterior can be written as,
 
<math>\displaystyle Pr(y_k | x) = \sum_{j} \phi_{j}(x)*w_{jk}</math>.
[[File:3.png|350px|thumb|left|Figure 26.1.2(1): Gaussian mixture ]]
 
Now, let us look at an example in one dimensional case. Suppose,
 
<math>\phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math>, and <math>\displaystyle j</math> is from 1 to 2.
 
We know that <math>\displaystyle \phi</math> is a radial basis function. It's as if we put some Gaussian over data. And for each Gaussian, we consider the center <math>\displaystyle \mu</math>. Then, what <math>\displaystyle \phi</math> computes is the similarity of any data point to the center.
 
We can see the graph on the left which plots the density of <math>\displaystyle \phi_{1}</math> and <math>\displaystyle \phi_{2}</math>. Take <math>\displaystyle \phi_{1}</math> for instance, if the point gets far from the center <math>\displaystyle \mu_{1}</math>, then it will reduce <math>\displaystyle \phi_{1}</math> to become nearly zero. Remember that, we can usually find a non-linear regression or classification of input space by doing a linear one in some extended space or some feature space (more details in Aside). Here, the <math>\displaystyle \phi</math>’s actually produce that feature space.
 
So, one way to look at this is that this <math>\displaystyle \phi</math> is telling us that given an input, how likely the probability of presence of a particular feature is. Say, for example, we define the features as the centers of these Gaussian distributions. Then, this <math>\displaystyle \phi</math> function somehow computes the possibility given certain data points, of this kind of feature appearing. If the data point is right at the center, then the value of that <math>\displaystyle \phi</math> would be one, i.e. the probability is 1. If the point is far from the center, then the probability (<math>\displaystyle \phi</math> function value) will be close to zero, that is, it’s less likely. Therefore, we can treat <math>\displaystyle Pr(j | x)</math> as the probability of a particular feature given data.
 
When we have those features, then <math>\displaystyle y</math> is the linear combination of the features. Hence, any of the weights <math>\displaystyle w</math>, which is equal to <math>\displaystyle Pr(y_k | j)</math>, tells us how likely this particular <math>\displaystyle y</math>  will appear given those features. Therefore, the weight <math>\displaystyle w_{jk}</math> shows the probability of class membership given feature.
 
Hence, we have found a probabilistic point of view to look at RBF Network!
 
*'''Note''' There are some inconsistencies with this probabilistic point of view. There are no restrictions that force <math>\displaystyle Pr(y_k | x) = \sum_{j} \phi_{j}(x)*w_{jk}</math> to be between 0 and 1.  So if least squares is used to solve this, <math>\displaystyle w_{jk}</math> cannot be interpreted as a probability. 
 
 
''' Aside '''
*Feature Space:
:One way to produce a feature space is LDA
:Suppose, we have n data points <math>\mathbf{x}_1</math> to <math>\mathbf{x}_n </math>. Each data point has d features. And these n data points consist of the <math>X</math> matrix,
:<math>X = \left[ \begin{matrix}
x_{11} & x_{21} & \cdots & x_{n1} \\
x_{12} & x_{22} & \cdots & x_{n2} \\
\vdots & & \ddots & \vdots \\
x_{1d} & x_{2d} & \cdots & x_{nd}
\end{matrix}\right] </math>
:Also, we have feature space,
:<math>\Phi^{T} = \left[ \begin{matrix}
\phi_{1}(\mathbf{x_1}) & \phi_{1}(\mathbf{x_2})& \cdots & \phi_{1}(\mathbf{x_n})\\
\phi_{2}(\mathbf{x_1})& \phi_{2}(\mathbf{x_2})& \cdots & \phi_{2}(\mathbf{x_n}) \\
\vdots & & \ddots & \vdots \\
\phi_{M}(\mathbf{x_1}) & \phi_{M}(\mathbf{x_2}) & \cdots & \phi_{M}(\mathbf{x_n})
\end{matrix}\right] </math>
:If we want to solve a regression problem for the input data, we don’t perform Least Square on this <math>\displaystyle X</math> matrix, we do Least Square on the feature space, i.e. on the <math>\displaystyle \Phi^{T}</math> matrix. The dimensionality of <math>\displaystyle \Phi^{T}</math> is M by n. We can add <math>\ \Phi_0=1 </math> which is not any function of <math>\ x_1 \cdot \cdot \cdot x_j </math>
:Now, we still have n data points, but we define these n data points in terms of a new set of features. So, originally, we define our data points by d features, but now, we define them by M features. And what are those M features telling us?
:Let us look at the first column of  <math>\displaystyle \Phi^{T}</math> matrix. The first entry is <math>\displaystyle \phi_1</math> applied to <math>\mathbf{x_1}</math>, and so on, until the last entry is <math>\displaystyle \phi_M</math> applied to <math>\mathbf{x_1}</math>. Suppose each of these <math>\displaystyle \phi_j</math> is defined by
:<math>\phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math>.
:Then, each <math>\displaystyle \phi_j</math> checks the similarity of the data point with its center. Hence, the new set of features are actually representing M centers in our data set, and for each data point, its new features check how this point is similar to the first center; how it is similar to the second center; and how it is similar to the <math>\displaystyle M^{th}</math> center. And this checking process will apply to all data points. Therefore, feature space gives another representation of our data set.
 
 
Methods for selecting the centers <math>\ \mu </math>: <br />
*[http://en.wikipedia.org/wiki/Sampling_(statistics) Sub-sampling]: Randomly-chosen training points are copied to the radial units. Since they are randomly selected, they will represent the distribution of the training data in a statistical sense.<br />
*[http://en.wikipedia.org/wiki/K-means_clustering K-Means algorithm]: Given K radial units, it adjusts the positions of the centers so that: Each training point belongs to a cluster center, and is nearer to this center than to any other center; Each cluster center is the centroid of the training points that belong to it. <br />
 
The size of the deviation (smoothing factor) determines how spiky the Gaussian functions are. Deviations should typically be chosen so that Gaussians overlap with a few nearby centers.
 
Methods for choosing deviation are: <br />
*Choose the deviation ourselves (by hand).<br />
*Select the deviation to reflect the number of centers and the volume of space they occupy <br />
*[http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm K-Nearest Neighbor algorithm]: Each unit's deviation is individually set to the mean distance to its K nearest neighbors. <br />
 
If the Gaussians are too spiky, the network will not interpolate between known points, and the network loses the ability to generalize. If the Gaussians are very broad, the network loses fine detail.
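
A rough sketch of these ideas (K-means centers, with each unit's deviation set from the distance to its nearest neighbouring center rather than a K-nearest-neighbour mean; the data are simulated only for illustration):

 # Sketch: choose RBF centers by K-means and set deviations from nearby centers
 set.seed(4)
 x <- matrix(rnorm(200 * 2), 200, 2)                # 200 two-dimensional inputs
 km <- kmeans(x, centers = 10)                      # 10 radial units
 mu <- km$centers                                   # cluster centroids become the centers
 d <- as.matrix(dist(mu)); diag(d) <- Inf
 sigma <- apply(d, 1, min)                          # deviation: distance to the nearest other center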
 
 
Useful resources:

1. Some examples, advantages & disadvantages: [http://www.computing.surrey.ac.uk/courses/csm10/NeuralNetworks/RBFNetworks.ppt#256,1,Radial Basis Function (RBF) Networks]

[http://www.isac.cnr.it/~telerile/sito/baraldi/presentazioni/RbfTwoStage.ppt#256,1,RBF TWO-STAGE LEARNING NETWORKS: EXPLOITATION OF SUPERVISED DATA IN THE SELECTION OF HIDDEN UNIT PARAMETERS]

2. A comparison between BP & RBF: [http://nlpr-web.ia.ac.cn/2006papers/gjhy/gh93.pdf]
 
=== Model selection or complexity control for RBF Network - a brief introduction ===
In order to obtain a better fit for the training data, we often want to increase the complexity of our RBF network. By its construction, the only way to change the complexity of an RBF network is to add or remove basis functions. A larger number of basis functions yields a more complex network. In theory, if we add enough basis functions, the RBF network can fit any training set exactly; however, that does not mean the RBF network will generalize well. Therefore, to avoid the overfitting problem (see Notes below), we only want to increase the number of basis functions up to a certain point, i.e. its optimal level.
 
For the model selection, what we usually do is estimate the training error. After working through the training error, we’ll see that the training error in fact can be decomposed, and one component of training error is called Mean Squared Error (MSE). In the later notes, we will find that our final goal is to get a good estimate of MSE. Moreover, in order to find an optimal model for our data, we select the model with the smallest MSE.
 
Now, let us introduce some notations that we will use in the analysis:
*<math>\hat f</math> -- the prediction model estimated by a RBF network from the training data
*<math>\displaystyle f</math> -- the real model (not null), and ideally, we want <math>\hat f</math> to be close to <math>\displaystyle f</math>
*<math>\displaystyle err</math> -- the training error
*<math>\displaystyle Err</math> -- the testing error
*<math>\displaystyle MSE</math> -- the Mean Squared Error
 
''' Notes '''
 
[[File:overfitting.png|350px|thumb|left|Figure 26.2: Overfitting]]
 
*Being more complex isn’t always a good thing. Sometimes, [http://en.wikipedia.org/wiki/Overfitting overfitting] causes the model to lose its generality. For example, in the graph on the left-hand side, the data points are sampled from the model <math>\displaystyle y_i= f(x_i)+\epsilon_i</math>, where <math>\displaystyle f(x_i)</math> is a linear function, shown by the blue line, and <math>\displaystyle \epsilon_i</math> is additive Gaussian noise from <math>~N(0,\sigma^2)</math>. The red curve displayed in the graph shows the over-fitted model. Clearly, this over-fitted model only works for this particular training data, and is useless for prediction when new data points are introduced.
 
> n<-20;
> x<-seq(1,10,length=n);
> alpha<-2.5;
> beta<-1.75;
> y<-alpha+beta*x+rnorm(n);
> plot(y~x, pch=16, lwd=3, cex=0.5, main='Overfitting');
> abline(alpha, beta, col='blue');
> lines(spline(x, y), col = 2);
 
*More details on this topic later on.
 
 
 
 
 
 
 
 
 
== '''Model Selection (Stein's Unbiased Risk Estimate) - November 11th, 2009''' ==
 
===Model Selection===
 
[http://en.wikipedia.org/wiki/Model_selection Model selection] is a task of selecting a model of optimal complexity for a given data. Learning a radial basis function network from data is a parameter estimation problem. One difficulty with this problem is selecting parameters that show good performance on both training and testing data. In principle, a model is selected to have parameters associated with the best observed performance on training data, although our goal really is to achieve good performance on unseen testing data. Not surprisingly, a model selected on the basis of training data does not necessarily exhibit comparable performance on the testing data. When squared error is used as the performance index, a zero-error model on the training data can always be achieved by using a sufficient number of basis functions.
 
 
But training error and testing error do not have a simple relationship. In particular, a smaller training error does not necessarily result in a smaller testing error. In practice, one often observes that, up to a certain point, the model error on testing data tends to decrease as the training error decreases. However, if one attempts to decrease the training error too far by increasing model complexity, the testing error can increase dramatically.
 
 
The basic reason behind this phenomenon is that in the process of minimizing training error, after a certain point, the model begins to over-fit the training set. Over-fitting in this context means fitting the model to the training data at the expense of losing generality. In the extreme case, a set of <math>\displaystyle N</math> training data points can be modeled exactly with <math>\displaystyle N</math> radial basis functions. Such a model follows the training data perfectly. However, it does not capture the features of the true underlying data source, and this is why it fails to correctly model new data points.
 
 
In general, the training error rate will be less than the testing error on new data. A model typically adapts to the training data, and hence the training error will be an overly optimistic estimate of the testing error. An obvious way to estimate the testing error well is to add a penalty term to the training error to compensate. SURE is developed based on this idea.
 
===Stein's unbiased risk estimate (SURE)===
 
 
====Important Notation[http://en.wikipedia.org/wiki/Stein's_unbiased_risk_estimate]====
 
Let:
*<math>\hat f(X)</math> denote the ''prediction model'', which is estimated from a training sample by the RBF neural network model.
*<math>\displaystyle f(X)</math> denote the ''true model''.
*<math>\displaystyle err=\sum_{i=1}^N (\hat y_i-y_i)^2 </math> denote the ''training error'',which is the average loss over the training sample.
*<math>\displaystyle Err=\sum_{i=1}^M (\hat y_i-y_i)^2 </math> denote the ''test error'', which is the expected prediction error on an independent test sample.
*<math>\displaystyle MSE=E(\hat f-f)^2</math> denote the ''mean squared error'', where <math>\hat f(X)</math> is the estimated model and <math>\displaystyle f(X)</math> is the true model.
 
The Bias-Variance Decomposition:
 
:<math>
\begin{align}
\displaystyle MSE = E(\hat f-f)^2 &= E[(\hat f-E(\hat f))+(E(\hat f)-f)]^2\\
&= E[(\hat f-E(\hat f))^2+2*(\hat f-E(\hat f))*(E(\hat f)-f)+(E(\hat f)-f)^2]\\
&= E[(\hat f-E(\hat f))^2]+E[2*(\hat f-E(\hat f))*(E(\hat f)-f)]+E[(E(\hat f)-f)^2]\\
&= Var(\hat f)+Bias^2(\hat f)
\end{align}
</math>
 
The cross term vanishes because <math>\displaystyle E[2*(\hat f-E(\hat f))*(E(\hat f)-f)]=2*Cov[E(\hat f)-f, \hat f-E(\hat f)]</math>, which is equal to zero since <math>\displaystyle E(\hat f)-f</math> is a constant.
 
Suppose the observations are <math>\displaystyle y_i= f(x_i)+\epsilon_i</math>, where <math>\displaystyle \epsilon_i</math> is additive Gaussian noise <math>~N(0,\sigma^2)</math>. We need to estimate <math>\hat f</math> from the training data set <math>T=\{(x_i,y_i)\}_{i=1}^N</math>. Let <math>\hat f_i=\hat f(x_i)</math> and <math>\displaystyle f_i= f(x_i)</math>; then
 
<math>\displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i-\epsilon_i)^2]</math><math>=E[(\hat f_i-f_i)^2]+E[\epsilon_i^2]-2E[\epsilon_i(\hat f_i-f_i)]</math>
 
<math>\displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2E[\epsilon_i(\hat f_i-f_i)]</math>    <math>\displaystyle (1)</math>
 
The last term can be written as:
 
<math>\displaystyle E[\epsilon_i(\hat f_i-f_i)]=E[(y_i-f_i)(\hat f_i-f_i)]=cov(y_i,\hat f_i)</math>, where <math>\displaystyle y_i</math> and <math>\hat f_i</math> both have the same mean <math>\displaystyle f_i</math>.
 
====[http://en.wikipedia.org/wiki/Stein%27s_lemma Stein's Lemma]====
 
If <math>\,Z</math> is <math>\,N(\mu,\sigma^2)</math> and if <math>\displaystyle g(Z)</math> is weakly differentiable,such that<math>\displaystyle E[\vert g'(Z)\vert]<\infty</math>, then <math>\displaystyle E[g(Z)(Z-\mu)]=\sigma^2E(g'(Z))</math>.
 
 
According to Stein's Lemma, the last cross term of <math>\displaystyle (1)</math>, <math>\displaystyle E[\epsilon_i(\hat f_i-f_i)]</math> can be written as <math>\sigma^2 E[\frac {\partial \hat f}{\partial y_i}]</math>. The derivation is as follows.
 
<math>\displaystyle Proof</math>:  Let <math>\,Z = \epsilon</math>.  Then <math>g(Z) = \hat f-f</math>, since <math>\,y = f + \epsilon</math> and <math>\,f</math> is a constant.  So <math>\,\mu = 0</math> and <math>\,\sigma^2</math> is the variance of <math>\,\epsilon</math>.
<math>\displaystyle E[g(Z)(Z-\mu)]=E[(\hat f-f)\epsilon]=\sigma^2E(g'(Z))=\sigma^2 E[\frac {\partial (\hat f-f)}{\partial y_i}]=\sigma^2 E[\frac {\partial \hat f}{\partial y_i}-\frac {\partial f}{\partial y_i}]</math>
 
 
Since <math>\displaystyle f</math> is the true model, not the function of the observations <math>\displaystyle y_i</math>, then <math>\frac {\partial f}{\partial y_i}=0</math>.
 
So,<math>\displaystyle E[\epsilon_i(\hat f_i-f_i)]=\sigma^2 E[\frac {\partial \hat f}{\partial y_i}]</math> <math>\displaystyle (2)</math>
 
====Two Different Cases====
SURE in RBF,
[http://www.cs.ualberta.ca/~papersdb/uploaded_files/801/paper_automatic-basis-selection-for.pdf Automatic basis selection for RBF networks using Stein’s unbiased risk estimator,Ali Ghodsi Dale Schuurmans]
 
 
=====''Case 1''=====
 
Consider the case in which a new data point has been introduced to the estimated model, i.e. <math>(x_i,y_i)\not\in\tau</math>; this new point belongs to the validation set <math>\displaystyle \nu</math>, i.e. <math>(x_i,y_i)\in\nu</math>.  Since <math>\displaystyle y_i</math> is a new point, <math>\hat f</math> and <math>\displaystyle y_i</math> are independent; therefore <math>\displaystyle cov(y_i,\hat f)=0</math> (or, thinking in terms of <math>\frac{\partial \hat f}{\partial y_i}</math>: when <math>\,y_i</math> is a new point it has no influence on <math>\hat f</math>, because <math>\hat f</math> is estimated from the training data only, so <math>\frac{\partial \hat f}{\partial y_i}=0</math>), and <math>\displaystyle (1)</math> in this case can be written as:
 
<math>\displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2</math>.
 
This expectation means <math>\frac {1}{m}\sum_{i=1}^m (\hat y_i-y_i)^2 = \frac {1}{m}\sum_{i=1}^m (\hat f_i-f_i)^2+ \sigma^2</math>.
 
<math>\sum_{i=1}^m (\hat y_i-y_i)^2 = \sum_{i=1}^m (\hat f_i-f_i)^2+ m\sigma^2</math>
 
Based on the notation we denote above, then we obtain:
<math>\displaystyle MSE=Err-m\sigma^2</math>
 
 
 
This is the justification behind the technique of cross-validation: since <math>\displaystyle \sigma^2</math> is constant, minimizing <math>\displaystyle MSE</math> is equivalent to minimizing the test error <math>\displaystyle Err</math>. In cross-validation, to avoid overfitting or underfitting, the validation data set is kept independent of the estimated model.
 
 
=====''Case 2''=====
 
A more interesting case is the one in which we do not use new data points to assess the performance of the estimated model, and the training data is used both for estimating and for assessing the model <math>\hat f_i</math>. In this case the cross term in <math>\displaystyle (1)</math> cannot be ignored because <math>\hat f_i</math> and <math>\displaystyle y_i</math> are not independent. The cross term can instead be estimated by Stein's lemma, which was originally proposed to estimate the mean of a Gaussian distribution.
 
 
Suppose <math>(x_i,y_i)\in\tau</math>, then by applying Stein's lemma, we obtain <math>\displaystyle (2)</math> proved above.
 
<math>\displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2\sigma^2E[\frac {\partial \hat f}{\partial y_i}]</math>.
 
This expectation means <math>\frac {1}{N}\sum_{i=1}^N (\hat y_i-y_i)^2 = \frac {1}{N}\sum_{i=1}^N (\hat f_i-f_i)^2+ \sigma^2-\frac {2\sigma^2}{N}\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} </math>.
 
 
<math>\sum_{i=1}^N (\hat y_i-y_i)^2 = \sum_{i=1}^N (\hat f_i-f_i)^2+ N\sigma^2-2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} </math>.
 
<math>\displaystyle err=MSE+N\sigma^2-2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i}</math>
 
<math>\displaystyle MSE=err-N\sigma^2+2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i}</math>  <math>\displaystyle (3)</math>
 
In statistics, this is known as [http://www.reference.com/browse/Stein%27s+unbiased+risk+estimate Stein's unbiased risk estimate (SURE)]: an unbiased estimator of the mean-squared error of a given estimator in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter, and thus cannot be determined completely.
 
===SURE for RBF Network===
 
Based on SURE, the optimal number of basis functions should be chosen to minimize the estimated generalization error. For the Radial Basis Function network, setting <math>\frac{\partial err}{\partial W}</math> equal to zero gives the least-squares solution <math>\ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y</math>. Then we have <math>\hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY</math>, where <math>\ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}</math> is the hat matrix for this model.
 
 
<math>\hat f_i=\,H_{i1}y_1+\,H_{i2}y_2+\cdots+\,H_{in}y_n</math>
 
where <math>\,H</math> depends on the input vector <math>\displaystyle x_i</math> but not on <math>\displaystyle y_i</math>.
 
By taking the derivative of <math>\hat f_i</math> with respect to <math>\displaystyle y_i</math>, we can easily obtain:
 
<math>\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i}=\sum_{i=1}^N \,H_{ii}</math>
 
Now, substituting this into <math>\displaystyle (3)</math>, we get
 
<math>\displaystyle MSE=err-N\sigma^2+2\sigma^2\sum_{i=1}^N \,H_{ii}</math>
 
Here, we can tell that <math>\sum_{i=1}^N \,H_{ii}= \,Trace(H)</math>, the sum of the diagonal elements of <math>\,H</math>. Thus, we can obtain the further simplification <math>\,Trace(H)= Trace(\Phi(\Phi^{T}\Phi)^{-1}\Phi^{T})= Trace(\Phi^{T}\Phi(\Phi^{T}\Phi)^{-1})=M</math>, where <math>\displaystyle M</math> is the number of columns of <math>\displaystyle \Phi</math>, i.e. the number of basis functions onto which the input matrix <math>\,X</math> is projected. If an intercept is included, then <math>\,Trace(H)= M+1</math>.
 
Then,<math>\displaystyle MSE=err-N\sigma^2+2\sigma^2(M+1)</math>.
 
===SURE Algorithm===
 
 
[[File:27.1.jpg|350px|thumb|right|Figure 27.1]]
 
We use this method to find the optimal number of basis functions by choosing the model with the smallest estimated MSE over the set of models considered. Given a set of models <math>\hat f_M(x)</math> indexed by the number of basis functions <math>\,M</math>, we compute the training error <math>\displaystyle err(M)</math> for each.
 
Then, <math>\displaystyle MSE(M)=err(M)-N\sigma^2+2\sigma^2(M+1)</math>
 
where <math>\displaystyle N</math> is the number of training samples and the noise variance <math>\sigma^2</math> can be estimated from the training data as
 
<math>\hat \sigma^2=\frac {1}{N-1}\sum_{i=1}^N (\hat y_i-y_i)^2</math>.
 
 
By applying the SURE algorithm to the SPECT Heart data, we find that the optimal number of basis functions is <math>\displaystyle M=4</math>.
 
 
Figure 27.1 on the right shows that <math>\displaystyle MSE</math> is smallest when <math>\displaystyle M=4</math>.
 
 
Calculating the SURE value is easy if you have access to <math>\,\sigma</math>.
 
 sure_Err = error - num_data_point * sigma ^ 2 + 2 * sigma ^ 2 * (num_basis_functions + 1);   % error = total (summed) squared training error
 
If <math>\,\sigma</math> is not known, it can be estimated using the error.
 
 error = sum((output - expected_output) .^ 2);    % total squared training error
 sigma2 = error / (num_data_point - 1);           % estimated noise variance
 sure_Err = error - num_data_point * sigma2 + 2 * sigma2 * (num_basis_functions + 1);
 
=='''SURE for RBF network & Support Vector Machine - November 13th, 2009'''==
 
===SURE for RBF network===
 
====Minimizing MSE====
 
By Stein's unbiased risk estimate (SURE) for Radial Basis Function (RBF) Network
we get:
 
<math>\displaystyle MSE=err-N\sigma^2+2\sigma^2(M+1)    </math>  (28.1)
 
*<math>\displaystyle err</math> (training error) <math>= \sum_{i=1}^N (\hat y_i-y_i)^2 </math>
*<math>\displaystyle MSE</math> (mean squared error) <math>= \sum_{i=1}^N (\hat f_i-f_i)^2 </math>
*<math>\displaystyle (M+1) </math> (number of hidden units, including the bias) <math>= \sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} </math>
 
 
'''Goal''': To minimize MSE
 
1. If <math>\displaystyle \sigma </math> is known, then the term <math>\displaystyle N\sigma^2</math> is a constant that does not depend on the model,
and we can ignore it; we only need to minimize <math>\displaystyle err +2\sigma^2(M+1)</math>.
 
2. In reality, we do not know <math>\displaystyle \sigma</math>, and the estimate <math>\,\hat \sigma</math> changes as <math>\displaystyle (M+1) </math> changes; we therefore have to estimate <math>\displaystyle \sigma </math> from the data.
 
<math>\displaystyle y_i= f(x_i)+\epsilon_i</math>, where <math>\displaystyle \epsilon_i</math> is additive Gaussian noise, <math>\,\epsilon_i \sim N(0,\sigma^2)</math>. Suppose we do not know the variance of <math>\displaystyle \epsilon</math>. Then,
 
<math>\displaystyle \hat\sigma^2=\frac{1}{N-1}\sum_{i=1}^N (\hat y_i-y_i)^2 =\frac{1}{N-1}err</math>  (28.2)
 
Substitute (28.2) into (28.1), get
 
<math>\displaystyle MSE=err-N\frac{1}{N-1}err+2\frac{1}{N-1}err(M+1)</math>
 
<math>\displaystyle MSE=err(1-\frac{N}{N-1}+\frac{2(M+1)}{N-1})</math>
 
<math>\displaystyle  MSE=err(\frac{N-1-N+2M+2}{N-1})</math>
 
<math>\displaystyle MSE=err(\frac{2M+1}{N-1})  </math>  (28.3)
 
 
[[File:28.1.jpg|350px|thumb|Figure 28.1: MSE vs err]]
 
Figure 28.1: as the number of hidden units increases (i.e. the model becomes more complex), the training error keeps decreasing while the MSE eventually increases.
 
 
When the number of hidden units gets larger and larger, the training error decreases and approaches <math>\displaystyle 0 </math>. If the training error were exactly <math>\displaystyle 0 </math>, then no matter how large <math>\displaystyle (M+1) </math> is, (28.3) would suggest that the estimate of MSE also approaches <math>\displaystyle 0 </math>. In reality this does not happen: when the training error is close to <math>\displaystyle 0 </math>, [http://en.wikipedia.org/wiki/Overfitting overfitting] occurs, and the MSE should increase rather than approach <math>\displaystyle 0 </math>, as shown in Figure 28.1.
 
 
Note that in (28.2) the estimate of <math>\displaystyle \sigma^2 </math> is proportional to <math>\displaystyle err </math>, which is what causes the problem above. To deal with it, we can instead average the <math>\displaystyle err</math> values obtained over models with different numbers of hidden units (for example, models with 1 up to 10 hidden units) and use that average to estimate <math>\, \sigma^2</math>. Since <math>\, \sigma^2</math> is a property of the noise in the data and does not depend on <math>\,M+1</math>, using a single averaged estimate across the candidate models has a firm theoretical basis.
 
We can also see that, unlike the classical Cross Validation (CV) or Leave-One-Out (LOO) techniques, the SURE technique does not need a separate validation step to find the optimal model. Hence, the SURE technique uses less data than CV or LOO and is suitable when there is not enough data for validation. However, to implement SURE we need to find <math>\frac {\partial \hat f}{\partial y_i}</math>, which may not be trivial for models that do not have a closed-form solution.
 
====Kmeans Clustering====
 
Description:<br /> [http://en.wikipedia.org/wiki/K-means_clustering Kmeans clustering] is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean.
 
*The number of hidden units is equal to the number of clusters; each cluster <math>\displaystyle j </math> contributes one basis function <math>\displaystyle \phi_j </math>.
 
*<math>\displaystyle \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math>, where the width <math>\displaystyle \sigma_{j} </math> may be set equal for all clusters.
 
The basic details for <math>K</math>-means clustering are given:
 
The <math>K</math> initial centers are randomly chosen from the training data.
 
Then the following two steps are iterated alternately until convergence.
 
1. For each existing center, we re-identify its cluster (every point in this cluster should be closer to this center than to others).
 
2. Compute the mean of each cluster and use it as the new center of that cluster.
 
 
=====Kmeans Clustering algorithm=====
 
*For a given cluster assignment <math>\displaystyle C</math>, the total cluster variance is minimized with respect to <math>\displaystyle \lbrace m_1,m_2,...,m_k \rbrace</math> yielding the means of the currently assigned clusters.
 
*Given a current set of means <math>\displaystyle \lbrace m_1,m_2,...,m_k \rbrace</math>, the total cluster variance <math>\displaystyle \sum_{k=1}^K N_k \sum_{C(i)=k} \|x_i-m_k  \|^2</math> is minimized by assigning each observation to the closest (current) cluster mean. That is, <math>\displaystyle C(i)=\underset{1 \le k \le K}{\operatorname{argmin}} \Vert x_i-m_k \Vert ^2</math>.
 
*Steps 1 and 2 are iterated until the assignments do not change.
 
=====Example=====
Partition the data into 2 clusters (2 hidden units).
 
 
    >> X=rand(30,80);               
    >> [IDX,C,sumD,D]=kmeans(X,2);   
    >> size(IDX)                   
    >>  30    1
    >> size(C)                   
    >>    2    80
    >> size(sumD)                               
    >>    2    1
    >> c1=sum(IDX==1)
    >>    14
    >> c2=sum(IDX==2)
    >>    16
    >> sumD
    >>  85.6643
    >>  101.0419
    >> v1=sumD(1,1)/c1             
    >>  6.1189
    >> v2=sumD(2,1)/c2             
    >>  6.3151     
 
 
 
Comments:
 
We create <math>X</math> randomly as a training set with 30 data points in 80 dimensions (MATLAB's <code>kmeans</code> treats rows as observations), and then apply the <code>kmeans</code> method to separate <math>X</math> into 2 clusters. <math>\displaystyle IDX </math> is a vector of 1s and 2s indicating the two clusters, and its size is 30*1. <math>\displaystyle C </math> contains the center (mean) of each cluster, with size 2*80; <math>\displaystyle sumD </math> is the sum of squared distances between the data points and the center of their cluster. <math>\displaystyle c1 </math> and <math>\displaystyle c2 </math> are the numbers of data points in clusters 1 and 2. <math>\displaystyle v1 </math> is the variance estimate for the first cluster <math>\displaystyle (v1=\sigma_1^2)</math>; <math>\displaystyle v2 </math> is the variance estimate for the second cluster <math>\displaystyle (v2=\sigma_2^2)</math>. Now we can get <math>\displaystyle \Phi </math>, <math>\displaystyle W </math>, the hat matrix <math>\displaystyle H </math> and <math>\displaystyle \hat Y </math> from the following equations (a short MATLAB sketch follows them). Finally, we can compute the <math>\displaystyle MSE </math> and predict on the test set.
 
<math>\displaystyle \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math>
 
<math>\displaystyle W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y</math>

<math>\displaystyle H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}</math>

<math>\displaystyle \hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY</math>
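
The following is a minimal MATLAB sketch (not from the lecture) of these equations, continuing the <code>kmeans</code> example above; the response vector <code>Y</code> is hypothetical, since the randomly generated <math>X</math> has no associated outputs.

 % Build the RBF design matrix from the kmeans output above and fit the weights
 % by least squares.  Y is a hypothetical response vector used only for illustration.
 mu = C;                                  % 2 x 80, one center per row
 s2 = [v1; v2];                           % per-cluster variance estimates
 N  = size(X,1);                          % 30 training points
 M  = size(mu,1);                         % 2 basis functions
 Phi = ones(N, M+1);                      % first column is the intercept
 for j = 1:M
     d2 = sum((X - repmat(mu(j,:), N, 1)).^2, 2);    % squared distance to center j
     Phi(:, j+1) = exp(-d2 ./ (2*s2(j)));
 end
 Y    = rand(N,1);                        % hypothetical responses
 W    = (Phi'*Phi) \ (Phi'*Y);            % least squares weights
 H    = Phi * ((Phi'*Phi) \ Phi');        % hat matrix
 Yhat = Phi * W;                          % fitted values, equal to H*Y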
 
 
 
Aside:
 
Similar in spirit to <math>K</math>-means, there is the [http://en.wikipedia.org/wiki/Expectation-maximization_algorithm EM algorithm] for the Gaussian mixture model. Generally speaking, the Gaussian mixture model performs soft clustering, while <math>K</math>-means performs hard clustering.
 
Similar to <math>K</math>-means, the following two steps are iterated alternately until convergence.
 
In the E-step, each point is assigned a weight (responsibility) for each cluster based on its likelihood under the corresponding Gaussian. Unlike <math>K</math>-means, where an observation is assigned 1 for the cluster whose center is closest and 0 for all other clusters, these weights lie between 0 and 1.
 
In the M-step, compute the weighted means and covariances and use them as the new means and covariances for every cluster.
 
    >> [P,mu,phi,lPxtr]=mdgEM(X,2,200,0);
 
===Support Vector Machine===
 
====Introduction====
We have seen that linear discriminant analysis and logistic regression both estimate linear decision boundaries in similar but slightly different ways. Separating hyperplane classifiers provide the basis for the support vector machine (SVM). An SVM constructs a linear decision boundary that explicitly tries to separate the data into different classes while maximizing the margin of separation. The techniques that extend this idea to the non-separable case, where the classes overlap, lead to what is known as the support vector machine. It can also produce nonlinear boundaries by constructing a linear boundary in a high-dimensional, transformed version of the feature space. Moreover, the solution is determined by only a fraction of the data points rather than by every point in the data, much like the difference between the median and the mean.
 
The original basis for SVM was published in the 1960s by [http://en.wikipedia.org/wiki/Vapnik Vapnik], Chervonenkis et al.; however, the ideas did not gain much attention until strong results were shown in the early 1990s.
 
Definition: <br />
[http://en.wikipedia.org/wiki/Support_vector_machine Support Vector Machines (SVM)] are a set of related supervised learning methods used for classification and regression. A support vector machine constructs a maximum margin hyperplane or set of hyperplanes in a higher or infinite dimensional space. The set of points near the class boundaries, support vectors, define the model which can be used for classification, regression or other tasks.
 
====Optimal Separating Hyperplane====
 
[[File:28.2.jpg|350px|thumb|right|Figure 28.2]]
 
Figure 28.2 An example with two classes separated by a hyperplane. The blue line is the least squares solution, which misclassifies one of the training points. Also shown are the black separating hyperplanes found by the [http://en.wikipedia.org/wiki/Perceptron perceptron] learning algorithm with different random starts.<br />
 
Figure 28.2 shows data points from two classes in <math>\displaystyle \mathbb{R}^{2} </math> that can be separated by a linear boundary. If a dataset is indeed linearly separable, there exist infinitely many possible separating hyperplanes for the training data; the black lines in the figure are two of them. The question is which solution is best when new data are introduced. <br />
 
Aside: <br />
The blue line is the least squares solution to the problem, obtained by regressing the <math>\displaystyle -1/+1 </math> response <math>\displaystyle Y </math> on <math>\displaystyle X </math> (with intercept); the line is given by
<math>\displaystyle {X:\hat\beta_0+\hat\beta_1X_1+\hat\beta_2X_2=0}</math>.
This least squares solution does not do a perfect job in separating the points, and makes one error. This is the same boundary found by linear discriminant analysis, in light of its equivalence with linear regression in the two-class case.
 
Classifiers that compute a linear combination of the input features and return the sign were called ''perceptrons'' in the engineering literature in the late 1950s.
 
 
Identifications:
 
*Hyperplane: separate two classes 
 
<math>\displaystyle x^{T}\beta+\beta_0=0</math>
 
*Margin: the distance between the hyperplane and the closest point.
 
<math>\displaystyle d_i=x_i^{T}\beta+\beta_0 </math> where <math>\displaystyle i=1,....,N</math>
 
Note: to make the distance positive we multiply by the class label: for a point on the <math>\displaystyle +1 </math> side the positive distance is <math>\displaystyle (+1)d_i</math>, and for a point on the <math>\displaystyle -1 </math> side it is <math>\displaystyle (-1)d_i</math>.
 
*Data points: <math>\displaystyle y_i\in\{-1,+1\}</math>; we can classify points as <math>\displaystyle sign\{d_i\}</math> if <math>\displaystyle \beta,\beta_0 </math> are known.<br />
 
====Maximum Margin Classifiers in the Linearly separable case====
Choose the line farthest from both classes, i.e. the line with the maximum distance from the closest point (in other words, maximize the margin).<br />
 
<math>\displaystyle Margin=min\{y_id_i\}</math> <math>\displaystyle i=1,2,....,N </math> 
where <math>\displaystyle y_i </math>  is label and <math>\displaystyle d_i </math>  is distance<br />
 
[[File:28.3.jpg|350px|thumb|right|Figure 28.3 The linear algebra of a hyperplane]]
 
 
 
Figure 28.3 depicts a hyperplane defined by the equation <math>\displaystyle x^{T}\beta+\beta_0=0</math>. Since the points are in <math>\displaystyle  \mathbb{R}^{2} </math>, the hyperplane is a line.<br />
 
 
Let us rewrite <math>\displaystyle Margin=min\{y_id_i\}</math> by using the following properties:<br />
 
1. <math>\displaystyle \beta </math> is orthogonal to the hyperplane <br />
 
Take two points <math>\displaystyle x_1,x_2</math> lying on the hyperplane:
 
<math>\displaystyle \beta^{T}x_1+\beta_0=0</math>
 
<math>\displaystyle \beta^{T}x_2+\beta_0=0</math>
 
<math>\displaystyle \beta^{T}x_1+\beta_0 - (\beta^{T}x_2+\beta_0)=0</math>
 
<math>\displaystyle \beta^{T}(x_1-x_2)=0</math>
 
Hence, <math>\displaystyle \beta </math> is orthogonal to <math>\displaystyle  (x_1-x_2)</math>, and <math>\displaystyle \beta^*=\frac{\beta}{\|\beta\|} </math> is the unit vector normal to the hyperplane.<br />
 
2. For any point <math>\displaystyle  x_1 </math> on the hyperplane,
 
<math>\displaystyle \beta^{T}x_1+\beta_0=0</math>
 
<math>\displaystyle \beta^{T}x_1=-\beta_0</math>
For any point on the hyperplane, multiplying by <math>\displaystyle \beta^{T}</math> gives the negative of the intercept of the hyperplane. <br/>
 
 
3. The signed distance from any point <math>\displaystyle x_i </math> to the hyperplane is the projection of <math>\displaystyle (x_i-x_0)</math> onto the unit normal, where <math>\displaystyle x_0 </math> is any point on the hyperplane. <br/>Since <math>\displaystyle \beta </math> is not necessarily of unit length, we divide by <math>\displaystyle \|\beta\| </math>:
 
<math>\displaystyle d_i=\frac{\beta^{T}(x_i-x_0)}{\|\beta\|}  </math> <math>\displaystyle i=1,2,....,N </math>
 
<math>\displaystyle d_i=\frac{\beta^{T}x_i-\beta^{T}x_0}{\|\beta\|} </math>
 
by property 2
 
<math>\displaystyle d_i=\frac{\beta^{T}x_i+\beta_0}{\|\beta\|} </math>
 
 
 
[[File:4.jpg|350px|thumb|right|Figure 28.4]]
 
 
We had <math>\displaystyle  Margin=min(y_id_i)</math>      <math>\displaystyle i=1,2,....,N </math>, and since we now know how to compute <math>\displaystyle  d_i \Rightarrow</math>
 
<math>\displaystyle  Margin=min\{y_i\frac{\beta^{T}x_i+\beta_0}{\|\beta\|}\} </math>
 
Suppose <math>\displaystyle x_i </math> is not on the hyperplane
 
<math>\displaystyle y_i(\beta^{T}x_i+\beta_0)>0 </math>
 
<math>\displaystyle y_i(\beta^{T}x_i+\beta_0)\geq c </math> for some <math>\displaystyle c>0 </math> (such a <math>\displaystyle c</math> exists because there are finitely many training points)
 
 
<math>\displaystyle y_i(\frac{\beta^{T}x_i}{c}+\frac{\beta_0}{c})\geq1</math>
 
This is known as the canonical representation of the decision hyperplane.
 
For <math>\displaystyle \beta  </math> only the direction is important, so rescaling to <math>\displaystyle \frac{\beta}{c}  </math> and <math>\displaystyle \frac{\beta_0}{c}  </math> does not change the direction, and the hyperplane stays the same.
 
<math>\displaystyle y_i(\beta^{T}x_i+\beta_0)\geq1 </math>
 
<math>\displaystyle  y_i\frac{\beta^{T}x_i+\beta_0}{\|\beta\|}\geq\frac{1}{\|\beta\|} </math>
 
<math>\displaystyle  Margin=\frac{1}{\|\beta\|} </math>

so maximizing the margin is equivalent to minimizing <math>\displaystyle  \|\beta\| </math>.
 
Reference:<br />
Hastie, T., Tibshirani, R., Friedman, J. (2008). The Elements of Statistical Learning, pp. 129-130.
 
====Extension--Multi-class SVM[http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM]====
 
SVM is only directly applicable to the two-class case. We want to generalize this algorithm to multi-class tasks.
 
Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominating approach for doing so is to reduce the single multiclass problem into multiple binary problems. Each of the problems yields a binary classifier, which is assumed to produce an output function that gives relatively large values for examples from the positive class and relatively small values for examples belonging to the negative class. Two common methods to build such binary classifiers are where each classifier distinguishes between (i) one of the labels to the rest (one-versus-all) or (ii) between every pair of classes (one-versus-one). Classification of new instances for one-versus-all case is done by a winner-takes-all strategy, in which the classifier with the highest output function assigns the class (it is important that the output functions be calibrated to produce comparable scores). For the one-versus-one approach, classification is done by a max-wins voting strategy, in which every classifier assigns the instance to one of the two classes, then the vote for the assigned class is increased by one vote, and finally the class with most votes determines the instance classification.
 
=='''Optimizing The Support Vector Machine - November 16th, 2009'''==
So far we have derived the Support Vector Machine for the case where the two classes are separable in the given feature space.  The margin can be written as <math>\,min\{y_id_i\}</math>, the smallest distance of any point from the hyperplane, where <math>\,d_i</math> is the distance and <math>\,y_i</math> supplies the sign.
===Margin Maximizing Problem for the Support Vector Machine===
<math>\,Margin=min\{y_id_i\}</math> can be rewritten as <math>\,min\left\{\frac{y_i\left(\beta^Tx_i+\beta_0\right)}{|\beta|}\right\}</math>. 
<br\>Note that the term <math>\,y_i\left(\beta^Tx_i+\beta_0\right) = 0</math> if <math>\,x_i</math> is on the hyperplane, but <math>\,y_i\left(\beta^Tx_i+\beta_0\right) > 0</math> if <math>\,x_i</math> is not on the hyperplane.
 
This implies <math>\,\exists C>0</math> such that <math>\,y_i\left(\beta^Tx_i+\beta_0\right) \geq C</math>.
 
Divide through by C to produce <math>\,y_i\left(\frac{\beta^T}{C}x_i + \frac{\beta_0}{C}\right) \geq 1</math>. 
 
<math>\,\beta, \beta_0</math> define the hyperplane, and only their direction matters; dividing through by a constant does not change the hyperplane. Thus, by assuming suitably scaled values of <math>\,\beta, \beta_0</math>, we eliminate C, so that <math>\,y_i\left(\beta^Tx_i+\beta_0\right) \geq 1</math>; that is, the lower bound on <math>\,y_i\left(\beta^Tx_i+\beta_0\right)</math> is <math>\displaystyle 1</math>.
 
Now, in order to maximize the margin <math>\,\frac{1}{|\beta|}</math>, we simply need to minimize <math>\,|\beta|</math>.
 
In other words, our optimization problem is now to find the minimum of <math>\,|\beta|</math>, under the constraint that <math>\,min_i\{y_i(\beta^Tx_i+\beta_0)\} = 1</math>.
 
Note that we're dealing with the norm of <math>\,\beta</math>.  There are many possible choices of norm, in general the [http://en.wikipedia.org/wiki/P-norm#p-norm p-norm]. The 1-norm of a vector is simply the sum of the absolute values of its elements (also known as the taxicab or Manhattan distance); it is sometimes preferred, but has a discontinuity in its derivative.  The 2-norm, or Euclidean norm (the intuitive measure of the length of a vector), is easier to work with: <math>\,\|\beta\|_2 = (\beta^T\beta)^{1/2}</math>.  For convenience, we will minimize <math>\,\frac{1}{2}\|\beta\|^2 = \frac{1}{2}\beta^T\beta</math>, where the constant 1/2 has been added for later simplification, and minimizing the squared norm is the same as minimizing the norm itself.
 
This is an example of a quadratic programming problem and we will minimize a quadratic function subject to linear inequality constraints.
 
 
====Writing Lagrangian Form of Support Vector Machine====
The Lagrangian form is introduced to ensure that the optimization conditions are satisfied and to find an optimal solution (the optimal saddle point of the Lagrangian for this classic quadratic optimization). The problem will be solved in dual space by introducing the <math>\,\alpha_i</math> as dual variables; this is in contrast to solving the problem in primal space as a function of the betas.  A [http://www.cs.wisc.edu/dmi/lsvm/ simple algorithm] for iteratively solving the Lagrangian has been found to run well on very large data sets, making SVM more usable.  Note that this algorithm is intended to solve Support Vector Machines with some tolerance for errors - not all points are necessarily classified correctly.  Several papers by Mangasarian explore different algorithms for solving SVM.
 
<math>\,L(\beta,\beta_0,\alpha) = \frac{1}{2}\|\beta\|^2 - \sum_{i=1}^n{\alpha_i\left(y_i(\beta^Tx_i+\beta_0)-1\right)}</math>.  To find the optimal value, set the derivative equal to zero.
 
<math>\,\frac{\partial L}{\partial \beta} = 0</math>, <math>\,\frac{\partial L}{\partial \beta_0} = 0</math>.  Note that <math>\,\frac{\partial L}{\partial \alpha_i}</math> is equivalent to the constraints <math>\left(y_i(\beta^Tx_i+\beta_0)-1\right) \geq 0, \,\forall\, i</math>
 
First, <math>\,\frac{\partial L}{\partial \beta} = \frac{\partial}{\partial \beta}\frac{1}{2}\|\beta\|^2 - \sum_{i=1}^n{\left\{\frac{\partial}{\partial \beta}(\alpha_iy_i\beta^Tx_i)+\frac{\partial}{\partial \beta}\alpha_iy_i\beta_0-\frac{\partial}{\partial \beta}\alpha_iy_i\right\}}</math>
 
: <math>\frac{\partial}{\partial \beta}\frac{1}{2}\|\beta\|^2 = \beta</math>.
 
: <math>\,\frac{\partial}{\partial \beta}(\alpha_iy_i\beta^Tx_i) = \alpha_iy_ix_i</math>
 
: <math>\,\frac{\partial}{\partial \beta}\alpha_iy_i\beta_0 = 0</math>.
 
: <math>\,\frac{\partial}{\partial \beta}\alpha_iy_i = 0</math>.
 
So this simplifies to <math>\,\frac{\partial L}{\partial \beta} = \beta - \sum_{i=1}^n{\alpha_iy_ix_i} = 0</math>.  In other words,
 
<math>\,\beta = \sum_{i=1}^n{\alpha_iy_ix_i}</math>, <math>\,\beta^T = \sum_{i=1}^n{\alpha_iy_ix_i^T}</math>
 
Similarly, <math>\,\frac{\partial L}{\partial \beta_0} = -\sum_{i=1}^n{\alpha_iy_i} = 0</math>.
 
This allows us to rewrite the Lagrangian without <math>\,\beta</math>.
 
<math>\,\frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} - \sum_{i=1}^n{\alpha_i\left[y_i\left(\sum_{j=1}^n{\alpha_jy_jx_j^Tx_i} + \beta_0\right) - 1\right]}</math>. 
 
Because <math>\,\sum_{i=1}^n{\alpha_iy_i} = 0</math>, and <math>\,\beta_0</math> is constant, <math>\,\sum_{i=1}^n{\alpha_iy_i\beta_0} = 0</math>.  So this simplifies further, to
 
<math>L(\alpha) = \,-\frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} + \sum_{i=1}^n{\alpha_i}</math>
This is the dual representation of the maximum-margin problem.
 
Because <math>\,\alpha_i</math> is the Lagrange multiplier, <math>\,\alpha_i \geq 0 \forall i</math>.
 
This is a much simpler optimization problem
 
====Extension:Global Optimization of Support Vector Machines(Using Genetic Algorithms for Bankruptcy Prediction)====
 
One of the most important research issues in finance is building accurate corporate bankruptcy prediction models since they are essential for the
risk management of financial institutions. Thus, researchers have applied various data-driven approaches to enhance prediction performance including statistical and artificial intelligence techniques. Recently, support vector machines(SVMs) are becoming popular because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. In addition, they don’t require huge training samples and have little possibility of overfitting. However, in order to use SVM, a user should determine several factors such as the parameters of a kernel function, appropriate feature subset, and proper instance subset by heuristics, which hinders accurate prediction results when using SVM.
 
A paper [[http://zoe.bme.gatech.edu/~klee7/docs/iconip_Simultaneous_Opt_of_SVM_GA_for_data_prediction.pdf]] proposes a novel approach to enhance the prediction performance of SVM for the prediction of financial distress. The suggestion is the simultaneous optimization of the feature selection, the instance selection, and the parameters of a kernel function for SVM by using genetic algorithms (GAs). The authors apply their model to a real-world case, and experimental results show that the prediction accuracy of conventional SVM may be improved significantly by using their model.
 
====Extension: Finding Optimal Parameter Values ====
 
The accuracy of an SVM model depends on the selection of the model parameters. DTREG provides two methods for finding optimal parameter values:

* Grid search: tries values of each parameter across the specified search range using geometric steps.
* Pattern search (also called compass search or line search): starts at the center of the search range and makes trial steps in each direction for each parameter.
 
If the fit of the model improves, the search center moves to the new point and the process is repeated. If no improvement is found, the step size is reduced and the search is tried again. The pattern search stops when the search step size is reduced to a specified tolerance.
 
=== Positives and Negatives When Optimizing SVM[http://www.cse.unr.edu/~bebis/MathMethods/SVM/lecture.pdf]===
 
* (Pos) Appears to avoid overfitting in high dimensional spaces and generalize well using a small training set (the complexity of SVM is characterized by the number of support vectors rather than the dimensionality of the transformed space -- no formal theory to justify this).
 
* (Pos) Global optimization method, no local optima (SVM are based on exact optimization, not approximate methods).
 
* (Neg) Applying trained classifiers can be expensive.
 
=='''The Support Vector Machine algorithm - November 18, 2009'''==
===[http://en.wikipedia.org/wiki/Lagrange_multipliers Lagrange Duality]===
In convex optimization, consider the primal optimization problem:
 
<math>\,\min_\omega</math>  <math>\,f(\omega)</math>
 
<math>\,s.t.</math>  <math>\,g_i(\omega) \leq 0</math>
 
<math>\,h_i(\omega) = 0</math>
 
Define the generalized Lagrangian to be
 
<math>L(\omega,\alpha,\beta) = f(\omega)+\sum_{i=1}^k\alpha_ig_i(\omega)+\sum_{i=1}^k\beta_ih_i(\omega)</math>
 
Then the dual optimization problem is
 
<math>\ \max_{\alpha,\beta:\,\alpha_i \geq 0} </math> <math>\ \min_{\omega}</math> <math>\ L(\omega,\alpha,\beta) </math>
 
Now, instead of solving the primal problem, we can solve the dual problem without changing the solution, as long as the Karush-Kuhn-Tucker (KKT) conditions are satisfied:
 
<math>\frac{\partial}{\partial\omega_i}L(\omega,\alpha,\beta)=0</math>
 
<math>\frac{\partial}{\partial\beta_i}L(\omega,\alpha,\beta)=0</math>
 
<math>\,\alpha_ig_i(\omega)=0</math>
 
<math>g_i(\omega) \leq 0</math>
 
<math>\,\alpha_i \geq 0</math>
 
We are interested in the dual form because it gives a bound on the primal problem and in some cases is easier to solve. For more information about convex optimization, see the book by Boyd:
[http://www.stanford.edu/~boyd/cvxbook/]
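
As a concrete instance of this general formulation (this identification is implied by the earlier derivation rather than spelled out in the lecture), the separable maximum-margin problem has <math>\,f(\omega) = \frac{1}{2}\|\beta\|^2</math> with <math>\,\omega = (\beta, \beta_0)</math>, and <math>\,g_i(\omega) = 1 - y_i(\beta^Tx_i+\beta_0) \leq 0</math> for <math>\,i=1,\dots,n</math>, with no equality constraints <math>\,h_i</math>. The generalized Lagrangian is then

<math>\,L(\omega,\alpha) = \frac{1}{2}\|\beta\|^2 - \sum_{i=1}^n \alpha_i\left(y_i(\beta^Tx_i+\beta_0)-1\right),</math>

which is exactly the Lagrangian written out in the previous lecture.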
 
===Solving the Lagrangian===
 
Continuing from the above derivation, we now have the objective that we need to optimize over <math>\,\alpha</math>, as well as two constraints.
 
The Support Vector Machine problem boils down to:
 
<math>\max_{\alpha} L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}}</math>
:such that <math>\alpha_i \geq 0</math>
:and <math>\sum_{i=1}^n{\alpha_i y_i} = 0</math>
 
We are solving for <math>\,\alpha</math>, which is our only unknown. Once we know <math>\,\alpha</math>, we can easily find <math>\,\beta</math> and <math>\,\beta_0</math> (see the Support Vector algorithm below for complete details).
 
If we examine the Lagrangian equation, we can see that <math>\,\alpha</math> is multiplied by itself; that is, the Lagrangian is quadratic with respect to <math>\,\alpha</math>. Our constraints are linear. This is therefore a problem that can be solved through [http://en.wikipedia.org/wiki/Quadratic_programming quadratic programming] techniques. We will examine how to do this in Matlab shortly.
 
We can write the Lagrangian equation in matrix form:
 
<math>L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha}</math>
:such that <math>\underline{\alpha} \geq \underline{0}</math>
:and <math>\underline{\alpha}^T\underline{y} = 0</math>
 
Where:
* <math>\underline{\alpha}</math> denotes an <math>\,n \times 1</math> vector; <math>\underline{\alpha}^T = [\alpha_1, ..., \alpha_n]</math>
* Matrix <math>S</math> has entries <math>S_{ij} = y_iy_jx_i^Tx_j = (y_ix_i)^T(y_jx_j)</math>
* <math>\,\underline{0}</math> and <math>\,\underline{1}</math> are vectors containing all 0s or all 1s respectively
 
Using this matrix notation, we can use Matlab's built in quadratic programming routine, [http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/quadprog.html quadprog].
 
===Quadprog example===
 
Let's use <code>quadprog</code> to find the solution to <math>\,L(\alpha)</math>.
 
Matlab's <code>quadprog</code> function minimizes an equation of the following form:
:<math>\min_x\frac{1}{2}x^THx+f^Tx</math>
:such that: <math>\,A \cdot x \leq b</math>, <math>\,A_{eq} \cdot x = b_{eq}</math> and <math>\,lb \leq x \leq ub</math>
 
We can now see why we kept the <math>\frac{1}{2}</math> constant in the original derivation of the equation.
 
The function is called as such: <code>x = quadprog(H,f,A,b,Aeq,beq,lb,ub)</code>. The variables correspond to values in the equation above.
 
We can now find the solution to <math>\,L(\alpha)</math>. Note that <code>quadprog</code> minimizes, while we want to maximize <math>\,L(\alpha)</math>; maximizing <math>\,L(\alpha)</math> is the same as minimizing <math>\,-L(\alpha) = \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} - \underline{\alpha}^T\underline{1}</math>, so we pass <math>\,H = S</math> and <math>\,f = -\underline{1}</math> to <code>quadprog</code>.
 
We'll use a simple one-dimensional data set in which x is essentially -1 or +1 plus Gaussian noise, with corresponding labels y = -1 and +1. (Note: you could easily put the values straight into the quadprog call; they are separated for clarity.)
 
 x = [mvnrnd([-1],[0.01],100); mvnrnd([1],[0.01],100)]';  % 1 x 200 data matrix (d = 1)
 y = [-ones(100,1); ones(100,1)];                         % 200 x 1 labels
 L = x .* repmat(y', size(x,1), 1);                       % columns are y_i * x_i
 S = L' * L;                                              % S(i,j) = y_i y_j x_i' x_j
 f = -ones(200,1);                                        % quadprog minimizes (1/2)a'Sa + f'a
 A = [];                                                  % no extra inequality constraints
 b = [];
 Aeq = y';                                                % equality constraint sum(alpha .* y) = 0
 beq = 0;
 lb = zeros(200,1);                                       % alpha >= 0
 ub = [];                                                 % there is no upper bound
 alpha = quadprog(S,f,A,b,Aeq,beq,lb,ub);
 
This gives us the optimal <math>\,\alpha</math>. (With the sign of <math>\,f</math> flipped as above and an explicit zero lower-bound vector, the returned <math>\,\alpha</math> should be non-negative up to numerical tolerance; calling <code>quadprog</code> with <math>\,f = \underline{1}</math> solves a different problem and can produce negative values.)
 
===Examining K.K.T. conditions===
 
[http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions Karush-Kuhn-Tucker conditions] ([http://webrum.uni-mannheim.de/mokuhn/public/KarushKuhnTucker.pdf more info]) give us a closer look into the Lagrangian equation and the associated conditions.
 
Suppose we are looking to minimize <math>\,f(x)</math> such that <math>\,g_i(x) \geq 0, \forall{i}</math>. If <math>\,f</math> and the <math>\,g_i</math> are differentiable, then the ''necessary'' conditions for <math>\hat{x}</math> to be a local minimum are:
 
# At the optimal point, <math>\frac{\partial L}{\partial \hat{x}} = 0</math>; i.e. <math>f'(\hat{x}) - \sum{\alpha_ig_i'(\hat{x})}=0</math> (Stationarity)
# <math>\alpha_i \geq 0</math>. (Dual Feasibility)
# <math>\alpha_ig_i(\hat{x}) = 0, \forall{i}</math> (Complementary Slackness)
# <math>g_i(\hat{x}) \geq 0</math> (Primal Feasibility)
 
If any of these conditions is violated, then <math>\hat{x}</math> cannot be an optimal solution.
 
These are all trivial except for condition 3. Let's examine it further in our support vector machine problem.
 
===Support Vectors===
 
Support vectors are the training points that determine the optimal separating hyperplane that we seek. Also, they are the most difficult points to classify or the most informative for the classification.
 
In our case, the <math>g_i(\hat{x})</math> function is:
:<math>\,g_i(x) = y_i(\beta^Tx_i+\beta_0)-1</math>
 
Substituting <math>\,g_i</math> into KKT condition 3, we get <math>\,\alpha_i[y_i(\beta^Tx_i+\beta_0)-1] = 0</math>. <br\>In order for this condition to be satisfied either <br/><math>\,\alpha_i= 0</math> or <br/><math>\,y_i(\beta^Tx_i+\beta_0)=1</math>
 
In the canonical representation, all points <math>\,x_i</math> satisfy <math>\,y_i(\beta^Tx_i+\beta_0) \geq 1</math>; that is, every point lies on or outside the margin.
 
'''Case 1: a point strictly outside the margin, <math>\,y_i(\beta^Tx_i+\beta_0) > 1</math>'''
 
If <math>\,y_i(\beta^Tx_i+\beta_0) > 1 \Rightarrow \alpha_i = 0</math>.
 
If point <math>\, x_i</math> is not on the margin, then the corresponding <math>\,\alpha_i=0</math>.
 
'''Case 2: a point on the margin, <math>\,y_i(\beta^Tx_i+\beta_0) = 1</math>'''
 
If <math>\,\alpha_i > 0 \Rightarrow y_i(\beta^Tx_i+\beta_0) = 1</math>
<br\>So any point <math>\, x_i</math> with <math>\,\alpha_i>0</math> must lie on the margin.
 
 
Points on the margin, with corresponding <math>\,\alpha_i > 0</math>, are called '''''support vectors'''''.
 
===Using support vectors===
 
Support vectors are important because points away from the margin (those with <math>\,\alpha_i = 0</math>) have no influence on the solution, which makes the support vector machine insensitive to such points. If <math>\,\alpha_i = 0</math>, the corresponding point contributes nothing to <math>\,\beta</math> and hence nothing to the solution of the SVM problem; only points on the margin, the support vectors, contribute. Hence the model given by the SVM is entirely defined by the set of support vectors, a subset of the entire training set. This is interesting because, in contrast to the neural network methods seen previously (and to classical statistical learning more generally), the configuration of the model does not need to be fully specified in advance: the training set and the algorithm determine the support vectors, instead of fitting a fixed set of parameters using CV or other error-minimization procedures.
 
References:
Wang, L. (2005). Support Vector Machines: Theory and Applications. Springer, p. 3.
 
====The support vector machine algorithm====
 
# Solve the quadratic programming problem: <math>\max_{\alpha} L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}}</math> such that <math>\alpha_i \geq 0</math> and <math>\sum_{i=1}^n{\alpha_i y_i} = 0</math>
## Use Matlab's quadprog to find the optimal <math>\,\underline{\alpha}</math>
# Find <math>\beta = \sum_{i=1}^n{\alpha_iy_i\underline{x_i}}</math>
# Find <math>\,\beta_0</math> by choosing a support vector (a point with <math>\,\alpha_i > 0</math>) and solving <math>\,y_i(\beta^Tx_i+\beta_0) = 1</math> (a minimal MATLAB sketch of steps 2 and 3 follows)
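
The following is a minimal sketch of steps 2 and 3, assuming <code>x</code> is a <math>\,d \times n</math> data matrix, <code>y</code> is an <math>\,n \times 1</math> label vector and <code>alpha</code> is the vector returned by <code>quadprog</code>; the tolerance used to detect support vectors is an arbitrary choice.

 beta  = x * (alpha .* y);               % beta = sum_i alpha_i y_i x_i
 sv    = find(alpha > 1e-5);             % indices of the support vectors (alpha_i > 0)
 i     = sv(1);                          % pick any support vector
 beta0 = y(i) - beta' * x(:,i);          % from y_i (beta'x_i + beta0) = 1 and y_i in {-1,+1}
 % a new point xnew (d x 1) is then classified as sign(beta' * xnew + beta0)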
 
===Example in Matlab===
 
The following code, taken verbatim from the lecture, shows how to use Matlab's built-in SVM routines (found in the Bioinformatics Toolbox) to do classification through support vector machines.
 
load 2_3;
[U,Y] = princomp(X');
data = Y(:,1:2);
l = [-ones(1,200) ones(1,200)];
[train,test] = crossvalind('holdOut',400);
% Gives indices of train and test; so, train is a matrix of 0 or 1, 1 where the point should be used as part of the training set
svmStruct = svmtrain(data(train,:), l(train), 'showPlot', true);
 
[[File:Svm1.png|frame|center|The plot produced by training on some of the 2_3 data's first two features.]]
 
yh = svmclassify(svmStruct, data(test,:), 'showPlot', true);
 
[[File:Svm2.png|frame|center|The plot produced by testing some of the 2_3 data.]]
 
===SVM in Gene Selection===
DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues.
 
====Extention:Support Vector Machines for Pattern Recognition====
[http://research.microsoft.com/en-us/um/people/cburges/papers/svmtutorial.pdf]
This paper discusses linear Support Vector Machines for separable and non-separable data by working through a non-trivial example in detail; it also describes a mechanical analogy and discusses when SVM solutions are unique and when they are global. From this paper we can learn how support vector training can be practically implemented, and about the kernel mapping technique used to construct
SVM solutions that are nonlinear in the data.
 
Results of some experiments which were inspired by these arguments are also presented.
The author gives numerous examples and proofs of most of the key theorems, and hopes that readers will find even the older material cast in a fresh light, since the paper also includes some new material.
 
=== Limitation of SVM algorithm [http://www.cse.unr.edu/~bebis/MathMethods/SVM/lecture.pdf]===
 
* The biggest limitation of SVM lies in the choice of the kernel (the best choice of kernel for a given problem is still a research problem).
 
* A second limitation is speed and size (mostly in training - for large training sets, it typically selects a small number of support vectors, thereby minimizing the computational requirements during testing).
 
=='''Non-linear hypersurfaces and Non-Separable classes - November 20, 2009'''==
==='''[http://en.wikipedia.org/wiki/Kernel_trick Kernel Trick]'''===
We talked about the curse of dimensionality at the beginning of this course, however, we now turn to the power of high dimensions in order to find a linearly separable hyperplane between two classes of data points. To understand this, imagine a two dimensional prison where a two dimensional person is constrained. Suppose magically we give the person a third dimension, then he can escape from the prison. In other words, the prison and the person are linearly separable now with respect to the third dimension. The intuition behind the "kernel trick" is basically to map data to a higher dimension so that they are linearly separable by a hyperplane.
 
[[File:Point_2d.png|200px|thumb|right|Imagine the point is a person.  They're stuck.]]
[[File:Point_3d.png|200px|thumb|right|Escape through the third dimension!]]
[[File:Unsep.png|200px|thumb|right|It's not possible to put a hyperplane through these points.]]
[[File:Sep2.png|200px|thumb|right|After a simple transformation, a perfect classification plane can be found.]]
 
The original optimal hyperplane algorithm proposed by [http://en.wikipedia.org/wiki/Vladimir_Vapnik Vladimir Vapnik] in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The algorithm is very similar, except that every dot product is replaced by a non-linear kernel function as below. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. We have seen SVM as a linear classification problem that finds the maximum margin hyperplane in the given input space. However, for many real world problems a more complex decision boundary is required. The following simple method was devised in order to solve the same linear classification problem but in a higher dimensional space, a 'feature space', under which the maximum margin hyperplane is better suited.
 
Let <math>\,\phi</math> be a mapping,
 
<math>\phi:\mathbb{R}^d \rightarrow \mathbb{R}^D </math><br /><br />
 
We wish to find a <math>\,\phi</math> such that our data will be suited for separation by a hyperplane. Given this function, we are lead to solve the previous constrained quadratic optimization on the transformed dataset,<br /><br />
 
<math>\max_{\alpha} L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_j\phi(x_i)^T\phi(x_j)}}</math> such that <math>\alpha_i \geq 0</math> and <math>\sum_{i=1}^n{\alpha_i y_i} = 0</math><br /><br />
 
The solution to this optimization problem is now well known; however a workable <math>\,\phi</math> must be determined. Possibly the largest drawback in this method is that we must compute the inner product of two vectors in the high dimensional space. As the number of dimensions in the initial data set increases, the inner product becomes computationally intensive or impossible.
 
However, we have a very useful result that says that there exists a class of functions, <math>\,\Phi</math>, which satisfy the above requirements and that for any function <math>\,\phi \in \Phi</math>,
 
<math>\,\phi(x_i)^T\phi(x_j) = K(x_i,x_j) </math><br /><br />
 
Where K is the kernel function in the input space satisfying [http://en.wikipedia.org/wiki/Mercer%27s_condition Mercer's condition] (to guarantee that it indeed corresponds to certain mapping function <math>\,\phi</math>).  As a result, if the objective function depends on inner products but not on coordinates, we can always use the kernel function to implicitly calculate in the feature space without storing the huge data. Not only does this solve the computation problems but it no longer requires us to explicitly determine a specific mapping function in order to use this method. In fact, it is now possible to use an infinite dimensional feature space in SVM without even knowing the function <math>\,\phi</math>.
 
 
====Popular kernel choices in the SVM====
 
The SVM relies only on the inner product between vectors, <math>\ x_i^Tx_j  </math>.
If every data point is mapped into a high-dimensional space via some transformation <math>\ \phi </math>, the inner product becomes
<math>\ K(x_i,x_j)= \phi(x_i)^T\phi (x_j)</math>;
<math>\ K(x_i,x_j) </math>  is called the kernel function.
For the SVM, we only need to specify the kernel <math>\ K(x_i,x_j) </math>, without needing to know the corresponding non-linear mapping <math>\ \phi (x)</math>.
 
There are many types of kernels that can be used in Support Vector Machines models. These include linear, polynomial, radial basis function (RBF) and sigmoid functions.
 
Linear: <math>\ K(\underline{x}_{i},\underline{x}_{j})= \underline{x}_{i}^T\underline{x}_{j}</math>,
 
Polynomial: <math>\ K(\underline{x}_{i},\underline{x}_{j})= (\gamma\underline{x}_{i}^T\underline{x}_{j}+r)^{d}, \gamma > 0</math>,
 
Radial Basis: <math>\ K(\underline{x}_{i},\underline{x}_{j})= exp(-\gamma \|\underline{x}_i - \underline{x}_j\|^{2}), \gamma > 0</math>,
 
Gaussian kernel: <math>\ K(x_i,x_j)=exp(\frac{-||x_i-x_j||^2}{2\sigma^2 })</math>,
 
Two-layer perceptron: <math>\ K(x_i,x_j)=tanh(\alpha x_i^Tx_j+\beta)</math>,
 
Sigmoid: <math>\ K(\underline{x}_{i},\underline{x}_{j})= tanh(\gamma\underline{x}_{i}^T\underline{x}_{j}+r)</math>.
 
Here, <math>\ \gamma </math>, <math>\ r</math>, and <math>\ d</math> are all kernel parameters.
 
The RBF is by far the most popular choice of kernel type used in Support Vector Machines. This is mainly because of its localized and finite response across the entire range of the real x-axis. The art of flexible modeling using basis expansions consists of picking an appropriate family of basis functions, and then controlling the complexity of the representation by selection, regularization, or both. Some families of basis functions have elements that are defined locally; for example, <math>\displaystyle B</math>-splines are defined locally in <math>\displaystyle R</math>. If more flexibility is desired in a particular region, then that region needs to be represented by more basis functions (which in the case of <math>\displaystyle B</math>-splines translates to more knots). Kernel methods achieve flexibility by fitting simple models in a region local to the target point <math>\displaystyle x_0</math>. Localization is achieved via a weighting kernel <math>\displaystyle K</math>, and individual observations receive weights <math>\displaystyle K(x_0,x_i)</math>. RBF networks combine these ideas by treating the kernel functions as basis functions.
 
 
Once we have chosen the Kernel function, we don't need to figure out what <math>\,\phi</math> is, just use <math>\,\phi(\underline{x}_i)^T\phi(\underline{x}_j) = K(\underline{x}_i,\underline{x}_j) </math> to replace <math>\,\underline{x}_i^T\underline{x}_j</math>
 
Since the appropriate transformation depends on the shape of the data, there is no fully automated way to choose the kernel; in practice it is chosen by trial and error (for example via cross-validation) or manually.
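
As an illustration, here is a minimal MATLAB sketch (not from the lecture, with assumed variable names) of how a Gaussian kernel matrix replaces the inner products in the dual objective:

 % x is a d x n data matrix, y an n x 1 label vector; sigma is the kernel width.
 sigma = 1;
 n  = size(x,2);
 sq = sum(x.^2,1);                                      % 1 x n squared norms
 D2 = repmat(sq',1,n) + repmat(sq,n,1) - 2*(x'*x);      % pairwise squared distances
 K  = exp(-D2 ./ (2*sigma^2));                          % K(i,j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
 S  = (y*y') .* K;                                      % replaces S(i,j) = y_i y_j x_i' x_j in the dual
 % classification of a new point xnew then uses sum_i alpha_i y_i K(x_i, xnew) + beta0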
 
==='''Mercer's Theorem in detail'''===
Let <math>\,\phi</math> be a mapping to a high dimensional [http://en.wikipedia.org/wiki/Hilbert_space Hilbert space]  <math>\,H</math><br />
 
 
<math>\phi:x \in \mathbb{R}^d \rightarrow H </math><br /><br />
 
The transformed coordinates can be defined as,<br />
 
<math>\phi_1(x)\dots\phi_d(x)\dots  </math><br /><br />
 
By Hilbert - Schmidt theory we can represent an inner product in Hilbert space as,<br /><br />
 
<math>\,\phi(x_i)^T\phi(x_j) = \sum_{r=1}^{\infty}a_r\phi_r(x_i)\phi_r(x_j) = K(x_i,x_j), \ a_r \ge 0 </math><br /><br />
where <math>\,K</math> is symmetric; Mercer's theorem gives necessary and sufficient conditions on <math>\,K</math> for it to satisfy the above relation.<br><br>
 
'''[http://en.wikipedia.org/wiki/Mercer%27s_theorem Mercer's Theorem]'''
 
Let C be a compact subset of <math>\mathbb{R}^d</math> and K a function <math> \in L^2(C) </math>, if<br /><br />
 
<math>\, \int_C\int_C K(u,v)g(u)g(v)dudv \ge 0, \ \forall g \in L^2(C)</math> <br /><br />
 
then,<br /><br />
 
<math>\sum_{r=1}^{\infty}a_r\phi_r(u)\phi_r(v)</math> converges absolutely and uniformly to a symmetric function <math>\,K(u,v)</math>
 
References:
Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons, p. 423.<br />

Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London A, 209:415-446.
 
==='''Kernel Functions'''===
There are various kernel functions, for example:
 
* Linear kernel: <math>\,k(x,y)=x \cdot y</math>
* Polynomial kernel: <math>\,k(x,y)=(x \cdot y)^d</math>
* Gaussian kernel: <math>\,k(x,y)=e^{-\frac{|x-y|^2}{2\sigma^2}}</math>
 
If <math>\,X</math> is a <math>\,d \times n</math> matrix in the original space, and <math>\,\phi(X)</math> is a <math>\,D \times n</math> matrix in the [http://en.wikipedia.org/wiki/Hilbert_space Hilbert space] (good explanation video: [http://www.youtube.com/watch?v=V2pBdH7YzX0 part 1] [http://www.youtube.com/watch?v=YRY5xlk3TC0 part 2]), then <math>\,\phi^T(X) \cdot \phi(X)</math> is an <math>\,n \times n</math> matrix.
The inner product can be interpreted as a correlation or similarity measure between data points. This gives us some insight into how to choose the kernel: the choice depends on prior knowledge of the problem and on how we believe the similarity of our data should be measured. In practice, the Gaussian (RBF) kernel usually works well. Besides the most common kernel functions mentioned above, many specialized kernels have been suggested for different problem domains such as text classification, gene classification and so on.
 
These kernel functions can be applied to many algorithms to derive the "kernel version". For example, kernel PCA, kernel LDA, etc.
 
==='''SVM: non-separable case'''===
We have seen how SVMs are able to find an optimally separating hyperplane of two separable classes of data, in which case the margin contains no data points. However, in the real world, data of different classes are usually mixed together at the boundary and it's hard to find a perfect boundary to totally separate them. To address this problem, we slacken the classification rule to allow data cross the margin. Mathematically the problem becomes,
:<math>\min_{\beta, \beta_0} \frac{1}{2}|\beta|^2</math>
:<math>\,y_i(\beta^Tx_i+\beta_0) \geq 1-\xi_i</math>
:<math>\xi_i \geq 0</math>
 
Now each data point can have some error <math>\,\xi_i</math>. However, we only want data to cross the boundary when they have to and make the minimum sacrifice; thus, a penalty term is added correspondingly in the objective function to constrain the number of points that cross the margin. The optimization problem now becomes:
[[File:non-separable.JPG|350px|thumb|right|Figure non-separable case]]
 
:<math>\min_{\beta,\beta_0,\xi} \frac{1}{2}|\beta|^2+\gamma\sum_{i=1}^n{\xi_i}</math>
:<math>\,s.t.</math>  <math>y_i(\beta^Tx_i+\beta_0) \geq 1-\xi_i</math>
:<math>\xi_i \geq 0</math>
 
<br\>Note that <math>\,\xi_i</math> is not necessarily smaller than one, which means data can not only enter the margin but can also cross the separating hyperplane.
 
<br\>Note that letting <math>\,\gamma \rightarrow \infty </math> is feasible only in the separable case, where all <math>\,\xi_i = 0</math>.  In general, the higher <math>\,\gamma</math> is, the more heavily margin violations are penalized, so fewer points are allowed to enter or cross the margin.
 
Aside: more information about SVMs and kernels.

SVMs are currently among the best performers on many benchmark datasets and have been extended to a number of tasks such as regression. The kernel trick is arguably the most attractive aspect of SVMs; the idea has since been applied to many other learning models in which only inner products appear, and these are collectively called 'kernel' methods. Tuning SVMs remains a main research focus: how does one choose an optimal kernel? The kernel should match the smooth structure of the data.
 
==Support Vector Machine algorithm for non-separable cases - November 23, 2009==
 
With the formulation above, we can form the Lagrangian, apply the KKT conditions, and come up with a new function to optimize. As we will see, the equation that we optimize in the SVM algorithm for non-separable data sets is the same as in the separable case, with slightly different constraints.
 
===Forming the Lagrangian===
In this case we have two sets of constraints in the [http://en.wikipedia.org/wiki/Lagrangian Lagrangian], and therefore we optimize with respect to two sets of dual variables, <math>\,\alpha</math> and <math>\,\lambda</math>:<br>
:<math>L: \frac{1}{2} |\beta|^2 + \gamma \sum_{i=1}^n \xi_i - \sum_{i=1}^n \alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]-\sum_{i=1}^n \lambda_i \xi_i</math>
:<math>\alpha_i \geq 0, \lambda_i \geq 0</math>
 
===Applying KKT conditions[http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions]===
# <math>\frac{\partial L}{\partial p} = 0</math> at an optimal solution <math>\, \hat p</math>, for each primal variable <math>\,p = \{\beta, \beta_0, \xi\}</math><br><math>\frac{\partial L}{\partial \beta}=\beta - \sum_{i=1}^n \alpha_i y_i x_i = 0 \Rightarrow \beta=\sum_{i=1}^n\alpha_i y_i x_i</math> <br\><math>\frac{\partial L}{\partial \beta_0}=-\sum_{i=1}^n \alpha_i y_i =0 \Rightarrow \sum_{i=1}^n \alpha_i y_i =0</math> since the sign does not make a difference<br><math>\frac{\partial L}{\partial \xi_i}=\gamma - \alpha_i - \lambda_i \Rightarrow \gamma = \alpha_i+\lambda_i</math>.  This is the only new condition added here
#<math>\,\alpha_i \geq 0, \lambda_i \geq 0</math>, dual feasibility
#<math>\alpha_i g_i(\hat p) = 0</math>, complementary slackness: <math>\,\alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]=0</math> and <math>\,\lambda_i \xi_i=0</math>
#<math>\, g_i(\hat p) \geq 0</math>, primal feasibility: <math>\,y_i( \beta^T x_i+ \beta_0)-1+ \xi_i \geq 0</math> and <math>\,\xi_i \geq 0</math>
 
===Putting it all together===
 
With our KKT conditions and the Lagrangian equation, we can now use quadratic programming to find <math>\,\alpha</math>. <br\> As in the separable case, after applying the KKT conditions we substitute the primal variables, expressed in terms of the dual variables, back into the Lagrangian and simplify.
 
 
In matrix form, we want to maximize the following objective:
:<math>L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha}</math>
:<math>\,s.t.</math> <math>\underline{0} \leq \underline{\alpha} \leq \gamma</math>, <math>\underline{\alpha}^T\underline{y} = 0</math>
 
Solving this gives us <math>\,\underline{\alpha}</math>, which we can use to find <math>\,\underline{\beta}</math> as before:
:<math>\,\underline{\beta} = \sum{\alpha_i y_i \underline{x_i}}</math>
 
However, we cannot find <math>\,\beta_0</math> in the same way as before, even if we choose a point with <math>\,\alpha_i > 0</math>, because we do not know the value of <math>\,\xi_i</math> in the equation
:<math>\,y_i(\underline{\beta}^Tx_i + \beta_0) - 1 + \xi_i = 0</math>
 
From our discussion on the KKT conditions, we know that <math>\,\lambda_i \xi_i = 0</math> and <math>\,\gamma = \alpha_i + \lambda_i</math>.
 
So, if <math>\,\alpha_i < \gamma</math> then <math>\,\lambda_i > 0</math> and consequently <math>\,\xi_i = 0</math>.
 
Therefore, we can solve for <math>\,\beta_0</math> if we choose a point where:
:<math>\,0 < \alpha_i < \gamma</math>
 
'''Note'''
* When <math>\,0 < \alpha_i < \gamma</math>, we are considering a point that is on the margin.
* If <math>\,\alpha_i = \gamma</math> then <math>\,\lambda_i = 0</math>, so <math>\,\xi_i</math> is allowed to be positive and we may be dealing with a point that has entered or crossed the margin.
* In this case, the local optimum is also the global optimum: since <math>\,S</math> is positive semidefinite, the objective <math>\,L(\alpha)</math> is concave (equivalently, its negative is convex).
 
====The SVM algorithm for non-separable data sets====
 
The algorithm, then, for non-separable data sets is as follows (a <code>quadprog</code> sketch follows the steps):
 
# Use <code>quadprog</code> (or another quadratic programming technique) to solve the above optimization and find <math>\,\alpha</math>
# Find <math>\,\underline{\beta}</math> by solving <math>\,\underline{\beta} = \sum{\alpha_i y_i x_i}</math>
# Find <math>\,\beta_0</math> by choosing a point where <math>\,0 < \alpha_i < \gamma</math> and then solving <math>\,y_i(\underline{\beta}^Tx_i + \beta_0) - 1 = 0</math>
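
Here is a minimal MATLAB sketch under the same assumed variable names as before (<code>x</code> a <math>\,d \times n</math> data matrix, <code>y</code> an <math>\,n \times 1</math> label vector); the only change from the separable case is the upper bound <math>\,\gamma</math> on <math>\,\alpha</math>, and the value of <math>\,\gamma</math> below is an arbitrary illustration.

 gamma = 1;                                         % penalty parameter (chosen by the user / CV)
 n  = length(y);
 L  = x .* repmat(y', size(x,1), 1);                % columns are y_i * x_i
 S  = L' * L;
 f  = -ones(n,1);                                   % quadprog minimizes (1/2)a'Sa + f'a
 lb = zeros(n,1);
 ub = gamma * ones(n,1);                            % 0 <= alpha_i <= gamma
 alpha = quadprog(S, f, [], [], y', 0, lb, ub);
 beta  = x * (alpha .* y);
 i     = find(alpha > 1e-5 & alpha < gamma - 1e-5, 1);   % a point strictly between the bounds (assumed to exist)
 beta0 = y(i) - beta' * x(:,i);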
 
====Potential drawbacks====
 
Potential drawbacks of the SVM include the following:
 
1.Uncalibrated Class membership probabilities[http://en.wikipedia.org/wiki/Class_membership_probabilities]
 
2.The SVM is only directly applicable for two-class tasks. Therefore, algorithms that reduce the multi-class task to several binary problems have to be applied, see the Multi-class SVM section.
 
3.How to select the kernel function parameters - for Gaussian kernels, the width parameter <math>\,\sigma </math>, and, in regression, the value of <math>\,\epsilon </math> in the <math>\,\epsilon </math>-insensitive loss function - is a problem that has not been entirely solved yet.
 
 
''Some resources'':
 
1. introduction of SVM[http://www.fml.tuebingen.mpg.de/raetsch/lectures/ismb09tutorial/images/WhatIsASVM.pdf]
 
2. SVM in computational biology[http://noble.gs.washington.edu/papers/noble_support.pdf]
 
==Finishing up SVM - November 25, 2009==
 
===Does SVM find a global minimum?===
 
When we discussed KKT conditions, we listed the ''necessary'' conditions for <math>\hat{x}</math> to be a local minimum. However, it would be ideal if we could show that SVM finds a global minimum (unlike, say, neural networks, which may only find a local minimum).
 
Recall that our conditions, for the non-separable case, are <math>\,0 \leq \underline{\alpha} \leq \gamma</math> and <math>\,\underline{\alpha}^T\underline{y} = 0</math>. These are both convex.
 
Our objective is <math>L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha}</math>, to be maximized; equivalently, we minimize <math>\,-L(\alpha) = \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} - \underline{\alpha}^T\underline{1}</math>. Since this is quadratic, it might be convex, but it also may not be; it depends on the matrix <math>\,S</math>. If <math>\,S</math> is a positive semi-definite matrix, then <math>\,-L(\alpha)</math> is convex.
 
Recall that <math>\,S</math> is the product of <math>\,L^TL</math>, where <math>L_{d\times n} = \begin{bmatrix}
y_1x_{11}& & y_nx_{1n} \\
\vdots&\cdots& \vdots\\
y_1x_{d1}& & y_nx_{dn} \\
\end{bmatrix}</math>. Just as the square of any real number is non-negative, a matrix of the form <math>\,L^TL</math> is always positive semi-definite.
 
So, we know that <math>\,S</math> is positive semi-definite. The objective (in its minimization form) is therefore convex, and the SVM algorithm finds a global optimum.
 
== Naive Bayes, Decision Trees, K Nearest Neighbours, Boosting, and Bagging - November 25, 2009 ==
 
Now that we've covered a number of more advanced classification algorithms, we can look at some of the simpler classification algorithms that are usually discussed at the beginning of a discussion on classification.
 
=== [http://en.wikipedia.org/wiki/Naive_Bayes_classifier Naive Bayes Classifiers] ===
 
Recall that one of the major drawbacks of the Bayes classifier was the difficulty in estimating a joint density in a multidimensional space.  Naive Bayes classifiers are one possible solution to this problem.  They are especially popular for problems with high-dimensional feature spaces.
 
A naive Bayes classifier applies a strong independence assumption to the class density <math>\,f_{k}(x)</math>.  It assumes that inputs within each class are conditionally independent.  In other words, it assumes that the value of one feature in a class is unrelated to that of any other feature. 
 
<math>\ f_{k}(x) = \prod_{j=1}^d f_{jk}(x_{j})</math>
 
Each of the d marginal densities can be estimated separately using one-dimensional density estimates.  If one of the components <math>\,x_{j}</math> is discrete then its density can be estimated using a histogram.  We can thus mix discrete and continuous variables in a naive Bayes classifier.
 
Naive Bayes classifiers often perform extremely well in practice despite these 'naive' and seemingly optimistic assumptions.  This is because, while the individual class density estimates may be biased, the bias often does not hurt the posterior probabilities much, especially near the decision boundaries. 
 
It is also possible to train naive Bayes classifiers using maximum likelihood estimation.
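
As an illustration, here is a minimal MATLAB sketch of a Gaussian naive Bayes classifier (not from the lecture; all variable names are assumptions), which estimates a one-dimensional Gaussian per feature and per class:

 % X is an n x d data matrix, y is an n x 1 label vector with values 1,...,K,
 % and xnew is a 1 x d test point.
 K = max(y);
 for k = 1:K
     idx      = (y == k);
     prior(k) = mean(idx);                 % class prior
     mu(k,:)  = mean(X(idx,:), 1);         % per-feature class means
     s2(k,:)  = var(X(idx,:), 0, 1);       % per-feature class variances
 end
 for k = 1:K                               % log posterior (up to a constant) for xnew
     loglik(k) = log(prior(k)) + sum(-0.5*log(2*pi*s2(k,:)) - (xnew - mu(k,:)).^2 ./ (2*s2(k,:)));
 end
 [~, label] = max(loglik);                 % assign the class with the largest posterior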
 
=== Decision Trees ===
 
Decision trees[http://en.wikipedia.org/wiki/Decision_tree] are highly intuitive learning methods that can be thought of as partitioning the feature space into a number of rectangles.  Trees can be used for classification, regression, or both.  Trees map features of a decision problem onto a conclusion, or label.
 
We fit a tree model by minimizing some measure of impurity.  For a single covariate <math>\,X_{1}</math>, we choose a point <math>\,t</math> on the real line that splits it into two sets <math>R_1 = (-\infty,t]</math> and <math>R_2 = (t,\infty)</math> in a way that minimizes impurity.
 
We denote by <math> \hat p_{s}(j) </math> the proportion of observations in <math>\ R_{s}</math> for which <math>\ Y_{i} = j</math>:
 
 
<math> \hat p_{s}(j) = \frac{\sum_{i = 1}^{n} I(Y_{i} = j,X_{i} \in R_{s})}{\sum_{i = 1}^{n} I(X_{i} \in R_{s})}</math>
 
Extension: [http://www.mindtools.com/dectree.html Decision Tree Analysis from Mind Tools]

Useful links on the algorithm, overfitting, and examples: [http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/mlbook/ch3.pdf], [http://robotics.stanford.edu/people/nilsson/MLDraftBook/ch6-ml.pdf], [http://www.autonlab.org/tutorials/dtree18.pdf]
 
==== Common Node Impurity Measures ====
 
Some common node impurity measures are:
 
* Misclassification error:
:<math> 1 - \max_{j} \hat p_{s}(j) </math>

* Gini index:
:<math> \sum_{j \neq i} \hat p_{s}(j)\hat p_{s}(i)</math>

* Cross-entropy:
:<math> - \sum_{j = 1}^{K} \hat p_{s}(j) \log(\hat p_{s}(j))</math>
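For a single node <math>\,s</math>, all three measures can be computed directly from the class proportions; the short Python sketch below (toy numbers of our own) does so.

 # Impurity measures for one node, given example class proportions p_s(j).
 import numpy as np
 p_hat = np.array([0.7, 0.2, 0.1])            # example proportions for K = 3 classes
 misclassification = 1.0 - p_hat.max()        # 1 - max_j p_s(j)
 gini = 1.0 - np.sum(p_hat ** 2)              # equals sum over i != j of p_s(i) p_s(j)
 cross_entropy = -np.sum(p_hat * np.log(p_hat))
 print(misclassification, gini, cross_entropy)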
 
=== [http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm K-Nearest Neighbours Classification] ===
 
 
<math>K</math>-nearest neighbours is a very simple algorithm that classifies points based on a majority vote of the <math>\ k</math> nearest points in the feature space, with the object being assigned to the class most common among its <math>\ k</math> nearest neighbours. <math>\ k</math> is a positive integer, typically small, and is usually chosen by cross-validation. If <math>\ k=1</math>, then the object is simply assigned to the class of its nearest neighbour.
 
1. Ties are broken at random.
 
2. If we assume the features are real, we can use the Euclidean distance in feature space.
 
3. If the features are measured in different units, we should standardize them to have mean zero and variance 1 (see the sketch below).
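A minimal sketch of the procedure (our own illustration, assuming real-valued features and Euclidean distance, not course code):

 # k-nearest neighbours with standardized features and a majority vote.
 import numpy as np
 
 def knn_predict(X_train, y_train, x_new, k=3):
     mu, sd = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
     Z, z = (X_train - mu) / sd, (x_new - mu) / sd       # standardize to mean 0, variance 1
     dist = np.linalg.norm(Z - z, axis=1)                # Euclidean distance in feature space
     nearest = y_train[np.argsort(dist)[:k]]             # labels of the k nearest points
     labels, counts = np.unique(nearest, return_counts=True)
     return labels[np.argmax(counts)]                    # majority vote (ties broken arbitrarily)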
 
====Property[http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm#Properties]====
 
The k-nearest neighbour algorithm has some strong theoretical guarantees. As the number of data points goes to infinity, the 1-nearest-neighbour rule is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Moreover, k-nearest neighbours is guaranteed to approach the Bayes error rate for some sequence of values of k (where k increases as a function of the number of data points).
 
=== Boosting ===
 
[http://en.wikipedia.org/wiki/Boosting Boosting] algorithms are a class of machine learning meta-algorithms that can improve weak classifiers. If we have a weak classifier which does only slightly better than random classification, then by assigning larger weights to points which are misclassified and minimizing the new cost function, we can usually obtain a new classifier which classifies with less error. This procedure can be repeated a finite number of times, and a new classifier which is a weighted aggregation of the generated classifiers is then used as the boosted classifier. The better a generated classifier performs, the larger its weight in the final classifier.
 
[http://www.site.uottawa.ca/~stan/csi5387/boost-tut-ppr.pdf Paper about boosting]:
Boosting is a general method for improving the accuracy of any given learning algorithm.
This paper introduces the boosting algorithm AdaBoost and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer
from overfitting, as well as boosting's relationship to support vector machines. Finally, the paper gives some examples of recent applications of boosting.
 
==== [http://en.wikipedia.org/wiki/AdaBoost AdaBoost Algorithm] ====
Let's first look at the original boosting algorithm:
#Set all the weights of all points equal <math>w_i\leftarrow \frac{1}{n}</math> where we have <math>\,n</math> points.
#For <math>j=1,\dots, J</math>
## Find <math>h_j:X\rightarrow \{-1,+1\}</math> that minimizes the weighted error <math>\,L_j</math><br><math>h_j=\mbox{argmin}_{h_j} L_j </math> where <math>L_j=\frac{\sum_{i=1}^n w_i I[y_i\neq h_j(x_i)]}{\sum_{i=1}^n w_i}</math>
## Let <math>\alpha_j\leftarrow\log(\frac{1-L_j}{L_j})</math>
## Update the weights: <math>w_i\leftarrow w_i e^{\alpha_j I[y_i\neq h_j(x_i)]}</math>
#The final classifier is <math>h(x)=\mbox{sign}\left(\sum_{j=1}^J \alpha_j h_j(x)\right)</math>
 
When applying boosting to different classifiers, step 2.1 may differ, since the most appropriate misclassification error can be defined according to the problem. However, the major idea of giving higher weight to misclassified examples does not change across classifiers.
 
Boosting works very well in practice, and there are a lot of research and published works on why it works this well. One possible explanation is that it actually maximizes the margin of classifiers.
 
We can see that in AdaBoost, if training points are classified correctly, their weights for the next classifier are kept unchanged, while if points are misclassified, their weights are raised.  As a result, easy examples are classified by the very first few classifiers, and hard examples are learned later with increasing emphasis. Finally, all the classifiers are combined through a majority vote that is weighted by their accuracy, taking both the easy and the hard points into account. Thus AdaBoost focuses on the more informative or difficult points.
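The following Python sketch (our own illustration; the decision-stump weak learner and the data interface are assumptions, not course code) follows the weight-update steps listed above, for labels in <math>\,\{-1,+1\}</math>.

 # Sketch of the boosting loop above using decision stumps as weak learners.
 import numpy as np
 
 def fit_stump(X, y, w):
     """One-feature threshold classifier h(x) in {-1,+1} minimizing the weighted error."""
     best = (np.inf, None)
     for j in range(X.shape[1]):
         for t in np.unique(X[:, j]):
             for s in (1.0, -1.0):
                 pred = np.where(X[:, j] <= t, s, -s)
                 err = np.sum(w * (pred != y)) / np.sum(w)
                 if err < best[0]:
                     best = (err, (j, t, s))
     return best  # (weighted error L_j, stump parameters)
 
 def adaboost(X, y, J=10):
     n = len(y)
     w = np.full(n, 1.0 / n)                        # step 1: equal weights
     stumps, alphas = [], []
     for _ in range(J):                             # step 2
         L_j, (j, t, s) = fit_stump(X, y, w)        # 2.1: minimize the weighted error
         L_j = min(max(L_j, 1e-10), 1 - 1e-10)      # avoid division by zero in the log
         alpha = np.log((1 - L_j) / L_j)            # 2.2
         pred = np.where(X[:, j] <= t, s, -s)
         w = w * np.exp(alpha * (pred != y))        # 2.3: raise weights of misclassified points
         stumps.append((j, t, s)); alphas.append(alpha)
     def h(Xnew):                                   # step 3: weighted vote, then take the sign
         votes = sum(a * np.where(Xnew[:, j] <= t, s, -s)
                     for a, (j, t, s) in zip(alphas, stumps))
         return np.sign(votes)
     return h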
 
==== AnyBoost ====

Many boosting algorithms belong to a class called AnyBoost. These are gradient descent algorithms for choosing linear combinations of elements of an inner product space in order to minimize some cost functional.

We are primarily interested in voted combinations of classifiers <math>H(x) = \mbox{sign}\left(\sum_{j=1}^J \alpha_j h_j(x)\right)</math>, where <math>h_j:X\rightarrow \{-1,+1\}</math> are weak base classifiers from some class <math>\,\mathcal{H}</math> and <math>\,\alpha_j</math> are classifier weights.  The margin of an example <math>\,(x_i,y_i)</math> is defined by <math>\,y_i H(x_i)</math>.

We want to find <math>\,H</math> such that the cost functional <math>C(H) = \frac{1}{m}\sum_{i=1}^m c(y_i H(x_i))</math> is minimized for a suitable cost function <math>\,c</math>.

The base hypotheses <math>\,h</math> and their linear combinations <math>\,H</math> can be considered to be elements of an inner product function space <math>(S,\langle\cdot,\cdot\rangle)</math>. We define the inner product as <math>\langle F,G \rangle = \frac{1}{m}\sum_{i=1}^m F(x_i) G(x_i)</math>, but the AnyBoost algorithm is valid for any cost function and inner product.  Given a function <math>\,H</math> that is a linear combination of base classifiers, we wish to add a base classifier <math>\,h</math> to <math>\,H</math> so that the cost <math>\,C(H + \epsilon h)</math> decreases for arbitrarily small <math>\,\epsilon</math>.  The desired direction is found by maximizing <math>-\langle\nabla C(H),h\rangle</math>.

AnyBoost algorithm:
#<math>\ H_0(x) = 0</math>
#For <math>j=0,\dots, J</math>
## Find <math>h_{j+1}:X\rightarrow \{-1,+1\}</math> that maximizes the inner product <math>-\langle\nabla C(H_j),h_{j+1}\rangle</math>
## If <math>-\langle\nabla C(H_j),h_{j+1}\rangle \leq 0 </math> then
### Return <math>\ H_j</math>
## Choose step size <math>\ \alpha_{j+1}</math>
## <math>\ H_{j+1} = H_j + \alpha_{j+1} h_{j+1}</math>
#The final classifier is <math>\ H_{J+1}</math>

Other voting methods, including AdaBoost, can be viewed as special cases of this algorithm.

=== Bagging ===

Bagging, or [http://en.wikipedia.org/wiki/Bootstrap_aggregating bootstrap aggregating], is another meta-technique used to reduce the variance of classifiers with high variability.  It exploits the fact that a bootstrap mean is approximately equal to the posterior average.  It is most effective for highly nonlinear classifiers such as decision trees; because these classifiers are highly unstable, they stand to benefit the most from bagging.

The idea is to train classifiers <math>\ h_{1}(x)</math> to <math>\ h_{B}(x)</math> using <math>\,B</math> bootstrap samples from the data set. The final classification is obtained using an average or 'plurality vote' of the <math>\,B</math> classifiers as follows:

:<math>\, h(x)= \left\{\begin{matrix}
1 &  \frac{1}{B} \sum_{b=1}^{B} h_{b}(x) \geq \frac{1}{2} \\
0 &  \mathrm{otherwise}   \end{matrix}\right.</math>

Many classifiers, such as trees, already have underlying functions that estimate the class probabilities at <math>\,x</math>.  An alternative strategy is to average these class probabilities instead of the final classifiers.  This approach can produce bagged estimates with lower variance and usually better performance.

References:
Breiman L, Bagging Predictors, Machine Learning, 24, 123-140 (1996)

===Example===

====Random Forests====

A random forest is a classifier consisting of a collection of tree-structured classifiers <math>\displaystyle \{h(x;\theta_k),\ k=1,2,\dots\}</math>, where the <math>{\theta_k }</math> are independently and identically distributed random vectors. The nature and dimensionality of <math>\theta</math> depends on its use in the tree construction; bagging is one such example. Current random forests use randomly selected inputs or combinations of inputs at each node to grow each tree.

How can we compare the methods without a test set? Randomly partition the data into a 10% set and a 90% set. Use the 90% as training data to grow a tree model using cross-validation (the 1-SE rule) and also to grow a random forest; then predict the 10% test data. Repeat the procedure 100 times and average the results. The R code is as follows (it assumes that the rpart and randomForest packages are installed and that bc is a data frame of 683 observations whose first column Y is the class label).

>library(rpart);         # rpart(), printcp(), prune()
>library(randomForest);  # randomForest()
>misv.tree<-rep(0,100);
>sizev.tree<-rep(0,100);
>misv.forest<-rep(0,100);
>for (j in 1:100)
{
  list<-sample(seq(1:683),70,replace=F)
  train<-data.frame(bc[- list,]);   
  test<-data.frame(bc[list,])
  tr0<-rpart(factor(Y)~.,data=train, control=rpart.control( minsplit=10, minbucket=5, cp=0.0,xval=10))
  x<-printcp(tr0)
  bs<-x[1,2]+1;
  min<-x[1,4];
  cpv<-x[1,1];
  stde<-x[1,5]
  for (i in 1:length(x[,1]))
    {       
        if(x[i,4]<min )
          {
              min<-x[i,4]
              bs<-x[i,2]+1
              cpv<-x[i,1]
              stde<-x[i,5]
            }  
    }
limit<-min+stde  ;
index<-0  ;
for(i in 1:length(x[,1]))
  {
    if(index<1){
    if(x[i,4]>limit)
  {
    bs<-x[i+1,2]+1
    cpv<-x[i+1,1]
    }
else index<-2 }
  }
tr<-prune(tr0,cp=cpv)  # prune tree
fity<-predict(tr,newdata=test,type='class')
table<-table(test[,1],fity)
mis<-table[1,2]+table[2,1]
misv.tree[j]<-mis
sizev.tree[j]<-bs
forest<-randomForest(factor(Y)~.,data=train,mtry=4,ntree=100) # random forest
fity<-predict(forest,newdata=test,type='class')
table<-table(test[,1],fity)
mis<-table[1,2]+table[2,1]
misv.forest[j]<-mis
}
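As a companion to the R comparison above, here is a minimal Python sketch (our own illustration; the <code>fit</code> interface is an assumption) of bagging with the 0/1 plurality vote defined in the Bagging section.

 # Bagging sketch: train B classifiers on bootstrap samples, classify by plurality vote.
 import numpy as np
 
 def bagged_classifiers(X, y, fit, B=25, seed=0):
     """fit(X, y) must return a fitted function h(x) giving a 0/1 label."""
     rng = np.random.default_rng(seed)
     n = len(y)
     models = []
     for _ in range(B):
         idx = rng.integers(0, n, size=n)   # bootstrap sample (drawn with replacement)
         models.append(fit(X[idx], y[idx]))
     return models
 
 def bag_predict(models, x):
     votes = np.mean([h(x) for h in models])
     return 1 if votes >= 0.5 else 0        # the plurality vote h(x) defined above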

== Radial Basis Function (RBF) Networks - November 9, 2009 ==

=== Introduction ===

A Radial Basis Function (RBF) network is a type of artificial neural network with an output layer and a single hidden layer; there are weights only from the hidden layer to the output layer, and no backpropagation is used. The neurons in the hidden layer contain basis functions, the most common form being radial basis (Gaussian) functions.

The output of an RBF network can be expressed as a weighted sum of its radial basis functions as follows:

<math>\hat y_{k} = \sum_{j=1}^M\phi_{j}(x) w_{jk}</math>

The radial basis function is:

<math>\phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}}</math>

'''Note:''' The hidden layer has a variable number of neurons (the optimal number is determined by the training process). The more neurons there are, the more complex the model is. Each neuron consists of a radial basis function centred on a point with the same dimension as the input data; the radius of each RBF may differ, and the centres and radii are determined by the training process. When the vector <math>\,x</math> is presented at the input layer, each hidden neuron computes the Euclidean distance from <math>\,x</math> to the neuron's centre point and then applies the RBF to this distance using its radius. The resulting value is passed to the output layer.

<math>\,\hat y_{k}</math> can be expressed in matrix form as

<math>\hat Y = \Phi W </math>

where

:<math>\hat Y = \left[ \begin{matrix}
y_{11} & y_{12} & \cdots & y_{1k} \\
y_{21} & y_{22} & \cdots & y_{2k} \\
\vdots & & \ddots & \vdots \\
y_{n1} & y_{n2} & \cdots & y_{nk}
\end{matrix}\right] </math> is the matrix of output variables,

:<math>\Phi = \left[ \begin{matrix}
\phi_{11} & \phi_{12} & \cdots & \phi_{1M} \\
\phi_{21} & \phi_{22} & \cdots & \phi_{2M} \\
\vdots & & \ddots & \vdots \\
\phi_{n1} & \phi_{n2} & \cdots & \phi_{nM}
\end{matrix}\right] </math> is the matrix of radial basis functions, and

:<math>W = \left[ \begin{matrix}
w_{11} & w_{12} & \cdots & w_{1k} \\
w_{21} & w_{22} & \cdots & w_{2k} \\
\vdots & & \ddots & \vdots \\
w_{M1} & w_{M2} & \cdots & w_{Mk}
\end{matrix}\right] </math> is the matrix of weights.

=== Estimation of weight matrix W ===

To find <math>\,W</math>, we need to minimize the squared norm <math>\Vert Y - \Phi W\Vert^{2}</math>, which represents the training error.

From a previous result in linear algebra we know that

<math>\Vert A \Vert^2 = Tr(A^{T}A)</math>

Thus we have

<math>\ Error = \Vert Y - \Phi W\Vert^{2} = Tr[(Y - \Phi W)^{T}(Y - \Phi W)]</math>

<math>\ Error = Tr[Y^{T}Y - Y^{T}\Phi W - W^{T} \Phi^{T} Y + W^{T}\Phi^{T} \Phi W]</math>

==== Useful properties of matrix differentiation ====

<math>\frac{\partial Tr(Ax)}{\partial x} = A^{T}</math>

<math>\frac{\partial Tr(x^{T}A)}{\partial x} = A</math>

<math>\frac{\partial Tr(x^{T}Ax)}{\partial x} = (A^{T} + A)x</math>

==== Solving for W ====

We can solve for <math>\,W</math> by setting <math>\frac{\partial Error}{\partial W}</math> equal to zero and using the aforementioned properties of matrix differentiation.

<math>\frac{\partial Error}{\partial W} = 0</math>

<math>\ 0 - \Phi^{T}Y - \Phi^{T}Y + 2\Phi^{T}\Phi W = 0</math>

<math>\ -2 \Phi^{T}Y + 2\Phi^{T}\Phi W = 0</math>

<math>\ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y</math>

<math>\hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY</math>

where <math>\ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}</math>

<math>\,H</math> is the hat matrix for this model.

=== RBF networks for classification -- a probabilistic paradigm ===

[[File:Rbf_graphical_model.png|350px|thumb|left|Figure 1: RBF graphical model]]

An RBF network is akin to fitting a Gaussian mixture model to data.  We assume that each class can be modelled by a single function <math>\,\phi</math> and that the data are generated by a mixture model. According to Bayes' rule,

<math>Pr(Y = y_{k} | X = x) = \frac {Pr(x|y_{k})Pr(y_{k})}{Pr(x)}</math>

While all classifiers that we have seen thus far in the course have been in discriminative form, the RBF network is a generative model that can be represented using a directed graph.

We can replace the class conditional density in the above expression by marginalizing <math>\,x</math> over <math>\,j</math>:

<math>\Pr(x|y_{k}) = \sum_{j} Pr(x|j)Pr(j|y_{k})</math>

*'''Note''' We made the assumption that each class can be modelled by a single function <math>\displaystyle\Phi</math> and that the data were generated by a mixture model.  The Gaussian mixture model has the form
<math>f(x)=\sum_{m=1}^M \alpha_m \phi(x;\mu_m,\Sigma_m)</math>, where <math>\displaystyle\alpha_m</math> are mixing proportions, <math>\displaystyle\sum_m \alpha_m=1</math>, and <math>\displaystyle\mu_m</math> and <math>\displaystyle\Sigma_m</math> are the mean and covariance of each Gaussian density respectively. <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), pp. 214. </ref> The generative model in Figure 1 shows graphically how the Gaussian to sample from is chosen in the mixture model.

==Notes==
<references/>

Latest revision as of 08:45, 30 August 2017

Classification - 2009.9.30

Classification

With the rise of fields such as data mining, bioinformatics, and machine learning, classification has become a fast-developing topic. In the age of information, vast amounts of data are generated constantly, and the goal of classification is to learn from data. Potential application areas include handwritten postal code recognition, medical diagnosis, face recognition, human language processing, and so on.

Definition: The problem of predicting a discrete random variable [math]\displaystyle{ \mathcal{Y} }[/math] from another random variable [math]\displaystyle{ \mathcal{X} }[/math] is called classification.

In classification, we attempt to approximate a function [math]\displaystyle{ \,h }[/math] by using a training data set, so that it will then be able to accurately classify new data inputs.

Given [math]\displaystyle{ \mathcal{X} \subset \mathbb{R}^{d} }[/math], a subset of the [math]\displaystyle{ d }[/math]-dimensional real vectors, and [math]\displaystyle{ \mathcal{Y} }[/math], a finite set of labels, we try to determine a 'classification rule' [math]\displaystyle{ \,h }[/math] such that,

[math]\displaystyle{ \,h: \mathcal{X} \mapsto \mathcal{Y} }[/math]

We use [math]\displaystyle{ \,n }[/math] ordered pairs of training data, drawn independently from identical distributions (i.i.d.), [math]\displaystyle{ \,\{(X_{1},Y_{1}), (X_{2},Y_{2}), \dots , (X_{n},Y_{n})\} }[/math] where [math]\displaystyle{ \,X_{i} \in \mathcal{X} }[/math],[math]\displaystyle{ \,Y_{i} \in \mathcal{Y} }[/math], to approximate [math]\displaystyle{ \,h }[/math].


Thus, given a new input, [math]\displaystyle{ \,X \in \mathcal{X} }[/math] by using the classification rule we can predict a corresponding [math]\displaystyle{ \,\hat{Y}=h(X) }[/math].

Example Suppose we wish to classify fruits into apples and oranges by considering certain features of the fruit, for instance, color, diameter, and weight.
Let [math]\displaystyle{ \mathcal{X}= (\mathrm{colour}, \mathrm{diameter}, \mathrm{weight}) }[/math] and [math]\displaystyle{ \mathcal{Y}=\{\mathrm{apple}, \mathrm{orange}\} }[/math]. The goal is to find a classification rule such that when a new fruit [math]\displaystyle{ \,X }[/math] is presented based on its features, [math]\displaystyle{ (\,X_{\mathrm{color}}, X_{\mathrm{diameter}}, X{_\mathrm{weight}}) }[/math], our classification rule [math]\displaystyle{ \,h }[/math] can classify it as either an apple or an orange, i.e., [math]\displaystyle{ \,h(X_{\mathrm{color}}, X_{\mathrm{diameter}}, X_{\mathrm{weight}}) }[/math] be the fruit type of [math]\displaystyle{ \,X }[/math].

Error rate

The true error rate' [math]\displaystyle{ \,L(h) }[/math] of a classifier having classification rule [math]\displaystyle{ \,h }[/math] is defined as the probability that [math]\displaystyle{ \,h }[/math] does not correctly classify any new data input, i.e., it is defined as [math]\displaystyle{ \,L(h)=P(h(X) \neq Y) }[/math]. Here, [math]\displaystyle{ \,X \in \mathcal{X} }[/math] and [math]\displaystyle{ \,Y \in \mathcal{Y} }[/math] are the known feature values and the true class of that input, respectively.
The empirical error rate (or training error rate) of a classifier having classification rule [math]\displaystyle{ \,h }[/math] is defined as the frequency at which [math]\displaystyle{ \,h }[/math] does not correctly classify the data inputs in the training set, i.e., it is defined as

[math]\displaystyle{ \,\hat{L}_{n} = \frac{1}{n} \sum_{i=1}^{n} I(h(X_{i}) \neq Y_{i}) }[/math], where [math]\displaystyle{ \,I }[/math] is an indicator variable and [math]\displaystyle{ \,I = \left\{\begin{matrix} 1 &\text{if } h(X_i) \neq Y_i \\ 0 &\text{if } h(X_i) = Y_i \end{matrix}\right. }[/math]. Here, [math]\displaystyle{ \,X_{i} \in \mathcal{X} }[/math] and [math]\displaystyle{ \,Y_{i} \in \mathcal{Y} }[/math] are the known feature values and the true class of the [math]\displaystyle{ \,i_th }[/math] training input, respectively.

Bayes Classifier

The principle of Bayes Classifier is to calculate the posterior probability of a given object from its prior probability via Bayes formula, and then place the object in the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841f11#Bayes_Classifier </ref>

Intuitively speaking, to classify [math]\displaystyle{ \,x\in \mathcal{X} }[/math] we find [math]\displaystyle{ y \in \mathcal{Y} }[/math] such that [math]\displaystyle{ \,P(Y=y|X=x) }[/math] is maximum over all the members of [math]\displaystyle{ \mathcal{Y} }[/math].

Mathematically, for [math]\displaystyle{ \,k }[/math] classes and given object [math]\displaystyle{ \,X=x }[/math], we find [math]\displaystyle{ \,y\in \mathcal{Y} }[/math] which maximizes [math]\displaystyle{ \,P(Y=y|X=x) }[/math], and classify [math]\displaystyle{ \,X }[/math] into class [math]\displaystyle{ \,y }[/math]. In order to calculate the value of [math]\displaystyle{ \,P(Y=y|X=x) }[/math], we use Bayes formula

[math]\displaystyle{ \begin{align} P(Y=y|X=x) &= \frac{P(X=x|Y=y)P(Y=y)}{P(X=x)} \\ &=\frac{P(X=x|Y=y)P(Y=y)}{\Sigma_{\forall y \in \mathcal{Y}}P(X=x|Y=y)P(Y=y)} \end{align} }[/math]

where [math]\displaystyle{ \,P(Y=y|X=x) }[/math] is referred to as the posterior probability, [math]\displaystyle{ \,P(Y=y) }[/math] as the prior probability, [math]\displaystyle{ \,P(X=x|Y=y) }[/math] as the likelihood, and [math]\displaystyle{ \,P(X=x) }[/math] as the evidence.

For the special case that [math]\displaystyle{ \,Y }[/math] has only two classes, that is, [math]\displaystyle{ \, \mathcal{Y}=\{0, 1\} }[/math]. Consider the probability that [math]\displaystyle{ \,r(X)=P\{Y=1|X=x\} }[/math]. Given [math]\displaystyle{ \,X=x }[/math], By Bayes formula, we have

[math]\displaystyle{ \begin{align} r(X)&=P(Y=1|X=x) \\ &=\frac{P(X=x|Y=1)P(Y=1)}{P(X=x)}\\ &=\frac{P(X=x|Y=1)P(Y=1)}{P(X=x|Y=1)P(Y=1)+P(X=x|Y=0)P(Y=0)} \end{align} }[/math]


Definition:

The Bayes classification rule [math]\displaystyle{ \,h }[/math] is

[math]\displaystyle{ \, h(X)= \left\{\begin{matrix} 1 & r(x)\gt \frac{1}{2} \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]

3 different approaches to classification:

1) Empirical Risk Minimization: Choose a set fo classifier [math]\displaystyle{ \mathcal{H} }[/math] and find [math]\displaystyle{ \,h^*\in \mathcal{H} }[/math] that minimizes some estimate of [math]\displaystyle{ \,L(h) }[/math]

2) Regression: Find an estimate [math]\displaystyle{ (\hat r) }[/math] of the function [math]\displaystyle{ r }[/math] and define

[math]\displaystyle{ \, h(X)= \left\{\begin{matrix} 1 & \hat r(x)\gt \frac{1}{2} \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]

3) Density Estimation: estimate [math]\displaystyle{ \,P(X=x|Y=0) }[/math] and [math]\displaystyle{ \,P(X=x|Y=1) }[/math] (less popular in high-dimension cases)


Bayes Classification Rule Optimality Theorem: The Bayes rule is optimal in true error rate; that is, for any other classification rule [math]\displaystyle{ \, \overline{h} }[/math], we have [math]\displaystyle{ \,L(h) \le L(\overline{h}) }[/math]. Intuitively speaking, this theorem says that we cannot do better than classifying [math]\displaystyle{ \,x\in \mathcal{X} }[/math] to [math]\displaystyle{ \,y }[/math] when the probability of [math]\displaystyle{ \,x }[/math] being of type [math]\displaystyle{ \,y }[/math] is greater than the probability of it being of any other type.

Definition:

The set [math]\displaystyle{ \,D(h)=\{x: P(Y=1|X=x)=P(Y=0|X=x)\} }[/math] is called the decision boundary.


[math]\displaystyle{ \, h^*(X)= \left\{\begin{matrix} 1 & if P(Y=1|X=x)\gt P(Y=0|X=x) \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]

Remark:

1) The Bayes classification rule is optimal. Proof: [1]

2) We still need other methods, since in practice we usually cannot determine the prior probabilities.


Example:
We’re going to predict if a particular student will pass STAT441/841. We have data on past student performance. For each student we know:
  • If student’s GPA > 3.0 (G)
  • If student had a strong math background (M)
  • If student is a hard worker (H)
  • If student passed or failed the course

[math]\displaystyle{ \, \mathcal{Y}= \{ 0,1 \} }[/math], where 1 refers to pass and 0 refers to fail. Assume that [math]\displaystyle{ \,P(Y=1)=P(Y=0)=0.5 }[/math]
For a new student comes along with values [math]\displaystyle{ \,G=0, M=1, H=0 }[/math], we calculate [math]\displaystyle{ \,r(X)=P(Y=1|X=(0,1,0)) }[/math] as

[math]\displaystyle{ \,r(X)=P(Y=1|X=(0,1,0))=\frac{P(X=(0,1,0)|Y=1)P(Y=1)}{P(X=(0,1,0)|Y=1)P(Y=1)+P(X=(0,1,0)|Y=0)P(Y=0)}=\frac{0.025}{0.125}=0.2\lt \frac{1}{2} }[/math]
Thus, we classify the new student into class 0, namely, we predict him to fail in this course.
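The arithmetic can be checked directly; in the snippet below, the individual class conditional probabilities are assumed values chosen only to reproduce the numerator 0.025 and denominator 0.125 quoted above.

 # Posterior r(X) for the student example, via Bayes formula.
 p_y1 = p_y0 = 0.5
 p_x_given_y1 = 0.05   # assumed, so that p_x_given_y1 * p_y1 = 0.025 as in the text
 p_x_given_y0 = 0.20   # assumed, so that the denominator equals 0.125 as in the text
 r = p_x_given_y1 * p_y1 / (p_x_given_y1 * p_y1 + p_x_given_y0 * p_y0)
 print(r)              # 0.2 < 0.5, so we classify the student as y = 0 (fail)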


Notice: Although the Bayes rule is optimal, we still need other methods, since it is generally impossible for us to know the prior [math]\displaystyle{ \,P(Y=1) }[/math], and class conditional density [math]\displaystyle{ \,P(X=x|Y=1) }[/math] and ultimately calculate the value of [math]\displaystyle{ \,r(X) }[/math], which makes Bayes rule inconvenient in practice.

Currently, there are four primary classifiers based on the Bayes classifier: the naive Bayes classifier [2], tree-augmented naive Bayes (TAN), Bayesian-network-augmented naive Bayes (BAN), and the general Bayesian network (GBN).

Useful links: Decision Theory, Bayes Classifier

Bayesian vs. Frequentist

Intuitively, to solve a two-class problem, we may have the following two approaches:

1) If [math]\displaystyle{ \,P(Y=1|X=x)\gt P(Y=0|X=x) }[/math], then [math]\displaystyle{ \,h(x)=1 }[/math], otherwise [math]\displaystyle{ \,h(x)=0 }[/math].

2) If [math]\displaystyle{ \,P(X=x|Y=1)\gt P(X=x|Y=0) }[/math], then [math]\displaystyle{ \,h(x)=1 }[/math], otherwise [math]\displaystyle{ \,h(x)=0 }[/math].

One obvious difference between these two methods is that the first treats probability as a degree of belief that changes with observation, while the second treats probability as having an objective existence. They represent two different schools of thought in statistics.

Throughout the history of statistics there have been two major schools: Bayesian and frequentist. They represent two different ways of thinking and hold different views on how to define probability. The following are the main differences between the Bayesian and frequentist approaches.

Frequentist

  1. Probability is objective.
  2. Data is a repeatable random sample(there is a frequency).
  3. Parameters are fixed, unknown constants.
  4. Not applicable to a single event. For example, a frequentist cannot assign a probability to tomorrow's weather, because tomorrow is a unique event that cannot be referred to a frequency over many samples.

Bayesian

  1. Probability is subjective.
  2. Data are fixed.
  3. Parameters are unknown and random variables that have a given distribution and other probability statements can be made about them.
  4. Can be applied to single events, based on degrees of confidence or belief. For example, a Bayesian can make statements about tomorrow's weather, such as assigning a probability of [math]\displaystyle{ \,50% }[/math] to rain.

Example

Suppose there is a man named Jack. In Bayesian method, at first, one can see this man (object), and then judge whether his name is Jack (label). On the other hand, in Frequentist method, one doesn’t see the man (object), but can see the photos (label) of this man to judge whether he is Jack.

Linear and Quadratic Discriminant Analysis - October 2,2009

Introduction

Notation

Let us first introduce some new notation for the following sections.

Multi-class Classification:

Y takes on more than two values.

Recall that in the discussion of the Bayes Classifier, we introduced Bayes Formula:

[math]\displaystyle{ \begin{align} P(Y=y|X=x) &=\frac{P(X=x|Y=y)P(Y=y)}{\Sigma_{\forall y \in \mathcal{Y}}P(X=x|Y=y)P(Y=y)} \end{align} }[/math]

We will use new labels for the following equivalent formula:

[math]\displaystyle{ \begin{align} P(Y=k|X=x) &=\frac{f_k(x)\pi_k}{\Sigma_kf_k(x)\pi_k} \end{align} }[/math]
  • [math]\displaystyle{ \,f_k }[/math] is called the class conditional density; also referred to previously as the likelihood function. Essentially, this is the function that allows us to reason about a parameter given a certain outcome.
  • [math]\displaystyle{ \,\pi_k }[/math] is called the prior probability. This is a probability distribution that represents what we know (or believe we know) about a population.
  • [math]\displaystyle{ \,\Sigma_k }[/math] is the sum with respect to all [math]\displaystyle{ \,k }[/math] classes.

Approaches

Although the Bayes classifier is the optimal method, it cannot be used in most practical situations, since the prior probabilities and class conditional densities are usually unknown. Fortunately, other methods of classification have evolved. These methods fall into three general categories.

1 Empirical Risk Minimization: Choose a set of classifiers [math]\displaystyle{ \mathcal{H} }[/math] and find [math]\displaystyle{ \,h^*\in \mathcal{H} }[/math] that minimizes some estimate of [math]\displaystyle{ \,L(h) }[/math].

2 Regression: Find an estimate [math]\displaystyle{ (\hat r) }[/math] of the function [math]\displaystyle{ \ r }[/math] and define

[math]\displaystyle{ \, h(X)= \left\{\begin{matrix} 1 & \hat r(x)\gt \frac{1}{2} \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]

3 Density estimation: Estimate [math]\displaystyle{ \ P(X = x|Y = 0) }[/math] and [math]\displaystyle{ \ P(X = x|Y = 1) }[/math].

Note:
The third approach, in this form, is not popular because density estimation does not work very well in more than two dimensions. However, this approach is the simplest, and we can assume a parametric model for the densities. Linear Discriminant Analysis and Quadratic Discriminant Analysis are examples of this third approach, density estimation.

LDA

Motivation

The Bayes classifier is optimal. Unfortunately, the prior probabilities and class conditional densities are not known for most data, so they must be estimated if we want to classify new data points.

The simplest way to achieve this is to assume that all the class densities are approximately a multivariate normal distribution, find the parameters of each such distribution, and use them to calculate the conditional density and prior for unknown points, thus approximating the Bayesian classifier to choose the most likely class. In addition, if the covariance of each class density is assumed to be the same, the number of unknown parameters is reduced and the model is easy to fit and use, as seen later.

History

The name Linear Discriminant Analysis comes from the fact that these simplifications produce a linear model, which is used to discriminate between classes. In many cases, this simple model is sufficient to provide a near optimal classification - for example, the Z-Score credit risk model, designed by Edward Altman in 1968, which is essentially a weighted LDA, revisited in 2000, has shown an 85-90% success rate predicting bankruptcy, and is still in use today.

Purpose

1 Feature selection

2 Finding which classification rule best separates the classes

Definition

To perform LDA we make two assumptions.

  • The clusters belonging to all classes each follow a multivariate normal distribution.
    [math]\displaystyle{ x \in \mathbb{R}^d }[/math] [math]\displaystyle{ f_k(x)=\frac{1}{ (2\pi)^{d/2}|\Sigma_k|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma_k^{-1} [x - \mu_k] \right) }[/math]

where [math]\displaystyle{ \ f_k(x) }[/math] is a class conditional density

  • Simplification Assumption: Each cluster has the same covariance matrix [math]\displaystyle{ \,\Sigma }[/math], i.e., [math]\displaystyle{ \Sigma_k = \Sigma\ \forall k }[/math].


We wish to solve for the decision boundary where the error rates for classifying a point are equal, where one side of the boundary gives a lower error rate for one class and the other side gives a lower error rate for the other class.

So we solve [math]\displaystyle{ \,r_k(x)=r_l(x) }[/math] for all the pairwise combinations of classes.


[math]\displaystyle{ \,\Rightarrow Pr(Y=k|X=x)=Pr(Y=l|X=x) }[/math]


[math]\displaystyle{ \,\Rightarrow \frac{Pr(X=x|Y=k)Pr(Y=k)}{Pr(X=x)}=\frac{Pr(X=x|Y=l)Pr(Y=l)}{Pr(X=x)} }[/math] using Bayes' Theorem


[math]\displaystyle{ \,\Rightarrow Pr(X=x|Y=k)Pr(Y=k)=Pr(X=x|Y=l)Pr(Y=l) }[/math] by canceling denominators


[math]\displaystyle{ \,\Rightarrow f_k(x)\pi_k=f_l(x)\pi_l }[/math]


[math]\displaystyle{ \,\Rightarrow \frac{1}{ (2\pi)^{d/2}|\Sigma|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma^{-1} [x - \mu_k] \right)\pi_k=\frac{1}{ (2\pi)^{d/2}|\Sigma|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_l]^\top \Sigma^{-1} [x - \mu_l] \right)\pi_l }[/math]


[math]\displaystyle{ \,\Rightarrow \exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma^{-1} [x - \mu_k] \right)\pi_k=\exp\left( -\frac{1}{2} [x - \mu_l]^\top \Sigma^{-1} [x - \mu_l] \right)\pi_l }[/math] Since both [math]\displaystyle{ \Sigma }[/math] are equal based on the assumptions specific to LDA.


[math]\displaystyle{ \,\Rightarrow -\frac{1}{2} [x - \mu_k]^\top \Sigma^{-1} [x - \mu_k] + \log(\pi_k)=-\frac{1}{2} [x - \mu_l]^\top \Sigma^{-1} [x - \mu_l] +\log(\pi_l) }[/math] taking the log of both sides.


[math]\displaystyle{ \,\Rightarrow \log(\frac{\pi_k}{\pi_l})-\frac{1}{2}\left( x^\top\Sigma^{-1}x + \mu_k^\top\Sigma^{-1}\mu_k - 2x^\top\Sigma^{-1}\mu_k - x^\top\Sigma^{-1}x - \mu_l^\top\Sigma^{-1}\mu_l + 2x^\top\Sigma^{-1}\mu_l \right)=0 }[/math] by expanding out


[math]\displaystyle{ \,\Rightarrow \log(\frac{\pi_k}{\pi_l})-\frac{1}{2}\left( \mu_k^\top\Sigma^{-1}\mu_k-\mu_l^\top\Sigma^{-1}\mu_l - 2x^\top\Sigma^{-1}(\mu_k-\mu_l) \right)=0 }[/math] after canceling out like terms and factoring.

We can see that this is a linear function in [math]\displaystyle{ \ x }[/math] with general form [math]\displaystyle{ \,ax+b=0 }[/math].

Actually, this linear log function shows that the decision boundary between class [math]\displaystyle{ \ k }[/math] and class [math]\displaystyle{ \ l }[/math], i.e. [math]\displaystyle{ \ P(G=k|X=x)=P(G=l|X=x) }[/math], is linear in [math]\displaystyle{ \ x }[/math]. Given any pair of classes, decision boundaries are always linear. In [math]\displaystyle{ \ d }[/math] dimensions, we separate regions by hyperplanes.

In the special case where the number of samples from each class are equal ([math]\displaystyle{ \,\pi_k=\pi_l }[/math]), the boundary surface or line lies halfway between [math]\displaystyle{ \,\mu_l }[/math] and [math]\displaystyle{ \,\mu_k }[/math]
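The boundary can be written as [math]\displaystyle{ \,a^\top x+b=0 }[/math] with [math]\displaystyle{ \,a=\Sigma^{-1}(\mu_k-\mu_l) }[/math] and [math]\displaystyle{ \,b=\log(\frac{\pi_k}{\pi_l})-\frac{1}{2}(\mu_k^\top\Sigma^{-1}\mu_k-\mu_l^\top\Sigma^{-1}\mu_l) }[/math]. The short Python sketch below (toy numbers of our own, not course code) computes these coefficients and checks that, with equal priors, the midpoint of the two means lies on the boundary.

 # Linear LDA boundary a^T x + b = 0 for two classes with a shared covariance.
 import numpy as np
 mu_k, mu_l = np.array([1.0, 1.0]), np.array([3.0, 2.0])
 Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
 pi_k, pi_l = 0.5, 0.5
 Sigma_inv = np.linalg.inv(Sigma)
 a = Sigma_inv @ (mu_k - mu_l)
 b = np.log(pi_k / pi_l) - 0.5 * (mu_k @ Sigma_inv @ mu_k - mu_l @ Sigma_inv @ mu_l)
 x = 0.5 * (mu_k + mu_l)           # midpoint of the two class means
 print(np.isclose(a @ x + b, 0))   # with equal priors, the midpoint lies on the boundary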

Limitation

  • LDA implicitly assumes Gaussian distribution of data.
  • LDA implicitly assumes that the mean is the discriminating factor, not variance.
  • LDA may overfit the data.

QDA

QDA uses the same idea as LDA of finding a boundary where the error rates for classification between classes are equal, except that the assumption that each cluster has the same covariance matrix [math]\displaystyle{ \,\Sigma }[/math] is removed. To check whether a common covariance is plausible, we can use a hypothesis test with [math]\displaystyle{ \ H_0 }[/math]: [math]\displaystyle{ \Sigma_k = \Sigma\ \forall k }[/math]; the best method is the likelihood ratio test.


Following along from where QDA diverges from LDA.

[math]\displaystyle{ \,f_k(x)\pi_k=f_l(x)\pi_l }[/math]

[math]\displaystyle{ \,\Rightarrow \frac{1}{ (2\pi)^{d/2}|\Sigma_k|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma_k^{-1} [x - \mu_k] \right)\pi_k=\frac{1}{ (2\pi)^{d/2}|\Sigma_l|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_l]^\top \Sigma_l^{-1} [x - \mu_l] \right)\pi_l }[/math]


[math]\displaystyle{ \,\Rightarrow \frac{1}{|\Sigma_k|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_k]^\top \Sigma_k^{-1} [x - \mu_k] \right)\pi_k=\frac{1}{|\Sigma_l|^{1/2} }\exp\left( -\frac{1}{2} [x - \mu_l]^\top \Sigma_l^{-1} [x - \mu_l] \right)\pi_l }[/math] by cancellation


[math]\displaystyle{ \,\Rightarrow -\frac{1}{2}\log(|\Sigma_k|)-\frac{1}{2} [x - \mu_k]^\top \Sigma_k^{-1} [x - \mu_k]+\log(\pi_k)=-\frac{1}{2}\log(|\Sigma_l|)-\frac{1}{2} [x - \mu_l]^\top \Sigma_l^{-1} [x - \mu_l]+\log(\pi_l) }[/math] by taking the log of both sides


[math]\displaystyle{ \,\Rightarrow \log(\frac{\pi_k}{\pi_l})-\frac{1}{2}\log(\frac{|\Sigma_k|}{|\Sigma_l|})-\frac{1}{2}\left( x^\top\Sigma_k^{-1}x + \mu_k^\top\Sigma_k^{-1}\mu_k - 2x^\top\Sigma_k^{-1}\mu_k - x^\top\Sigma_l^{-1}x - \mu_l^\top\Sigma_l^{-1}\mu_l + 2x^\top\Sigma_l^{-1}\mu_l \right)=0 }[/math] by expanding out

[math]\displaystyle{ \,\Rightarrow \log(\frac{\pi_k}{\pi_l})-\frac{1}{2}\log(\frac{|\Sigma_k|}{|\Sigma_l|})-\frac{1}{2}\left( x^\top(\Sigma_k^{-1}-\Sigma_l^{-1})x + \mu_k^\top\Sigma_k^{-1}\mu_k - \mu_l^\top\Sigma_l^{-1}\mu_l - 2x^\top(\Sigma_k^{-1}\mu_k-\Sigma_l^{-1}\mu_l) \right)=0 }[/math] this time there are no cancellations, so we can only factor


The final result is a quadratic equation specifying a curved boundary between classes with general form [math]\displaystyle{ \,ax^2+bx+c=0 }[/math].

The boundary is quadratic because, unlike in LDA, the terms [math]\displaystyle{ \,x^\top\Sigma_k^{-1}x }[/math] and [math]\displaystyle{ \,x^\top\Sigma_l^{-1}x }[/math] do not cancel.

Linear and Quadratic Discriminant Analysis cont'd - October 5, 2009

Linear discriminant analysis[3] is a statistical method used to find the linear combination of features which best separate two or more classes of objects or events. It is widely applied in classifying diseases, positioning, product management, and marketing research.

Quadratic Discriminant Analysis[4], on the other hand, aims to find the quadratic combination of features. It is more general than Linear discriminant analysis. Unlike LDA however, in QDA there is no assumption that the covariance of each of the classes is identical. When the assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. Suppose the means of each class are known to be [math]\displaystyle{ \mu_{y=0},\mu_{y=1} }[/math] and the covariances [math]\displaystyle{ \Sigma_{y=0}, \Sigma_{y=1} }[/math]. Then the likelihood ratio will be given by

Likelihood ratio = [math]\displaystyle{ \frac{ \sqrt{2 \pi |\Sigma_{y=1}|}^{-1} \exp \left( -\frac{1}{2}(x-\mu_{y=1})^T \Sigma_{y=1}^{-1} (x-\mu_{y=1}) \right) }{ \sqrt{2 \pi |\Sigma_{y=0}|}^{-1} \exp \left( -\frac{1}{2}(x-\mu_{y=0})^T \Sigma_{y=0}^{-1} (x-\mu_{y=0}) \right)} \lt t }[/math]

for some threshold t. After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic.

Summarizing LDA and QDA

We can summarize what we have learned on LDA and QDA so far into the following theorem.

Theorem:

Suppose that [math]\displaystyle{ \,Y \in \{1,\dots,k\} }[/math], if [math]\displaystyle{ \,f_k(x) = Pr(X=x|Y=k) }[/math] is Gaussian, the Bayes Classifier rule is

[math]\displaystyle{ \,h(X) = \arg\max_{k} \delta_k(x) }[/math]

where

[math]\displaystyle{ \,\delta_k = - \frac{1}{2}log(|\Sigma_k|) - \frac{1}{2}(x-\mu_k)^\top\Sigma_k^{-1}(x-\mu_k) + log (\pi_k) }[/math] (quadratic)
  • Note The decision boundary between classes [math]\displaystyle{ k }[/math] and [math]\displaystyle{ l }[/math] is quadratic in [math]\displaystyle{ x }[/math].

If the covariance of the Gaussians are the same, this becomes

[math]\displaystyle{ \,\delta_k = x^\top\Sigma^{-1}\mu_k - \frac{1}{2}\mu_k^\top\Sigma^{-1}\mu_k + log (\pi_k) }[/math] (linear)
  • Note [math]\displaystyle{ \,\arg\max_{k} \delta_k(x) }[/math]returns the set of k for which [math]\displaystyle{ \,\delta_k(x) }[/math] attains its largest value.

In practice

We need to estimate the unknown quantities, so we use the sample estimates of [math]\displaystyle{ \,\pi_k,\mu_k,\Sigma_k }[/math] in place of the true values, i.e.

File:estimation.png
Estimation of the probability of belonging to either class k or l

[math]\displaystyle{ \,\hat{\pi_k} = \hat{Pr}(y=k) = \frac{n_k}{n} }[/math]

[math]\displaystyle{ \,\hat{\mu_k} = \frac{1}{n_k}\sum_{i:y_i=k}x_i }[/math]

[math]\displaystyle{ \,\hat{\Sigma_k} = \frac{1}{n_k}\sum_{i:y_i=k}(x_i-\hat{\mu_k})(x_i-\hat{\mu_k})^\top }[/math]

In the case where we have a common covariance matrix, we get the ML estimate to be

[math]\displaystyle{ \,\Sigma=\frac{\sum_{r=1}^{k}(n_r\Sigma_r)}{\sum_{l=1}^{k}(n_l)} }[/math]

This is a Maximum Likelihood estimate.
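To make the plug-in procedure concrete, here is a short Python sketch (our own illustration, not course code) that computes these sample estimates and classifies a point by taking the argmax of [math]\displaystyle{ \,\delta_k }[/math] from the theorem above.

 # Plug-in LDA/QDA: estimate pi_k, mu_k, Sigma_k and classify by argmax of delta_k.
 import numpy as np
 
 def fit(X, y, common_covariance=True):
     classes = np.unique(y)
     n, d = X.shape
     params = {}
     for k in classes:
         Xk = X[y == k]
         mu = Xk.mean(axis=0)
         Sigma = (Xk - mu).T @ (Xk - mu) / len(Xk)       # per-class ML estimate
         params[k] = [len(Xk) / n, mu, Sigma]
     if common_covariance:                               # pooled estimate, as in LDA
         pooled = sum(len(X[y == k]) * params[k][2] for k in classes) / n
         for k in classes:
             params[k][2] = pooled
     return params
 
 def delta(x, pi_k, mu_k, Sigma_k):
     """Quadratic discriminant score; with a common Sigma its argmax matches the linear rule."""
     diff = x - mu_k
     return (-0.5 * np.log(np.linalg.det(Sigma_k))
             - 0.5 * diff @ np.linalg.solve(Sigma_k, diff)
             + np.log(pi_k))
 
 def classify(params, x):
     return max(params, key=lambda k: delta(x, *params[k]))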

Computation

Case 1: (Example) [math]\displaystyle{ \, \Sigma_k = I }[/math]'

File:case1.jpg

This means that the data is distributed symmetrically around the center [math]\displaystyle{ \mu }[/math], i.e. the isocontours are all circles.

We have:

[math]\displaystyle{ \,\delta_k = - \frac{1}{2}log(|I|) - \frac{1}{2}(x-\mu_k)^\top I(x-\mu_k) + log (\pi_k) }[/math]

We see that the first term in the above equation, [math]\displaystyle{ \,\frac{1}{2}log(|I|) }[/math], is zero since [math]\displaystyle{ \ |I| }[/math] is the determinant of the identity matrix and [math]\displaystyle{ \ |I|=1 }[/math]. The second term contains [math]\displaystyle{ \, (x-\mu_k)^\top I(x-\mu_k) = (x-\mu_k)^\top(x-\mu_k) }[/math], which is the squared Euclidean distance between [math]\displaystyle{ \,x }[/math] and [math]\displaystyle{ \,\mu_k }[/math]. Therefore we can find the distance between a point and each center and adjust it with the log of the prior, [math]\displaystyle{ \,log(\pi_k) }[/math]. The class that has the minimum distance will maximise [math]\displaystyle{ \,\delta_k }[/math]. According to the theorem, we can then classify the point to a specific class [math]\displaystyle{ \,k }[/math]. In addition, [math]\displaystyle{ \, \Sigma_k = I }[/math] implies that our data is spherical.


Case 2: (General Case) [math]\displaystyle{ \, \Sigma_k \ne I }[/math]

We can decompose this as:

[math]\displaystyle{ \, \Sigma_k = USV^\top = USU^\top }[/math] (In general when [math]\displaystyle{ \,X=USV^\top }[/math], [math]\displaystyle{ \,U }[/math] is the eigenvectors of [math]\displaystyle{ \,XX^T }[/math] and [math]\displaystyle{ \,V }[/math] is the eigenvectors of [math]\displaystyle{ \,X^\top X }[/math]. So if [math]\displaystyle{ \, X }[/math] is symmetric, we will have [math]\displaystyle{ \, U=V }[/math]. Here [math]\displaystyle{ \, \Sigma }[/math] is symmetric)

and the inverse of [math]\displaystyle{ \,\Sigma_k }[/math] is

[math]\displaystyle{ \, \Sigma_k^{-1} = (USU^\top)^{-1} = (U^\top)^{-1}S^{-1}U^{-1} = US^{-1}U^\top }[/math] (since [math]\displaystyle{ \,U }[/math] is orthonormal)

So from the formula for [math]\displaystyle{ \,\delta_k }[/math], the second term is

[math]\displaystyle{ \begin{align} (x-\mu_k)^\top\Sigma_k^{-1}(x-\mu_k)&= (x-\mu_k)^\top US^{-1}U^T(x-\mu_k)\\ & = (U^\top x-U^\top\mu_k)^\top S^{-1}(U^\top x-U^\top \mu_k)\\ & = (U^\top x-U^\top\mu_k)^\top S^{-\frac{1}{2}}S^{-\frac{1}{2}}(U^\top x-U^\top\mu_k) \\ & = (S^{-\frac{1}{2}}U^\top x-S^{-\frac{1}{2}}U^\top\mu_k)^\top I(S^{-\frac{1}{2}}U^\top x-S^{-\frac{1}{2}}U^\top \mu_k) \\ & = (S^{-\frac{1}{2}}U^\top x-S^{-\frac{1}{2}}U^\top\mu_k)^\top(S^{-\frac{1}{2}}U^\top x-S^{-\frac{1}{2}}U^\top \mu_k) \\ \end{align} }[/math]

where we have the Euclidean distance between [math]\displaystyle{ \, S^{-\frac{1}{2}}U^\top x }[/math] and [math]\displaystyle{ \, S^{-\frac{1}{2}}U^\top\mu_k }[/math].

A transformation of all the data points can be done from [math]\displaystyle{ \,x }[/math] to [math]\displaystyle{ \,x^* }[/math] where [math]\displaystyle{ \, x^* \leftarrow S^{-\frac{1}{2}}U^\top x }[/math].

It is now possible to do classification with [math]\displaystyle{ \,x^* }[/math], treating it as in Case 1 above.

Note that when we have multiple classes, they must all have the same transformation, else, ahead of time we would have to assume a data point belongs to one class or the other. All classes therefore need to have the same shape for classification to be applicable using this method. So this method works for LDA.

If the classes have different shapes, in other words, different covariances [math]\displaystyle{ \,\Sigma_k }[/math], can we use the same method to transform all data points [math]\displaystyle{ \,x }[/math] to [math]\displaystyle{ \,x^* }[/math]?

The answer is NO. Consider that you have two classes with different shapes, then consider transforming them to the same shape. Given a data point, justify which class this point belongs to. The question is, which transformation can you use? For example, if you use the transformation of class A, then you have assumed that this data point belongs to class A.
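A small sketch (our own, with an assumed toy covariance) of the transformation [math]\displaystyle{ \, x^* \leftarrow S^{-\frac{1}{2}}U^\top x }[/math]: after the transformation, the covariance of the transformed data becomes the identity, which is exactly Case 1.

 # Whitening transform x* = S^{-1/2} U^T x using the eigendecomposition Sigma = U S U^T.
 import numpy as np
 Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
 S, U = np.linalg.eigh(Sigma)             # eigenvalues S and orthonormal eigenvectors U
 T = np.diag(S ** -0.5) @ U.T             # the transformation matrix S^{-1/2} U^T
 x_star = T @ np.array([1.0, 2.0])        # transforming one (arbitrary) data point
 print(np.allclose(T @ Sigma @ T.T, np.eye(2)))  # the transformed covariance is the identity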

Kernel QDA In practice, QDA often fits the data better than LDA because it does not make LDA's assumption that the covariance matrix of each class is identical. However, QDA still assumes that the class conditional distributions are Gaussian, which is often not the case in practice. Another method, kernel QDA, does not make the Gaussian distribution assumption and can work better in such situations.

The Number of Parameters in LDA and QDA

Both LDA and QDA require us to estimate parameters. The more estimation we have to do, the less robust our classification algorithm will be.

LDA: Since we just need to compare the differences between one given class and remaining [math]\displaystyle{ \,K-1 }[/math] classes, totally, there are [math]\displaystyle{ \,K-1 }[/math] differences. For each of them, [math]\displaystyle{ \,a^{T}x+b }[/math] requires [math]\displaystyle{ \,d+1 }[/math] parameters. Therefore, there are [math]\displaystyle{ \,(K-1)\times(d+1) }[/math] parameters.

QDA: For each of the differences, [math]\displaystyle{ \,x^{T}ax + b^{T}x + c }[/math] requires [math]\displaystyle{ \frac{1}{2}(d+1)\times d + d + 1 = \frac{d(d+3)}{2}+1 }[/math] parameters. Therefore, there are [math]\displaystyle{ (K-1)(\frac{d(d+3)}{2}+1) }[/math] parameters.

A plot of the number of parameters that must be estimated, in terms of (K-1). The x-axis represents the number of dimensions in the data. As is easy to see, QDA is far less robust than LDA for high-dimensional data sets.
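The two counting formulas above are easy to tabulate; the snippet below (our own) prints them for a few dimensions.

 # Number of parameters to estimate for K classes and d dimensions, per the formulas above.
 def lda_params(K, d):
     return (K - 1) * (d + 1)
 def qda_params(K, d):
     return (K - 1) * (d * (d + 3) // 2 + 1)
 for d in (2, 10, 50):
     print(d, lda_params(3, d), qda_params(3, d))  # QDA grows quadratically in d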

related link:

LDA:[5]

[6]

Regularized linear discriminant analysis and its application in microarrays

MATHEMATICAL OPERATIONS OF LDA

Application in face recognition and in market

QDA:[7]

Bayes QDA

LDA & QDA

LDA and QDA in Matlab - October 7, 2009

We have examined the theory behind Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) above; how do we use these algorithms in practice? Matlab offers us a function called classify that allows us to perform LDA and QDA quickly and easily.

In class, we were shown an example of using LDA and QDA on the 2_3 data that is used in the first assignment. The code below reproduces that example, slightly modified, and explains each step.

>> load 2_3;
>> [U, sample] = princomp(X');
>> sample = sample(:,1:2);
First, we do principal component analysis (PCA) on the 2_3 data to reduce the dimensionality of the original data from 64 dimensions to 2. Doing this makes it much easier to visualize the results of the LDA and QDA algorithms.
>> plot (sample(1:200,1), sample(1:200,2), '.');
>> hold on;
>> plot (sample(201:400,1), sample(201:400,2), 'r.');
Recall that in the 2_3 data, the first 200 elements are images of the number two handwritten and the last 200 elements are images of the number three handwritten. This code sets up a plot of the data such that the points that represent a 2 are blue, while the points that represent a 3 are red.
See title and legend for information on adding the title and legend.
Before using classify we can set up a vector that contains the actual labels for our data, to train the classification algorithm. If we don't know the labels for the data, then the element in the group vector should be an empty string or NaN. (See grouping data for more information.)
>> group = ones(400,1);
>> group(201:400) = 2;
We can now classify our data.
>> [class, error, POSTERIOR, logp, coeff] = classify(sample, sample, group, 'linear');
The full details of this line can be examined in the Matlab help file linked above. What we care about are class, which contains the labels that the algorithm thinks that each data point belongs to, and coeff, which contains information about the line that algorithm created to separate the data into each class.
We can see the efficacy of the algorithm by comparing class to group.
>> sum (class==group)
ans =
   369
This compares the value in class to the value in group. The answer of 369 tells us that the algorithm correctly determined the class of the point 369 times, out of a possible 400 data points. This gives us an empirical error rate of 0.0775.
We can see the line produced by LDA using coeff.
>> k = coeff(1,2).const;
>> l = coeff(1,2).linear;
>> f = sprintf('0 = %g+%g*x+%g*y', k, l(1), l(2));
>> ezplot(f, [min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))]);
Those familiar with the programming language C will find the sprintf line refreshingly familiar; those with no exposure to C are directed to Matlab's sprintf page. Essentially, this code sets up the equation of the line in the form 0 = a + bx + cy. We then use the ezplot function to plot the line.
The 2-3 data after LDA is performed. The line shows where the two classes are split.
Let's perform the same steps, except this time using QDA. The main difference with QDA is a slightly different call to classify, and a more complicated procedure to plot the line.
>> [class, error, POSTERIOR, logp, coeff] = classify(sample, sample, group, 'quadratic');
>> sum (class==group)
ans =
   371
>> k = coeff(1,2).const;
>> l = coeff(1,2).linear;
>> q = coeff(1,2).quadratic;
>> f = sprintf('0 = %g+%g*x+%g*y+%g*x^2+%g*x.*y+%g*y.^2', k, l, q(1,1), q(1,2)+q(2,1), q(2,2));
>> ezplot(f, [min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))]);
The 2-3 data after QDA is performed. The curved line shows where QDA splits the two classes. Note that it is correct on only 2 more data points than LDA; we can see a blue point and a red point that lie on the correct side of the curve but not on the correct side of LDA's line.

classify can also be used with other discriminant analysis algorithms. The steps laid out above would only need to be modified slightly for those algorithms.

Recall: an analysis of the princomp function in Matlab.
In Assignment 1, we learnt how to perform principal component analysis using the SVD method. In fact, Matlab offers a function called princomp which performs PCA conveniently. The Matlab help file on princomp gives the details of this function, but here we will analyze the code of princomp() to see how it differs from the SVD method. The following is the code of princomp with explanations of some key steps.

   function [pc, score, latent, tsquare] = princomp(x);
   %   PRINCOMP Principal Component Analysis (centered and scaled data).
   %   [PC, SCORE, LATENT, TSQUARE] = PRINCOMP(X) takes a data matrix X and
   %   returns the principal components in PC, the so-called Z-scores in SC
   %   ORES, the eigenvalues of the covariance matrix of X in LATENT,
   %   and Hotelling's T-squared statistic for each data point in TSQUARE.
   %   Reference: J. Edward Jackson, A User's Guide to Principal Components
   %   John Wiley & Sons, Inc. 1991 pp. 1-25.
   %   B. Jones 3-17-94
   %   Copyright 1993-2002 The MathWorks, Inc.
   %   $Revision: 2.9 $  $Date: 2002/01/17 21:31:45 $
   [m,n] = size(x);       %  get the length of the rows and columns of matrix x. 
   r = min(m-1,n);        %  max possible rank of X                    
   avg = mean(x);         %  the mean of every column of X
   centerx = (x - avg(ones(m,1),:));     
                          %  centers X by subtracting off column means                 
   [U,latent,pc] = svd(centerx./sqrt(m-1),0);                          
                          %  "economy size" decomposition
   score = centerx*pc;      
                          %  the representation of X in the principal component space
   if nargout < 3
      return;
    end
    latent = diag(latent).^2;
    if (r < n)
    latent = [latent(1:r); zeros(n-r,1)];
    score(:,r+1:end) = 0;
    end
    if nargout < 4
    return;
    end
    tmp = sqrt(diag(1./latent(1:r)))*score(:,1:r)';
    tsquare = sum(tmp.*tmp)';

From the above code, we should pay attention to the following aspects when comparing with SVD method:

First, Rows of [math]\displaystyle{ \,X }[/math] correspond to observations, columns to variables. When using princomp on 2_3 data in assignment 1, note that we take the transpose of [math]\displaystyle{ \,X }[/math].

 >> load 2_3;
 >> [U, score] = princomp(X');

Second, princomp centers X by subtracting off column means.

Third, when [math]\displaystyle{ \,X=UdV' }[/math], princomp uses [math]\displaystyle{ \,V }[/math] as the coefficients for the principal components, rather than [math]\displaystyle{ \,U }[/math].

The following is an example to perform PCA using princomp and SVD respectively to get the same results.

SVD method
 >> load 2_3
 >> mn=mean(X,2);
 >> X1=X-repmat(mn,1,400);
 >> [s d v]=svd(X1');
 >> y=X1'*v;
princomp
 >>[U score]=princomp(X');

Then we can see that y=score, v=U.

Useful resources: LDA and QDA in Matlab [8], [9], [10]

Trick: Using LDA to do QDA - October 7, 2009

There is a trick that allows us to use the linear discriminant analysis (LDA) algorithm to generate as its output a quadratic function that can be used to classify data. This trick is similar to, but more primitive than, the Kernel trick that will be discussed later in the course.

Essentially, the trick involves adding one or more new features (i.e. new dimensions) that just contain our original data projected to that dimension. We then do LDA on our new higher-dimensional data. The answer provided by LDA can then be collapsed onto a lower dimension, giving us a quadratic answer.

Motivation

Why would we want to use LDA over QDA? In situations where we have fewer data points, LDA turns out to be more robust.

If we look back at the equations for LDA and QDA, we see that in LDA we must estimate [math]\displaystyle{ \,\mu_1 }[/math], [math]\displaystyle{ \,\mu_2 }[/math] and [math]\displaystyle{ \,\Sigma }[/math]. In QDA we must estimate all of those, plus another [math]\displaystyle{ \,\Sigma }[/math]; the extra [math]\displaystyle{ \,\frac{d(d+1)}{2} }[/math] parameter estimates make QDA less robust with fewer data points.

Theoretically

Suppose we can estimate some vector [math]\displaystyle{ \underline{w}^T }[/math] such that

[math]\displaystyle{ y = \underline{w}^Tx }[/math]

where [math]\displaystyle{ \underline{w} }[/math] is a d-dimensional column vector, and [math]\displaystyle{ x\ \epsilon\ \mathbb{R}^d }[/math] (vector in d dimensions).

We also have a non-linear function [math]\displaystyle{ g(x) = y = x^Tvx + \underline{w}^Tx }[/math] that we cannot estimate.

Using our trick, we create two new vectors, [math]\displaystyle{ \,\underline{w}^* }[/math] and [math]\displaystyle{ \,x^* }[/math] such that:

[math]\displaystyle{ \underline{w}^{*T} = [w_1,w_2,...,w_d,v_1,v_2,...,v_d] }[/math]

and

[math]\displaystyle{ x^{*T} = [x_1,x_2,...,x_d,{x_1}^2,{x_2}^2,...,{x_d}^2] }[/math]

We can then estimate a new function, [math]\displaystyle{ g^*(x,x^2) = y^* = \underline{w}^{*T}x^* }[/math].

Note that we can do this for any [math]\displaystyle{ x }[/math] and in any dimension; we could extend a [math]\displaystyle{ D \times n }[/math] matrix to a quadratic dimension by appending another [math]\displaystyle{ D \times n }[/math] matrix with the original matrix squared, to a cubic dimension with the original matrix cubed, or even with a different function altogether, such as a [math]\displaystyle{ \,sin(x) }[/math] dimension.

By Example

Let's use our trick to do a quadratic analysis of the 2_3 data using LDA.

>> load 2_3;
>> [U, sample] = princomp(X');
>> sample = sample(:,1:2);
We start off the same way, by using PCA to reduce the dimensionality of our data to 2.
>> X_star = zeros(400,4);
>> X_star(:,1:2) = sample(:,:);
>> for i=1:400
     for j=1:2
       X_star(i,j+2) = X_star(i,j)^2;
     end
   end
This projects our sample into two more dimensions by squaring our initial two dimensional data set.
>> group = ones(400,1);
>> group(201:400) = 2;
>> [class, error, POSTERIOR, logp, coeff] = classify(X_star, X_star, group, 'linear');
>> sum (class==group)
ans =
   375
We can now display our results.
>> k = coeff(1,2).const;
>> l = coeff(1,2).linear;
>> f = sprintf('0 = %g+%g*x+%g*y+%g*(x)^2+%g*(y)^2', k, l(1), l(2),l(3),l(4));
>> ezplot(f,[min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))]);
The plot shows the quadratic decision boundary obtained using LDA in the four-dimensional space on the 2_3.mat data. Counting the blue and red points that are on the wrong side of the decision boundary, we can confirm that we have correctly classified 375 data points.
Not only does LDA give us a better result than it did previously, it actually beats QDA, which only correctly classified 371 data points for this data set. Continuing this procedure by adding another two dimensions with [math]\displaystyle{ x^4 }[/math] (i.e. we set X_star(i,j+2) = X_star(i,j)^4) we can correctly classify 376 points.
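As a hedged sketch of that extension (the variable names below are ours; the notes do not specify whether the squared features are kept alongside the fourth powers, so we keep both here):

>> % Sketch: append fourth-power features to X_star and re-run LDA
>> X_star4 = [X_star, sample.^4];                  % 400 x 6: x, x^2 and x^4 features
>> class4 = classify(X_star4, X_star4, group, 'linear');
>> sum(class4 == group)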

Introduction to Fisher's Discriminant Analysis - October 7, 2009

Fisher's Discriminant Analysis (FDA), also known as Fisher's Linear Discriminant Analysis (LDA) in some sources, is a classical feature extraction technique. It was originally described in 1936 by Sir Ronald Aylmer Fisher, an English statistician and eugenicist who has been described as one of the founders of modern statistical science. His original paper describing FDA can be found here; a Wikipedia article summarizing the algorithm can be found here.

LDA is for classification and FDA is used for feature extraction.

Contrasting FDA with PCA

The goal of FDA is in contrast to our other main feature extraction technique, principal component analysis (PCA).

  • In PCA, we map data to lower dimensions to maximize the variation in those dimensions.
  • In FDA, we map data to lower dimensions to best separate data in different classes.
2 clouds of data, and the lines that might be produced by PCA and FDA.

Because we are concerned with identifying which class data belongs to, FDA is often a better feature extraction algorithm for classification.

Another difference between PCA and FDA is that FDA is a supervised algorithm; that is, we know what class data belongs to, and we exploit that knowledge to find a good projection to lower dimensions.

Intuitive Description of FDA

An intuitive description of FDA can be given by visualizing two clouds of data, as shown above. Ideally, we would like to collapse all of the data points in each cloud onto one point on some projected line, then make those two points as far apart as possible. In doing so, we make it very easy to tell which class a data point belongs to. In practice, it is not possible to collapse all of the points in a cloud to one point, but we attempt to make all of the points in a cloud close to each other while simultaneously far from the points in the other cloud.


Example in R

PCA and FDA primary dimension for normal multivariate data, using R.
>> library(MASS)
>> X = matrix(nrow=400,ncol=2)
>> X[1:200,] = mvrnorm(n=200,mu=c(1,1),Sigma=matrix(c(1,1.5,1.5,3),2))
>> X[201:400,] = mvrnorm(n=200,mu=c(5,3),Sigma=matrix(c(1,1.5,1.5,3),2))
>> Y = c(rep("red",200),rep("blue",200))
Create 2 multivariate normal random variables with [math]\displaystyle{ \, \mu_1 = \left( \begin{array}{c}1 \\ 1 \end{array} \right), \mu_2 = \left( \begin{array}{c}5 \\ 3 \end{array} \right). ~\textrm{Cov} = \left( \begin{array}{cc} 1 & 1.5 \\ 1.5 & 3 \end{array} \right) }[/math]. Create Y, an index indicating which class they belong to.
>> s <- svd(X,nu=1,nv=1)
Calculate the singular value decomposition of X. The most significant direction is in s$v[,1], and is displayed as a black line.
>> s2 <- lda(X,grouping=Y)
The lda function (from the MASS package), given the group for each item, uses Fisher's Linear Discriminant Analysis (FLDA) to find the most discriminant direction. This can be found in s2$scaling.

Now that we've calculated the PCA and FLDA decompositions, we create a plot to demonstrate the differences between the two algorithms. FLDA is clearly better suited to discriminating between two classes whereas PCA is primarily good for reducing the number of dimensions when data is high-dimensional.

>> plot(X,col=Y,main="PCA vs. FDA example")
Plot the set of points, according to colours given in Y.
>> slope = s$v[2]/s$v[1]
>> intercept = mean(X[,2])-slope*mean(X[,1])
>> abline(a=intercept,b=slope)
Plot the main PCA direction, drawn through the mean of the dataset. Only the direction is significant.
>> slope2 = s2$scaling[2]/s2$scaling[1]
>> intercept2 = mean(X[,2])-slope2*mean(X[,1])
>> abline(a=intercept2,b=slope2,col="red")
Plot the FLDA direction, again through the mean.
>> legend(-2,7,legend=c("PCA","FDA"),col=c("black","red"),lty=1)
Adding a legend for the two lines makes the graph easier to interpret.


Distance Metric Learning VS FDA

In many fundamental machine learning problems, the Euclidean distances between data points do not represent the desired topology that we are trying to capture. Kernel methods address this problem by mapping the points into new spaces where Euclidean distances may be more useful. An alternative approach is to construct a Mahalanobis distance (quadratic Gaussian metric) over the input space and use it in place of Euclidean distances. This approach can be equivalently interpreted as a linear transformation of the original inputs, followed by Euclidean distance in the projected space. This approach has attracted a lot of recent interest.

Some of the proposed algorithms are iterative and computationally expensive. In the paper "Distance Metric Learning VS FDA", written by our instructor, a closed-form solution is proposed for an algorithm that previously required expensive semidefinite optimization. The authors provide a new problem setup in which the algorithm performs as well as or better than some standard methods, but without the computational complexity. Furthermore, they show a strong relationship between these methods and Fisher Discriminant Analysis (FDA), and they extend the approach by kernelizing it, allowing for non-linear transformations of the metric.

Fisher's Discriminant Analysis (FDA) - October 9, 2009

The goal of FDA is to reduce the dimensionality of data in order to have separable data points in a new space. We can consider two kinds of problems:

  • 2-class problem
  • multi-class problem
File:graph.jpg
PCA vs FDA

Two-class problem

In the two-class problem, we know in advance that the data points belong to two classes. Intuitively speaking, the points of each class form a cloud around the class mean, and each class may have a different spread. To separate the two classes we must determine the class whose mean is closest to a given point, while also accounting for the different spread of each class, which is represented by its covariance.

Assume [math]\displaystyle{ \underline{\mu_{1}}=\frac{1}{n_{1}}\displaystyle\sum_{i:y_{i}=1}\underline{x_{i}} }[/math] and [math]\displaystyle{ \displaystyle\Sigma_{1} }[/math], represent the mean and covariance of the 1st class, and [math]\displaystyle{ \underline{\mu_{2}}=\frac{1}{n_{2}}\displaystyle\sum_{i:y_{i}=2}\underline{x_{i}} }[/math] and [math]\displaystyle{ \displaystyle\Sigma_{2} }[/math] represent the mean and covariance of the 2nd class. We have to find a transformation which satisfies the following goals:

1.To make the means of these two classes as far apart as possible

In other words, the goal is to maximize the distance between class 1 and class 2 after projection. This can be done by maximizing the distance between the means of the classes after projection. When projecting the data points to a one-dimensional space, all points are projected onto a single line; the line we seek is the one whose direction achieves maximum separation of the classes upon projection. If the original points are [math]\displaystyle{ \underline{x_{i}} \in \mathbb{R}^{d} }[/math] and the projected points are [math]\displaystyle{ \underline{w}^T \underline{x_{i}} }[/math], then the means of the projected points will be [math]\displaystyle{ \underline{w}^T \underline{\mu_{1}} }[/math] and [math]\displaystyle{ \underline{w}^T \underline{\mu_{2}} }[/math] for class 1 and class 2 respectively. The goal now becomes to maximize the squared Euclidean distance between the projected means, [math]\displaystyle{ (\underline{w}^T\underline{\mu_{1}}-\underline{w}^T\underline{\mu_{2}})^T (\underline{w}^T\underline{\mu_{1}}-\underline{w}^T\underline{\mu_{2}}) }[/math]. The steps of this maximization are given below.

2.We want to collapse all data points of each class to a single point, i.e., minimize the covariance within classes

Notice that the variances of the projected classes 1 and 2 are given by [math]\displaystyle{ \underline{w}^T\Sigma_{1}\underline{w} }[/math] and [math]\displaystyle{ \underline{w}^T\Sigma_{2}\underline{w} }[/math]. The second goal is to minimize the sum of these two quantities.

As is demonstrated below, both of these goals can be accomplished simultaneously.

Original points are [math]\displaystyle{ \underline{x_{i}} \in \mathbb{R}^{d} }[/math]
[math]\displaystyle{ \ \{ \underline x_1 \underline x_2 \cdot \cdot \cdot \underline x_n \} }[/math]


Projected points are [math]\displaystyle{ \underline{z_{i}} \in \mathbb{R}^{1} }[/math] with [math]\displaystyle{ \underline{z_{i}} = \underline{w}^T \underline{x_{i}} }[/math]; each [math]\displaystyle{ \ z_i }[/math] is a scalar.

Between class covariance

In this particular case, we want to project all the data points onto a one-dimensional space.


We want to maximize the Euclidean distance between projected means, which is

[math]\displaystyle{ \begin{align} (\underline{w}^T \underline{\mu_{1}} - \underline{w}^T \underline{\mu_{2}})^T(\underline{w}^T \underline{\mu_{1}} - \underline{w}^T \underline{\mu_{2}}) &= (\underline{\mu_{1}}-\underline{\mu_{2}})^T\underline{w} . \underline{w}^T(\underline{\mu_{1}}-\underline{\mu_{2}})\\ &= \underline{w}^T(\underline{\mu_{1}}-\underline{\mu_{2}})(\underline{\mu_{1}}-\underline{\mu_{2}})^T\underline{w} \end{align} }[/math] which is scalar


The quantity [math]\displaystyle{ (\underline{\mu_{1}}-\underline{\mu_{2}})(\underline{\mu_{1}}-\underline{\mu_{2}})^T }[/math] is called between class covariance or [math]\displaystyle{ \,S_{B} }[/math].

The goal is to maximize : [math]\displaystyle{ \underline{w}^T S_{B} \underline{w} }[/math]

Within class covariance

The covariance of class 1 is [math]\displaystyle{ \,\Sigma_{1} }[/math] and the covariance of class 2 is [math]\displaystyle{ \,\Sigma_{2} }[/math], so the variances of the projected points will be [math]\displaystyle{ \,\underline{w}^T \Sigma_{1} \underline{w} }[/math] and [math]\displaystyle{ \underline{w}^T \Sigma_{2} \underline{w} }[/math].

If we sum these two quantities, we have

[math]\displaystyle{ \begin{align} \underline{w}^T \Sigma_{1} \underline{w} + \underline{w}^T \Sigma_{2} \underline{w} &= \underline{w}^T(\Sigma_{1} + \Sigma_{2})\underline{w} \end{align} }[/math]

The quantity [math]\displaystyle{ \,(\Sigma_{1} + \Sigma_{2}) }[/math] is called within class covariance or [math]\displaystyle{ \,S_{W} }[/math]

The goal is to minimize [math]\displaystyle{ \underline{w}^T S_{W} \underline{w} }[/math]

Objective Function

Instead of maximizing [math]\displaystyle{ \underline{w}^T S_{B} \underline{w} }[/math] and minimizing [math]\displaystyle{ \underline{w}^T S_{W} \underline{w} }[/math] we can define the following objective function:

[math]\displaystyle{ \underset{\underline{w}}{max}\ \frac{\underline{w}^T S_{B} \underline{w}}{\underline{w}^T S_{W} \underline{w}} }[/math]

This maximization problem is equivalent to [math]\displaystyle{ \underset{\underline{w}}{max}\ \underline{w}^T S_{B} \underline{w} \equiv \max(\underline w^T S_B \underline w) }[/math] subject to the constraint [math]\displaystyle{ \underline{w}^T S_{W} \underline{w} = 1 }[/math]. The constraint is needed because [math]\displaystyle{ \ \underline w^T S_B \underline w }[/math] has no upper bound on its own (it can be made arbitrarily large by rescaling [math]\displaystyle{ \underline{w} }[/math]), so we fix the scale of [math]\displaystyle{ \underline{w} }[/math] through the denominator.

We can use the Lagrange multiplier method to solve it:

[math]\displaystyle{ L(\underline{w},\lambda) = \underline{w}^T S_{B} \underline{w} - \lambda(\underline{w}^T S_{W} \underline{w} - 1) }[/math] where [math]\displaystyle{ \ \lambda }[/math] is the Lagrange multiplier


With [math]\displaystyle{ \frac{\part L}{\part \underline{w}} = 0 }[/math] we get:

[math]\displaystyle{ \begin{align} &\Rightarrow\ 2\ S_{B}\ \underline{w}\ - 2\lambda\ S_{W}\ \underline{w}\ = 0\\ &\Rightarrow\ S_{B}\ \underline{w}\ =\ \lambda\ S_{W}\ \underline{w} \\ &\Rightarrow\ S_{W}^{-1}\ S_{B}\ \underline{w}\ =\ \lambda\ \underline{w} \end{align} }[/math]

Note that [math]\displaystyle{ \, S_{W}=\Sigma_1+\Sigma_2 }[/math] is the sum of two covariance matrices, which are positive semi-definite; assuming at least one of them is positive definite, [math]\displaystyle{ \,S_{W} }[/math] is positive definite and therefore invertible.

Here [math]\displaystyle{ \underline{w} }[/math] is the eigenvector of [math]\displaystyle{ S_{w}^{-1}\ S_{B} }[/math] corresponding to the largest eigenvalue [math]\displaystyle{ \ \lambda }[/math].

In fact, this expression can be simplified even further.

[math]\displaystyle{ \Rightarrow\ S_{w}^{-1}\ S_{B}\ \underline{w}\ =\ \lambda\ \underline{w} }[/math] with [math]\displaystyle{ S_{B}\ =\ (\underline{\mu_{1}}-\underline{\mu_{2}})(\underline{\mu_{1}}-\underline{\mu_{2}})^T }[/math]
[math]\displaystyle{ \Rightarrow\ S_{w}^{-1}\ (\underline{\mu_{1}}-\underline{\mu_{2}})(\underline{\mu_{1}}-\underline{\mu_{2}})^T \underline{w}\ =\ \lambda\ \underline{w} }[/math]

The quantities [math]\displaystyle{ (\underline{\mu_{1}}-\underline{\mu_{2}})^T \underline{w} }[/math] and [math]\displaystyle{ \lambda }[/math] are scalars.
So we can say that [math]\displaystyle{ \underline{w} }[/math] is proportional to [math]\displaystyle{ S_{w}^{-1}\ (\underline{\mu_{1}}-\underline{\mu_{2}}) }[/math].
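Since the discriminant direction is only determined up to scale, it can be computed directly from this proportionality instead of solving the eigenvalue problem. Below is a minimal Matlab sketch (assuming the class means mu1, mu2 and covariances sigma1, sigma2 have already been estimated; these variable names are ours):

 >> % Sketch: two-class FDA direction directly from w ~ Sw^{-1}(mu1 - mu2)
 >> Sw = sigma1 + sigma2;        % within class covariance
 >> w  = Sw \ (mu1 - mu2);       % solves Sw * w = mu1 - mu2
 >> w  = w / norm(w);            % normalize; only the direction matters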

FDA vs. PCA Example in Matlab

We can compare PCA and FDA through the figure produced by matlab.

The following code produces the figure step by step, with an explanation of each step.

 >>X1=mvnrnd([1,1],[1 1.5;1.5 3],300);
 >>X2=mvnrnd([5,3],[1 1.5;1.5 3],300);
 >>X=[X1;X2];
Create two multivariate normal random variables with [math]\displaystyle{ \, \mu_1 = \left( \begin{array}{c}1 \\ 1 \end{array} \right), \mu_2 = \left( \begin{array}{c}5 \\ 3 \end{array} \right). ~\textrm{Cov} = \left( \begin{array}{cc} 1 & 1.5 \\ 1.5 & 3 \end{array} \right) }[/math].
 >>plot(X(1:300,1),X(1:300,2),'.');
 >>hold on
 >>p1=plot(X(301:600,1),X(301:600,2),'r.');
Plot the data of the two classes respectively.
 >>[U Y]=princomp(X);
 >>plot([0 U(1,1)*10],[0 U(2,1)*10]);
Using PCA to find the principal component and plot it.
 >>sw=2*[1 1.5;1.5 3];
 >>sb=([1; 1]-[5 ;3])*([1; 1]-[5; 3])';
 >>g =inv(sw)*sb;
 >>[v w]=eigs(g);
 >>plot([v(1,1)*5 0],[v(2,1)*5 0],'r')
Using FDA to find the most discriminant direction and plot it.

Now we can compare them through the figure.

PCA and FDA primary dimension for normal multivariate data, using matlab

From the graph, when we project onto the PCA direction the two classes overlap heavily, so PCA does not separate them well. When we project onto the FDA direction, however, the two classes have almost no overlap and are separated well. Thus, FDA is better than PCA here.

Practical example of 2_3

In this matlab example we explore FDA using our familiar data set 2_3 which consists of 200 handwritten "2" and 200 handwritten "3".

X is a matrix of size 64*400 and each column represents an 8*8 image of "2" or "3". Here X1 gets all "2" and X2 gets all "3".

>>load 2_3
>>X1 = X(:, 1:200);
>>X2 = X(:, 201:400);

Next we calculate within class covariance and between class covariance as before.

>>mu1 = mean(X1, 2);
>>mu2 = mean(X2, 2);
>>sb = (mu1 - mu2) * (mu1 - mu2)';
>>sw = cov(X1') + cov(X2');

We use the first two eigenvectors to project the data into a two-dimensional space.

>>[v d] = eigs( inv(sw) * sb );
>>w = v(:, 1:2);
>>X_hat = w'*X;

Finally we plot the data and visualize the effect of FDA.

>> scatter(ones(1,200),X_hat(1,1:200))
>> hold on
>> scatter(ones(1,200),X_hat(1,201:400),'r')
File:fda2-3.jpg
FDA projection of data 2_3, using Matlab.

The data are mapped onto a single line, and the two classes are separated perfectly here.


An extension of Fisher's discriminant analysis for stochastic processes

A general notion of Fisher's linear discriminant analysis can extend the classical multivariate concept to situations that allow for function-valued random elements. The development uses a bijective mapping that connects a second order process to the reproducing kernel Hilbert space generated by its within class covariance kernel. This approach provides a seamless transition between Fisher's original development and infinite dimensional settings that lends itself well to computation via smoothing and regularization.

Link for Algorithm introduction:[[11]]

FDA for Multi-class Problems - October 14, 2009

FDA method for Multi-class Problems

For the [math]\displaystyle{ k }[/math]-class problem, we need to find a projection from [math]\displaystyle{ d }[/math]-dimensional space to a [math]\displaystyle{ (k-1) }[/math]-dimensional space.

(With more than two classes, it is reasonable to project onto at least two directions.)

Basically, the within class covariance matrix [math]\displaystyle{ \mathbf{S}_{W} }[/math] is easy to obtain:

[math]\displaystyle{ \begin{align} \mathbf{S}_{W} = \sum_{i=1}^{k} \mathbf{S}_{W,i} \end{align} }[/math]

where [math]\displaystyle{ \mathbf{S}_{W,i} = \frac{1}{n_{i}}\sum_{j: y_{j}=i}(\mathbf{x}_{j} - \mathbf{\mu}_{i})(\mathbf{x}_{j} - \mathbf{\mu}_{i})^{T} }[/math] and [math]\displaystyle{ \mathbf{\mu}_{i} = \frac{\sum_{j: y_{j}=i}\mathbf{x}_{j}}{n_{i}} }[/math].

However, the between class covariance matrix [math]\displaystyle{ \mathbf{S}_{B} }[/math] is not as easy to obtain. One simplification is to assume that the total covariance [math]\displaystyle{ \mathbf{S}_{T} }[/math] of the data is constant; since [math]\displaystyle{ \mathbf{S}_{W} }[/math] is easy to compute, we can then get [math]\displaystyle{ \mathbf{S}_{B} }[/math] using the following relationship:

[math]\displaystyle{ \begin{align} \mathbf{S}_{B} = \mathbf{S}_{T} - \mathbf{S}_{W} \end{align} }[/math]

Actually, there is another way to derive a general form for [math]\displaystyle{ \mathbf{S}_{B} }[/math]. Denote the total mean vector [math]\displaystyle{ \mathbf{\mu} }[/math] by

[math]\displaystyle{ \begin{align} \mathbf{\mu} = \frac{1}{n}\sum_{i}\mathbf{x_{i}} = \frac{1}{n}\sum_{j=1}^{k}n_{j}\mathbf{\mu}_{j} \end{align} }[/math]

Thus the total covariance matrix [math]\displaystyle{ \mathbf{S}_{T} }[/math] is

[math]\displaystyle{ \begin{align} \mathbf{S}_{T} = \sum_{i}(\mathbf{x_{i}-\mu})(\mathbf{x_{i}-\mu})^{T} \end{align} }[/math]

Thus we obtain

[math]\displaystyle{ \begin{align} & \mathbf{S}_{T} = \sum_{i=1}^{k}\sum_{j: y_{j}=i}(\mathbf{x}_{j} - \mathbf{\mu}_{i} + \mathbf{\mu}_{i} - \mathbf{\mu})(\mathbf{x}_{j} - \mathbf{\mu}_{i} + \mathbf{\mu}_{i} - \mathbf{\mu})^{T} \\& = \sum_{i=1}^{k}\sum_{j: y_{j}=i}(\mathbf{x}_{j}-\mathbf{\mu}_{i})(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{T}+ \sum_{i=1}^{k}\sum_{j: y_{j}=i}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T} \\& = \mathbf{S}_{W} + \sum_{i=1}^{k} n_{i}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T} \end{align} }[/math]

(The cross terms vanish because [math]\displaystyle{ \sum_{j: y_{j}=i}(\mathbf{x}_{j}-\mathbf{\mu}_{i}) = \mathbf{0} }[/math] for each class.) The first term is the within class covariance [math]\displaystyle{ \mathbf{S}_{W} }[/math] (in its unnormalized form), so we define the second term as the general between class covariance matrix [math]\displaystyle{ \mathbf{S}_{B} }[/math], and thus we obtain

[math]\displaystyle{ \begin{align} \mathbf{S}_{B} = \sum_{i=1}^{k} n_{i}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T} \end{align} }[/math]

Therefore,

[math]\displaystyle{ \begin{align} \mathbf{S}_{T} = \mathbf{S}_{W} + \mathbf{S}_{B} \end{align} }[/math]
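This decomposition can be checked numerically. The following is a rough Matlab sketch, not part of the original notes; it uses the unnormalized per-class scatter matrices [math]\displaystyle{ \sum_{j: y_{j}=i}(\mathbf{x}_{j}-\mathbf{\mu}_{i})(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{T} }[/math], which is the form appearing in the derivation above, and all variable names are our own:

 >> % Sketch: numerically verify S_T = S_W + S_B on random 3-class data in 2 dimensions
 >> n = [50 60 70];  k = 3;  d = 2;
 >> mus = [0 0; 3 1; 1 4];                       % one class mean per row
 >> X = [];  y = [];
 >> for i = 1:k
      X = [X; randn(n(i),d) + repmat(mus(i,:), n(i), 1)];
      y = [y; i*ones(n(i),1)];
    end
 >> mu = mean(X,1);
 >> Xc = X - repmat(mu, size(X,1), 1);
 >> St = Xc' * Xc;                               % total scatter
 >> Sw = zeros(d);  Sb = zeros(d);
 >> for i = 1:k
      Xi = X(y==i,:);  mui = mean(Xi,1);
      Sw = Sw + (Xi - repmat(mui,n(i),1))' * (Xi - repmat(mui,n(i),1));
      Sb = Sb + n(i) * (mui - mu)' * (mui - mu);
    end
 >> max(max(abs(St - (Sw + Sb))))                % numerically zero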

Recall that in the two-class problem, we defined

[math]\displaystyle{ \begin{align} \mathbf{S}_{B^{\ast}} = (\mathbf{\mu}_{1}-\mathbf{\mu}_{2})(\mathbf{\mu}_{1}-\mathbf{\mu}_{2})^{T} \end{align} }[/math]

Note that with [math]\displaystyle{ \mathbf{\mu} = \frac{n_{1}\mathbf{\mu}_{1}+n_{2}\mathbf{\mu}_{2}}{n} }[/math] we have [math]\displaystyle{ \mathbf{\mu}_{1}-\mathbf{\mu} = \frac{n_{2}}{n}(\mathbf{\mu}_{1}-\mathbf{\mu}_{2}) }[/math] and [math]\displaystyle{ \mathbf{\mu}_{2}-\mathbf{\mu} = -\frac{n_{1}}{n}(\mathbf{\mu}_{1}-\mathbf{\mu}_{2}) }[/math].

From the general form,

[math]\displaystyle{ \begin{align} & \mathbf{S}_{B} = n_{1}(\mathbf{\mu}_{1}-\mathbf{\mu})(\mathbf{\mu}_{1}-\mathbf{\mu})^{T} + n_{2}(\mathbf{\mu}_{2}-\mathbf{\mu})(\mathbf{\mu}_{2}-\mathbf{\mu})^{T} \end{align} }[/math]

Substituting these expressions into the general form gives [math]\displaystyle{ \mathbf{S}_{B} = \frac{n_{1}n_{2}}{n}(\mathbf{\mu}_{1}-\mathbf{\mu}_{2})(\mathbf{\mu}_{1}-\mathbf{\mu}_{2})^{T} }[/math], which is simply a positive multiple of [math]\displaystyle{ \mathbf{S}_{B^{\ast}} }[/math]; the two definitions therefore lead to the same discriminant directions.

Now, we are trying to find the optimal transformation. Basically, we have

[math]\displaystyle{ \begin{align} \mathbf{z}_{i} = \mathbf{W}^{T}\mathbf{x}_{i}, \quad i=1,2,...,n \end{align} }[/math]

where [math]\displaystyle{ \mathbf{z}_{i} }[/math] is a [math]\displaystyle{ (k-1)\times 1 }[/math] vector, [math]\displaystyle{ \mathbf{W} }[/math] is a [math]\displaystyle{ d\times (k-1) }[/math] transformation matrix, i.e. [math]\displaystyle{ \mathbf{W} = [\mathbf{w}_{1}, \mathbf{w}_{2},..., \mathbf{w}_{k-1}] }[/math], and [math]\displaystyle{ \mathbf{x}_{i} }[/math] is a [math]\displaystyle{ d\times 1 }[/math] column vector.

Thus we obtain

[math]\displaystyle{ \begin{align} & \mathbf{S}_{W}^{\ast} = \sum_{i=1}^{k}\sum_{j: y_{j}=i}(\mathbf{W}^{T}\mathbf{x}_{j}-\mathbf{W}^{T}\mathbf{\mu}_{i})(\mathbf{W}^{T}\mathbf{x}_{j}-\mathbf{W}^{T}\mathbf{\mu}_{i})^{T} \\ & = \sum_{i=1}^{k}\sum_{j: y_{j}=i}\mathbf{W}^{T}(\mathbf{x}_{j}-\mathbf{\mu}_{i})(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{T}\mathbf{W} \\ & = \mathbf{W}^{T}\left[\sum_{i=1}^{k}\sum_{j: y_{j}=i}(\mathbf{x}_{j}-\mathbf{\mu}_{i})(\mathbf{x}_{j}-\mathbf{\mu}_{i})^{T}\right]\mathbf{W} \\ & = \mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W} \end{align} }[/math]

Similarly, we obtain

[math]\displaystyle{ \begin{align} & \mathbf{S}_{B}^{\ast} = \sum_{i=1}^{k}n_{i}(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})^{T} \\ & = \sum_{i=1}^{k}n_{i}\mathbf{W}^{T}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T}\mathbf{W} \\ & = \mathbf{W}^{T}\left[ \sum_{i=1}^{k}n_{i}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T}\right]\mathbf{W} \\ & = \mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W} \end{align} }[/math]

Now, we use the determinant of the matrix, i.e. the product of the eigenvalues of the matrix, as our measure.

[math]\displaystyle{ \begin{align} \phi(\mathbf{W}) = \frac{|\mathbf{S}_{B}^{\ast}|}{|\mathbf{S}_{W}^{\ast}|} = \frac{|\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}|}{|\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}|} \end{align} }[/math]

The solution to this optimization problem is that the columns of the transformation matrix [math]\displaystyle{ \mathbf{W} }[/math] are exactly the eigenvectors that correspond to the largest [math]\displaystyle{ k-1 }[/math] eigenvalues in

[math]\displaystyle{ \begin{align} \mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{w}_{i} = \lambda_{i}\mathbf{w}_{i} \end{align} }[/math]

Also, note that we can use

[math]\displaystyle{ \begin{align} \sum_{i=1}^{k}n_{i}\|(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})^{T}\|^{2} \end{align} }[/math]

as our measure.

Recall that

[math]\displaystyle{ \begin{align} \|\mathbf{X}\|^2 = Tr(\mathbf{X}^{T}\mathbf{X}) \end{align} }[/math]

Thus we obtain that

[math]\displaystyle{ \begin{align} & \sum_{i=1}^{k}n_{i}\|(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})^{T}\|^{2} \\ & = \sum_{i=1}^{k}n_{i}Tr[(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})^{T}] \\ & = Tr[\sum_{i=1}^{k}n_{i}(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})(\mathbf{W}^{T}\mathbf{\mu}_{i}-\mathbf{W}^{T}\mathbf{\mu})^{T}] \\ & = Tr[\sum_{i=1}^{k}n_{i}\mathbf{W}^{T}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T}\mathbf{W}] \\ & = Tr[\mathbf{W}^{T}\sum_{i=1}^{k}n_{i}(\mathbf{\mu}_{i}-\mathbf{\mu})(\mathbf{\mu}_{i}-\mathbf{\mu})^{T}\mathbf{W}] \\ & = Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}] \end{align} }[/math]

Similarly, we can get [math]\displaystyle{ Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}] }[/math]. Thus we have following criterion function

[math]\displaystyle{ \begin{align} \phi(\mathbf{W}) = \frac{Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}]}{Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}]} \end{align} }[/math]

Similar to the two class case problem, we have:

max [math]\displaystyle{ Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}] }[/math] subject to [math]\displaystyle{ Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}]=1 }[/math]

To solve this optimization problem a Lagrange multiplier [math]\displaystyle{ \Lambda }[/math], which actually is a [math]\displaystyle{ d \times d }[/math] diagonal matrix, is introduced:

[math]\displaystyle{ \begin{align} L(\mathbf{W},\Lambda) = Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}] - \Lambda\left\{ Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}] - 1 \right\} \end{align} }[/math]

Differentiating with respect to [math]\displaystyle{ \mathbf{W} }[/math] we obtain:

[math]\displaystyle{ \begin{align} \frac{\partial L}{\partial \mathbf{W}} = (\mathbf{S}_{B} + \mathbf{S}_{B}^{T})\mathbf{W} - \Lambda (\mathbf{S}_{W} + \mathbf{S}_{W}^{T})\mathbf{W} \end{align} }[/math]

Note that [math]\displaystyle{ \mathbf{S}_{B} }[/math] and [math]\displaystyle{ \mathbf{S}_{W} }[/math] are both symmetric matrices; thus, setting the first derivative to zero, we obtain:

[math]\displaystyle{ \begin{align} \mathbf{S}_{B}\mathbf{W} - \Lambda\mathbf{S}_{W}\mathbf{W}=0 \end{align} }[/math]

Thus,

[math]\displaystyle{ \begin{align} \mathbf{S}_{B}\mathbf{W} = \Lambda\mathbf{S}_{W}\mathbf{W} \end{align} }[/math]

where

[math]\displaystyle{ \mathbf{\Lambda} = \begin{pmatrix} \lambda_{1} & & 0\\ &\ddots&\\ 0 & &\lambda_{d} \end{pmatrix} }[/math]

and [math]\displaystyle{ \mathbf{W} = [\mathbf{w}_{1}, \mathbf{w}_{2},..., \mathbf{w}_{k-1}] }[/math].

As a matter of fact, [math]\displaystyle{ \mathbf{\Lambda} }[/math] can have at most [math]\displaystyle{ \mathbf{k-1} }[/math] nonzero eigenvalues, because [math]\displaystyle{ rank({S}_{W}^{-1}\mathbf{S}_{B})\leq k-1 }[/math].

Therefore, the solution is the same as in the previous case: the columns of the transformation matrix [math]\displaystyle{ \mathbf{W} }[/math] are exactly the eigenvectors that correspond to the largest [math]\displaystyle{ k-1 }[/math] eigenvalues with respect to

[math]\displaystyle{ \begin{align} \mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{w}_{i} = \lambda_{i}\mathbf{w}_{i} \end{align} }[/math]
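As a rough Matlab sketch of this multi-class procedure (assuming an [math]\displaystyle{ n \times d }[/math] data matrix X with labels y in 1..k, and scatter matrices Sw and Sb built as above, e.g. with the loop from the earlier numerical check; all names are ours):

 >> % Sketch: project the data onto the top (k-1) discriminant directions
 >> [V, D] = eig(Sb, Sw);                    % generalized eigenproblem Sb*w = lambda*Sw*w
 >> [vals, idx] = sort(diag(D), 'descend');  % order directions by eigenvalue
 >> W = V(:, idx(1:k-1));                    % d x (k-1) transformation matrix
 >> Z = X * W;                               % n x (k-1) projected data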

Generalization of Fisher's Linear Discriminant

Fisher's linear discriminant (Fisher, 1936) is very popular among users of discriminant analysis. Some of the reasons for this are its simplicity and the fact that it does not require strict assumptions. However, it has optimality properties only if the underlying distributions of the groups are multivariate normal. It is also easy to verify that the discriminant rule obtained can be severely affected by even a small number of outlying observations. Outliers are very hard to detect in multivariate data sets, and even when they are detected, simply discarding them is not the most efficient way of handling the situation. Hence there is a need for robust procedures that can accommodate outliers without being strongly affected by them. A generalization of Fisher's linear discriminant algorithm [[12]] has been developed that leads easily to a very robust procedure.

Linear Regression Models - October 14, 2009

Regression analysis is a general statistical technique for modelling and analyzing how a dependent variable changes according to changes in independent variables. In classification, we are interested in how a label, [math]\displaystyle{ \,y }[/math], changes according to changes in [math]\displaystyle{ \,X }[/math].

We will start by considering a very simple regression model, the linear regression model.

General information on linear regression can be found at the University of South Florida and this MIT lecture.

For the purpose of classification, the linear regression model assumes that the regression function [math]\displaystyle{ \,E(Y|X) }[/math] is linear in the inputs [math]\displaystyle{ \,\mathbf{x}_{1}, ..., \mathbf{x}_{p} }[/math].

The simple linear regression model has the general form:

[math]\displaystyle{ \begin{align} f(\mathbf{x}_{i}) = \beta^{T}\mathbf{x}_{i}+\beta_{0} \end{align} }[/math]

where [math]\displaystyle{ \,\beta }[/math] is a [math]\displaystyle{ d \times 1 }[/math] vector of coefficients and [math]\displaystyle{ \ \mathbf{x}_i }[/math] is a [math]\displaystyle{ d \times 1 }[/math] input vector.

Given input data [math]\displaystyle{ \,\mathbf{x}_{1}, ..., \mathbf{x}_{n} }[/math] and [math]\displaystyle{ \,y_{1}, ..., y_{n} }[/math], our goal is to find [math]\displaystyle{ \,\beta }[/math] and [math]\displaystyle{ \,\beta_0 }[/math] such that the linear model fits the data while minimizing the sum of squared errors, using the Least Squares method.

Note that vectors [math]\displaystyle{ \mathbf{x}_{i} }[/math] could be numerical inputs, transformations of the original data, i.e. [math]\displaystyle{ \log \mathbf{x}_{i} }[/math] or [math]\displaystyle{ \sin \mathbf{x}_{i} }[/math], or basis expansions, i.e. [math]\displaystyle{ \mathbf{x}_{i}^{2} }[/math] or [math]\displaystyle{ \mathbf{x}_{i}\times \mathbf{x}_{j} }[/math].

Denote [math]\displaystyle{ \mathbf{X} }[/math] as a [math]\displaystyle{ n\times(d+1) }[/math] matrix with each row an input vector (with 1 in the first position), [math]\displaystyle{ \,\beta = (\beta_0, \beta_1,..., \beta_{d})^{T} }[/math] and [math]\displaystyle{ \mathbf{y} }[/math] as a [math]\displaystyle{ n \times 1 }[/math] vector of outputs. We then try to minimize the residual sum-of-squares

[math]\displaystyle{ \begin{align} \mathrm{RSS}(\beta)=(\mathbf{y}-\mathbf{X}\beta)^{T}(\mathbf{y}-\mathbf{X}\beta) \end{align} }[/math]

This is a quadratic function in the [math]\displaystyle{ \,d+1 }[/math] parameters. Differentiating with respect to [math]\displaystyle{ \,\beta }[/math] we obtain

[math]\displaystyle{ \begin{align} \frac{\partial \mathrm{RSS}}{\partial \beta} = -2\mathbf{X}^{T}(\mathbf{y}-\mathbf{X}\beta) \end{align} }[/math]
[math]\displaystyle{ \begin{align} \frac{\partial^{2}\mathrm{RSS}}{\partial \beta \partial \beta^{T}}=2\mathbf{X}^{T}\mathbf{X} \end{align} }[/math]

Set the first derivative to zero

[math]\displaystyle{ \begin{align} \mathbf{X}^{T}(\mathbf{y}-\mathbf{X}\beta)=0 \end{align} }[/math]

we obtain the solution

[math]\displaystyle{ \begin{align} \hat \beta = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y} \end{align} }[/math]

Thus the fitted values at the inputs are

[math]\displaystyle{ \begin{align} \mathbf{\hat y} = \mathbf{X}\hat\beta = \mathbf{X} (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y} \end{align} }[/math]

where [math]\displaystyle{ \mathbf{H} = \mathbf{X} (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T} }[/math] is called the hat matrix.
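As a small hedged check of these formulas on toy data (the variable names are ours), one can verify that the hat matrix is symmetric and idempotent, which is why it simply "puts the hat on" [math]\displaystyle{ \mathbf{y} }[/math]:

 >> % Sketch: least squares fit and hat matrix on a toy one-dimensional example
 >> n = 20;
 >> x = (1:n)';
 >> X = [ones(n,1) x];                   % n x (d+1) design matrix, first column of ones
 >> y = 2 + 3*x + randn(n,1);            % noisy linear response
 >> beta_hat = (X'*X) \ (X'*y);          % (X^T X)^{-1} X^T y
 >> H = X * ((X'*X) \ X');               % hat matrix
 >> y_hat = H * y;                       % fitted values
 >> max(max(abs(H - H')))                % symmetry: numerically zero
 >> max(max(abs(H*H - H)))               % idempotence: numerically zero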


  • Note For classification purposes, this is not a correct model. Recall the following application of Bayes classifier:

[math]\displaystyle{ r(x)= P( Y=k | X=x )= \frac{f_{k}(x)\pi_{k}}{\Sigma_{k}f_{k}(x)\pi_{k}} }[/math]
It is clear that, to make sense mathematically, [math]\displaystyle{ \displaystyle r(x) }[/math] must be a value between 0 and 1. If it is estimated with the regression function [math]\displaystyle{ \displaystyle r(x)=E(Y|X=x) }[/math] and [math]\displaystyle{ \mathbf{\hat\beta} }[/math] is learned as above, then there is nothing that restricts [math]\displaystyle{ \displaystyle r(x) }[/math] to values between 0 and 1. Regression is nevertheless a more direct approach to classification, since it does not need to estimate [math]\displaystyle{ \ f_k(x) }[/math] and [math]\displaystyle{ \ \pi_k }[/math]: for a 0/1 label, [math]\displaystyle{ \ 1 \times P(Y=1|X=x)+0 \times P(Y=0|X=x)=E(Y|X) }[/math], so the regression function estimates the posterior directly. Although the fitted values are not constrained to [0,1], thresholding them (for example at 0.5, as in the Matlab example below) can still yield a decent classifier. An alternative coding of the responses that is sometimes used is [math]\displaystyle{ \ y_i=\frac{1}{n_1} }[/math] for one class and [math]\displaystyle{ \ y_i=\frac{-1}{n_2} }[/math] for the other.

A linear regression example in Matlab

We can see how linear regression works through the following example in Matlab. The following is the code and the explanation for each step.

Again, we use the data in 2_3.m.

 >>load 2_3;
 >>[U, sample] = princomp(X');
 >>sample = sample(:,1:2);

We carry out Principal Component Analysis (PCA) to reduce the dimensionality from 64 to 2.

 >>y = zeros(400,1);
 >>y(201:400) = 1;

We let y represent the set of labels coded as 0 and 1.

 >>x=[sample';ones(1,400)];

Construct x by appending a row of ones to the transposed data, giving a 3-by-400 matrix.

 >>b=inv(x*x')*x*y;

Calculate b, which represents [math]\displaystyle{ \beta }[/math] in the linear regression model.

 >>x1=x';
 >>for i=1:400
   if x1(i,:)*b>0.5
        plot(x1(i,1),x1(i,2),'.')
        hold on
   elseif x1(i,:)*b < 0.5
       plot(x1(i,1),x1(i,2),'r.')
   end 
 end

Plot the data points, coloured according to whether the fitted value is greater or less than 0.5.

File:linearregression.png
The figure shows the classification of the data points in 2_3.m by the linear regression model.

Comments about Linear regression model

The linear regression model is one of the simplest and most popular ways to analyze the relationship between data sets. However, it has disadvantages as well as advantages, and we should be clear about them before we apply the model.

Advantages: Linear least squares regression has earned its place as the primary tool for process modeling because of its effectiveness and completeness. Though there are types of data that are better described by functions that are nonlinear in the parameters, many processes in science and engineering are well-described by linear models. This is because either the processes are inherently linear or because, over short ranges, any process can be well-approximated by a linear model. The estimates of the unknown parameters obtained from linear least squares regression are the optimal estimates from a broad class of possible parameter estimates under the usual assumptions used for process modeling. Practically speaking, linear least squares regression makes very efficient use of the data. Good results can be obtained with relatively small data sets. Finally, the theory associated with linear regression is well-understood and allows for construction of different types of easily-interpretable statistical intervals for predictions, calibrations, and optimizations. These statistical intervals can then be used to give clear answers to scientific and engineering questions.

Disadvantages: The main disadvantages of linear least squares are limitations in the shapes that linear models can assume over long ranges, possibly poor extrapolation properties, and sensitivity to outliers. Linear models with nonlinear terms in the predictor variables curve relatively slowly, so for inherently nonlinear processes it becomes increasingly difficult to find a linear model that fits the data well as the range of the data increases. As the explanatory variables become extreme, the output of the linear model will also always become more extreme. This means that linear models may not be effective for extrapolating the results of a process for which data cannot be collected in the region of interest. Of course extrapolation is potentially dangerous regardless of the model type. Finally, while the method of least squares often gives optimal estimates of the unknown parameters, it is very sensitive to the presence of unusual data points in the data used to fit a model. One or two outliers can sometimes seriously skew the results of a least squares analysis. This makes model validation, especially with respect to outliers, critical to obtaining sound answers to the questions motivating the construction of the model.

Useful links: [13], [14]

Logistic Regression- October 16, 2009

The logistic regression model arises from the desire to model the posterior probabilities of the [math]\displaystyle{ \displaystyle K }[/math] classes via linear functions in [math]\displaystyle{ \displaystyle x }[/math], while at the same time ensuring that they sum to one and remain in [0,1]. Logistic regression models are usually fit by maximum likelihood, using the conditional likelihood [math]\displaystyle{ \displaystyle Pr(Y|X) }[/math]. Since [math]\displaystyle{ \displaystyle Pr(Y|X) }[/math] completely specifies the conditional distribution, the multinomial distribution is appropriate. This model is widely used in biostatistical applications with two classes; for instance: people survive or die, have a disease or not, have a risk factor or not.

Logistic function

A logistic function or logistic curve is the most common sigmoid curve.

[math]\displaystyle{ y = \frac{1}{1+e^{-x}} }[/math]

1. [math]\displaystyle{ \frac{dy}{dx} = y(1-y)=\frac{e^{x}}{(1+e^{x})^{2}} }[/math]

2. [math]\displaystyle{ y(0) = \frac{1}{2} }[/math]

3. [math]\displaystyle{ \int y dx = ln(1 + e^{x}) }[/math]

4. [math]\displaystyle{ y(x) = \frac{1}{2} + \frac{1}{4}x - \frac{1}{48}x^{3} + \frac{1}{480}x^{5} - \cdots }[/math]

5. The logistic curve shows early exponential growth for negative [math]\displaystyle{ x }[/math], which slows to linear growth of slope 1/4 near [math]\displaystyle{ x = 0 }[/math], then approaches [math]\displaystyle{ y = 1 }[/math] with an exponentially decaying gap.
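A quick hedged check of the first property, plotting the curve and comparing the analytic derivative [math]\displaystyle{ y(1-y) }[/math] with a numerical one (this Matlab snippet is ours, not from the lecture):

 >> % Sketch: verify dy/dx = y(1-y) for the logistic function
 >> x = linspace(-6, 6, 200);
 >> y = 1 ./ (1 + exp(-x));
 >> dy_analytic = y .* (1 - y);
 >> dy_numeric  = gradient(y, x);        % finite-difference approximation
 >> plot(x, y, 'b', x, dy_analytic, 'r', x, dy_numeric, 'g--');
 >> legend('y', 'y(1-y)', 'numerical dy/dx');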

Intuition behind Logistic Regression

Recall that, for classification purposes, the linear regression model presented in the above section is not correct because it does not force [math]\displaystyle{ \,r(x) }[/math] to be between 0 and 1 and sum to 1. Consider the following log odds model (for two classes):

[math]\displaystyle{ \log\left(\frac{P(Y=1|X=x)}{P(Y=0|X=x)}\right)=\beta^Tx }[/math]

Calculating [math]\displaystyle{ \,P(Y=1|X=x) }[/math] leads us to the logistic regression model, which as opposed to the linear regression model, allows the modelling of the posterior probabilities of the classes through linear methods and at the same time ensures that they sum to one and are between 0 and 1. It is a type of Generalized Linear Model (GLM).
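For completeness, the short calculation behind this step: exponentiating the log odds model and solving for the posterior gives

[math]\displaystyle{ \frac{P(Y=1|X=x)}{1-P(Y=1|X=x)}=\exp(\beta^Tx) \;\Rightarrow\; P(Y=1|X=x)=\frac{\exp(\beta^Tx)}{1+\exp(\beta^Tx)}, \qquad P(Y=0|X=x)=\frac{1}{1+\exp(\beta^Tx)} }[/math]

which is exactly the model written out in the next section.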

The Logistic Regression Model

The logistic regression model for the two class case is defined as

Class 1

[math]\displaystyle{ P(Y=1 | X=x) }[/math]
[math]\displaystyle{ P(Y=1 | X=x) =\frac{\exp(\underline{\beta}^T \underline{x})}{1+\exp(\underline{\beta}^T \underline{x})}=P(x;\underline{\beta}) }[/math]


Then we have that

Class 0

[math]\displaystyle{ P(Y=0 | X=x) }[/math]
[math]\displaystyle{ P(Y=0 | X=x) = 1-P(Y=1 | X=x)=1-\frac{\exp(\underline{\beta}^T \underline{x})}{1+\exp(\underline{\beta}^T \underline{x})}=\frac{1}{1+\exp(\underline{\beta}^T \underline{x})} }[/math]

Fitting a Logistic Regression

Logistic regression tries to fit a distribution. The fitting of logistic regression models is usually accomplished by maximum likelihood, using Pr(Y|X). The maximum likelihood estimate of [math]\displaystyle{ \underline\beta }[/math] maximizes the probability of observing the data [math]\displaystyle{ \displaystyle{x_{1},...,x_{n}} }[/math] under the assumed model. Combining [math]\displaystyle{ \displaystyle P(Y=1 | X=x) }[/math] and [math]\displaystyle{ \displaystyle P(Y=0 | X=x) }[/math] as follows, we can consider the two classes at the same time:

[math]\displaystyle{ p(\underline{x_{i}};\underline{\beta}) = \left(\frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)^{y_i} \left(\frac{1}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)^{1-y_i} }[/math]

Assuming the data [math]\displaystyle{ \displaystyle {x_{1},...,x_{n}} }[/math] is drawn independently, the likelihood function is

[math]\displaystyle{ \begin{align} \mathcal{L}(\theta)&=p({x_{1},...,x_{n}};\theta)\\ &=\displaystyle p(x_{1};\theta) p(x_{2};\theta)... p(x_{n};\theta) \quad \mbox{(by independence)}\\ &= \prod_{i=1}^n p(x_{i};\theta) \end{align} }[/math]

Since it is more convenient to work with the log-likelihood function, taking the log of both sides gives

[math]\displaystyle{ \displaystyle l(\theta)=\displaystyle \sum_{i=1}^n \log p(x_{i};\theta) }[/math]

So,

[math]\displaystyle{ \begin{align} l(\underline\beta)&=\displaystyle\sum_{i=1}^n y_{i}\log\left(\frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x_i})}\right)+(1-y_{i})\log\left(\frac{1}{1+\exp(\underline{\beta}^T\underline{x_i})}\right)\\ &= \displaystyle\sum_{i=1}^n y_{i}(\underline{\beta}^T\underline{x_i}-\log(1+\exp(\underline{\beta}^T\underline{x_i}))+(1-y_{i})(-\log(1+\exp(\underline{\beta}^T\underline{x_i}))\\ &= \displaystyle\sum_{i=1}^n y_{i}\underline{\beta}^T\underline{x_i}-y_{i} \log(1+\exp(\underline{\beta}^T\underline{x_i}))- \log(1+\exp(\underline{\beta}^T\underline{x_i}))+y_{i} \log(1+\exp(\underline{\beta}^T\underline{x_i}))\\ &=\displaystyle\sum_{i=1}^n y_{i}\underline{\beta}^T\underline{x_i}- \log(1+\exp(\underline{\beta}^T\underline{x_i}))\\ \end{align} }[/math]


To maximize the log-likelihood, set its derivative to 0.

[math]\displaystyle{ \begin{align} \frac{\partial l}{\partial \underline{\beta}} &= \sum_{i=1}^n \left[{y_i} \underline{x}_i- \frac{\exp(\underline{\beta}^T \underline{x_i})}{1+\exp(\underline{\beta}^T \underline{x}_i)}\underline{x}_i\right]\\ &=\sum_{i=1}^n \left[{y_i} \underline{x}_i - p(\underline{x}_i;\underline{\beta})\underline{x}_i\right] \end{align} }[/math]

Setting this derivative to zero gives a system of nonlinear equations in [math]\displaystyle{ \underline\beta }[/math], one for each of its components. If the first component of each [math]\displaystyle{ \underline{x}_i }[/math] is 1 (the intercept term), the first equation states that [math]\displaystyle{ \ \sum_{i=1}^n {y_i} =\sum_{i=1}^n p(\underline{x}_i;\underline{\beta}) }[/math], i.e. the expected number of class ones matches the observed number.

To solve this equation, the Newton-Raphson algorithm is used which requires the second derivative in addition to the first derivative. This is demonstrated in the next section.

Advantages and Disadvantages

Logistic regression has several advantages over discriminant analysis:

  • it is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group
  • It does not assume a linear relationship between the IV and DV
  • It may handle nonlinear effects
  • You can add explicit interaction and power terms
  • The DV need not be normally distributed.
  • There is no homogeneity of variance assumption.
  • Normally distributed error terms are not assumed.
  • It does not require that the independents be interval.
  • It does not require that the independents be unbounded.

With all this flexibility, you might wonder why anyone would ever use discriminant analysis or any other method of analysis. Unfortunately, the advantages of logistic regression come at a cost: it requires much more data to achieve stable, meaningful results. With standard regression, and DA, typically 20 data points per predictor is considered the lower bound. For logistic regression, at least 50 data points per predictor is necessary to achieve stable results.

some resources: [15], [16]

Extension

  • When we are dealing with a problem with more than two classes, we need to generalize our logistic regression to a Multinomial Logit model.
  • Limitations of Logistic Regression:
1. No assumptions are made about the distributions of the features of the data (i.e. the explanatory variables). However, the features should not be highly correlated with one another, because this could cause problems with estimation.
2. A large number of data points (i.e. a large sample size) is required for logistic regression to provide sufficient numbers in both classes. The more features/dimensions the data has, the larger the sample size required.

Logistic Regression (2) - October 19, 2009

Logistic Regression Model

Recall that in the last lecture, we learned the logistic regression model.

  • [math]\displaystyle{ P(Y=1 | X=x)=P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})} }[/math]
  • [math]\displaystyle{ P(Y=0 | X=x)=1-P(\underline{x}_i;\underline{\beta})=\frac{1}{1+exp(\underline{\beta}^T \underline{x})} }[/math]

Find [math]\displaystyle{ \underline{\beta} }[/math]

Criteria: find a [math]\displaystyle{ \underline{\beta} }[/math] that maximizes the conditional likelihood of Y given X using the training data.

From above, we have the first derivative of the log-likelihood:

[math]\displaystyle{ \frac{\partial l}{\partial \underline{\beta}} = \sum_{i=1}^n \left[{y_i} \underline{x}_i- \frac{exp(\underline{\beta}^T \underline{x_i})}{1+exp(\underline{\beta}^T \underline{x}_i)}\underline{x}_i\right] }[/math] [math]\displaystyle{ =\sum_{i=1}^n \left[{y_i} \underline{x}_i - P(\underline{x}_i;\underline{\beta})\underline{x}_i\right] }[/math]

Newton-Raphson algorithm:
If we want to find [math]\displaystyle{ \ x^* }[/math] such that [math]\displaystyle{ \ f(x^*)=0 }[/math], we iterate

[math]\displaystyle{ \ x^{new} \leftarrow x^{old}-\frac {f(x^{old})}{f'(x^{old})} }[/math]

so that [math]\displaystyle{ \ x^{new} \rightarrow x^* }[/math].

If we want to maximize or minimize [math]\displaystyle{ \ f(x) }[/math], then we solve [math]\displaystyle{ \ f'(x)=0 }[/math] by iterating

[math]\displaystyle{ \ x^{new} \leftarrow x^{old}-\frac {f'(x^{old})}{f''(x^{old})} }[/math]


The Newton-Raphson algorithm requires the second-derivative or Hessian matrix.


[math]\displaystyle{ \frac{\partial^{2} l}{\partial \underline{\beta} \partial \underline{\beta}^T }= \sum_{i=1}^n - \underline{x}_i \frac{(exp(\underline{\beta}^T\underline{x}_i)\, \underline{x}_i^T)(1+exp(\underline{\beta}^T \underline{x}_i))-\underline{x}_i^T exp(\underline{\beta}^T\underline{x}_i)exp(\underline{\beta}^T\underline{x}_i)}{(1+exp(\underline{\beta}^T \underline{x}_i))^2} }[/math]

(note: [math]\displaystyle{ \frac{\partial\underline{\beta}^T\underline{x}_i}{\partial \underline{\beta}^T}=\underline{x}_i^T }[/math]; you can check it here, a very useful website including a Matrix Reference Manual with information about linear algebra and the properties of real and complex matrices.)


[math]\displaystyle{ =\sum_{i=1}^n - \underline{x}_i \frac{exp(\underline{\beta}^T\underline{x}_i)\, \underline{x}_i^T}{(1+exp(\underline{\beta}^T \underline{x}_i))(1+exp(\underline{\beta}^T \underline{x}_i))} }[/math] (by cancellation in the numerator)
[math]\displaystyle{ =\sum_{i=1}^n - \underline{x}_i \underline{x}_i^T P(\underline{x}_i;\underline{\beta})[1-P(\underline{x}_i;\underline{\beta})] }[/math] (since [math]\displaystyle{ P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x}_i)}{1+exp(\underline{\beta}^T \underline{x}_i)} }[/math] and [math]\displaystyle{ 1-P(\underline{x}_i;\underline{\beta})=\frac{1}{1+exp(\underline{\beta}^T \underline{x}_i)} }[/math])

The same second derivative is obtained if we first reduce the occurrences of [math]\displaystyle{ \underline{\beta} }[/math] using the identity [math]\displaystyle{ \frac{a}{1+a}=1-\frac{1}{1+a} }[/math], i.e. by differentiating

[math]\displaystyle{ \frac{\partial}{\partial \underline{\beta}^T}\sum_{i=1}^n \left[{y_i} \underline{x}_i-\left(1-\frac{1}{1+exp(\underline{\beta}^T \underline{x}_i)}\right)\underline{x}_i\right] }[/math]


Starting with [math]\displaystyle{ \,\underline{\beta}^{old} }[/math], the Newton-Raphson update is

[math]\displaystyle{ \,\underline{\beta}^{new}\leftarrow \,\underline{\beta}^{old}- (\frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T})^{-1}(\frac{\partial l}{\partial \underline{\beta}}) }[/math] where the derivatives are evaluated at [math]\displaystyle{ \,\underline{\beta}^{old} }[/math]

The iteration will terminate when [math]\displaystyle{ \underline{\beta}^{new} }[/math] is very close to [math]\displaystyle{ \underline{\beta}^{old} }[/math].

The iteration can be described in matrix form.

  • Let [math]\displaystyle{ \,\underline{Y} }[/math] be the column vector of [math]\displaystyle{ \,y_i }[/math]. ([math]\displaystyle{ n\times1 }[/math])
  • Let [math]\displaystyle{ \,X }[/math] be the [math]\displaystyle{ {d}\times{n} }[/math] input matrix.
  • Let [math]\displaystyle{ \,\underline{P} }[/math] be the [math]\displaystyle{ {n}\times{1} }[/math] vector with [math]\displaystyle{ i }[/math]th element [math]\displaystyle{ P(\underline{x}_i;\underline{\beta}^{old}) }[/math].
  • Let [math]\displaystyle{ \,W }[/math] be an [math]\displaystyle{ {n}\times{n} }[/math] diagonal matrix with [math]\displaystyle{ i }[/math]th element [math]\displaystyle{ P(\underline{x}_i;\underline{\beta}^{old})[1-P(\underline{x}_i;\underline{\beta}^{old})] }[/math]

then

[math]\displaystyle{ \frac{\partial l}{\partial \underline{\beta}} = X(\underline{Y}-\underline{P}) }[/math]

[math]\displaystyle{ \frac{\partial ^2 l}{\partial \underline{\beta}\partial \underline{\beta}^T} = -XWX^T }[/math]

The Newton-Raphson step is

[math]\displaystyle{ \underline{\beta}^{new} \leftarrow \underline{\beta}^{old}+(XWX^T)^{-1}X(\underline{Y}-\underline{P}) }[/math]

This equation is sufficient for computation of the logistic regression model. However, we can simplify further to uncover an interesting feature of this equation.

[math]\displaystyle{ \begin{align} \underline{\beta}^{new} &= (XWX^T)^{-1}(XWX^T)\underline{\beta}^{old}+(XWX^T)^{-1}XWW^{-1}(\underline{Y}-\underline{P})\\ &=(XWX^T)^{-1}XW[X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P})]\\ &=(XWX^T)^{-1}XWZ \end{align} }[/math]

where [math]\displaystyle{ Z=X^T\underline{\beta}^{old}+W^{-1}(\underline{Y}-\underline{P}) }[/math]

Here [math]\displaystyle{ \ Z }[/math] is known as the adjusted response, and the equation is solved repeatedly as [math]\displaystyle{ \ P }[/math], [math]\displaystyle{ \ W }[/math], and [math]\displaystyle{ \ Z }[/math] change from one iteration to the next. This algorithm is called iteratively reweighted least squares because it solves a weighted least squares problem repeatedly.

Recall that linear regression by least squares finds the following minimum: [math]\displaystyle{ \min_{\underline{\beta}}(\underline{y}-X^T\underline{\beta})^T(\underline{y}-X^T\underline{\beta}) }[/math]

we have [math]\displaystyle{ \underline\hat{\beta}=(XX^T)^{-1}X\underline{y} }[/math]

Similarly, we can say that [math]\displaystyle{ \underline{\beta}^{new} }[/math] is the solution of a weighted least square problem:

[math]\displaystyle{ \underline{\beta}^{new} \leftarrow arg \min_{\underline{\beta}}(Z-X^T\underline{\beta})^T W(Z-X^T\underline{\beta}) }[/math]

WLS

Actually, the weighted least squares estimator minimizes the weighted sum of squared errors [math]\displaystyle{ S(\beta) = \sum_{i=1}^{n}w_{i}[y_{i}-\mathbf{x}_{i}^{T}\beta]^{2} }[/math] where [math]\displaystyle{ \displaystyle w_{i}\gt 0 }[/math]. Hence the WLS estimator is given by [math]\displaystyle{ \hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}y_{i}\right] }[/math]

Each Newton-Raphson step is therefore a weighted linear regression on the iteratively recomputed (adjusted) response [math]\displaystyle{ \mathbf{z}=\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p}) }[/math].

Therefore, we obtain

[math]\displaystyle{ \begin{align} & \hat\beta^{WLS}=\left[\sum_{i=1}^{n}w_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{T} \right]^{-1}\left[ \sum_{i=1}^{n}w_{i}\mathbf{x}_{i}z_{i}\right] \\& = \left[ \mathbf{XWX}^{T}\right]^{-1}\left[ \mathbf{XWz}\right] \\& = \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{XW}(\mathbf{X}^{T}\beta^{old}+\mathbf{W}^{-1}(\mathbf{y}-\mathbf{p})) \\& = \beta^{old}+ \left[ \mathbf{XWX}^{T}\right]^{-1}\mathbf{X}(\mathbf{y}-\mathbf{p}) \end{align} }[/math]


note: Here we obtain [math]\displaystyle{ \underline{\beta} }[/math] as a [math]\displaystyle{ d\times{1} }[/math] vector, because we construct the model as [math]\displaystyle{ \underline{\beta}^T\underline{x} }[/math]. If we construct the model as [math]\displaystyle{ \underline{\beta}_0+ \underline{\beta}^T\underline{x} }[/math], then, similar to linear regression, the full parameter vector will be [math]\displaystyle{ (d+1)\times{1} }[/math].

Choosing [math]\displaystyle{ \displaystyle\beta=0 }[/math] is a suitable starting value for the Newton-Raphson iteration procedure in this case, although this does not guarantee convergence. The procedure will usually converge, since the log-likelihood function is concave, but overshooting can occur. In the rare cases where the log-likelihood decreases, cutting the step size in half restores convergence. More generally, only local convergence of the method can be proved, meaning the iteration converges only if the initial point is close enough to the exact solution; in practice, however, it is rare for an initial value to be so far from the solution that the iteration fails, and step-size halving handles those cases as well. <ref>C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, chapter 5 </ref> <ref>T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), 121.</ref>

For multi-class cases, the Newton algorithm can also be expressed as an iteratively reweighted least squares algorithm, but with a vector of [math]\displaystyle{ \ k-1 }[/math] responses and a non-diagonal weight matrix per observation. A coordinate-descent method can also be used to maximize the log-likelihood efficiently.

Pseudo Code

  1. [math]\displaystyle{ \underline{\beta} \leftarrow 0 }[/math]
  2. Set [math]\displaystyle{ \,\underline{Y} }[/math], the label associated with each observation [math]\displaystyle{ \,i=1...n }[/math].
  3. Compute [math]\displaystyle{ \,\underline{P} }[/math] according to the equation [math]\displaystyle{ P(\underline{x}_i;\underline{\beta})=\frac{exp(\underline{\beta}^T \underline{x}_i)}{1+exp(\underline{\beta}^T \underline{x}_i)} }[/math] for all [math]\displaystyle{ \,i=1...n }[/math].
  4. Compute the diagonal matrix [math]\displaystyle{ \,W }[/math] by setting [math]\displaystyle{ \,W_{i,i} }[/math] to [math]\displaystyle{ P(\underline{x}_i;\underline{\beta})[1-P(\underline{x}_i;\underline{\beta})] }[/math] for all [math]\displaystyle{ \,i=1...n }[/math].
  5. [math]\displaystyle{ Z \leftarrow X^T\underline{\beta}+W^{-1}(\underline{Y}-\underline{P}) }[/math].
  6. [math]\displaystyle{ \underline{\beta} \leftarrow (XWX^T)^{-1}XWZ }[/math].
  7. If the new [math]\displaystyle{ \underline{\beta} }[/math] value is sufficiently close to the old value, stop; otherwise go back to step 3.
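The pseudo code above can be turned into a short Matlab script. The following is a minimal sketch rather than the course implementation: it assumes x is a [math]\displaystyle{ d\times{n} }[/math] data matrix whose first row is all ones (so the intercept is absorbed into [math]\displaystyle{ \underline{\beta} }[/math]), y is an [math]\displaystyle{ n\times{1} }[/math] vector of 0/1 labels, and it caps the number of iterations.

 >> % Sketch of the pseudo code: x is d x n (first row all ones), y is n x 1 with 0/1 labels
 >> [d, n] = size(x);
 >> beta = zeros(d,1);                          % step 1
 >> for iter = 1:100
      p = 1 ./ (1 + exp(-(x' * beta)));         % step 3: n x 1 vector of P(x_i; beta)
      W = diag(p .* (1 - p));                   % step 4: n x n diagonal weight matrix
      z = x' * beta + W \ (y - p);              % step 5: adjusted response
      beta_new = (x * W * x') \ (x * W * z);    % step 6: weighted least squares solve
      if norm(beta_new - beta) < 1e-8           % step 7: stop when beta stabilizes
          beta = beta_new;  break;
      end
      beta = beta_new;
    end

For example, on the 2_3 data one could set x = [ones(1,400); sample'] and y to the 0/1 labels from the earlier linear regression example; the resulting [math]\displaystyle{ \underline{\beta} }[/math] should define a boundary comparable to the one obtained with mnrfit below, up to sign and labelling conventions.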

Comparison with Linear Regression

  • Similarities
  1. Both attempt to estimate [math]\displaystyle{ \,P(Y=k|X=x) }[/math] (for logistic regression, we have so far only discussed the case where [math]\displaystyle{ \,k=0 }[/math] or [math]\displaystyle{ \,k=1 }[/math]).
  2. Both have linear boundaries.
note:For linear regression, we assume the model is linear. The boundary is [math]\displaystyle{ P(Y=k|X=x)=\underline{\beta}^T\underline{x}_i+\underline{\beta}_0=0.5 }[/math] (linear)
For logistic regression, the boundary is [math]\displaystyle{ P(Y=k|X=x)=\frac{exp(\underline{\beta}^T \underline{x})}{1+exp(\underline{\beta}^T \underline{x})}=0.5 \Rightarrow exp(\underline{\beta}^T \underline{x})=1\Rightarrow \underline{\beta}^T \underline{x}=0 }[/math] (linear)
  • Differences
  1. Linear regression: [math]\displaystyle{ \,P(Y=k|X=x) }[/math] is a linear function of [math]\displaystyle{ \,x }[/math]; it is not guaranteed to fall between 0 and 1 or to sum to 1.
  2. Logistic regression: [math]\displaystyle{ \,P(Y=k|X=x) }[/math] is a nonlinear function of [math]\displaystyle{ \,x }[/math], and it is guaranteed to range from 0 to 1 and to sum up to 1.

Comparison with LDA

  1. The linear logistic model only consider the conditional distribution [math]\displaystyle{ \,P(Y=k|X=x) }[/math]. No assumption is made about [math]\displaystyle{ \,P(X=x) }[/math].
  2. The LDA model specifies the joint distribution of [math]\displaystyle{ \,X }[/math] and [math]\displaystyle{ \,Y }[/math].
  3. Logistic regression maximizes the conditional likelihood of [math]\displaystyle{ \,Y }[/math] given [math]\displaystyle{ \,X }[/math]: [math]\displaystyle{ \,P(Y=k|X=x) }[/math]
  4. LDA maximizes the joint likelihood of [math]\displaystyle{ \,Y }[/math] and [math]\displaystyle{ \,X }[/math]: [math]\displaystyle{ \,P(Y=k,X=x) }[/math].
  5. If [math]\displaystyle{ \,\underline{x} }[/math] is d-dimensional, the number of adjustable parameters in logistic regression is [math]\displaystyle{ \,d }[/math]. The number of parameters grows linearly w.r.t. the dimension.
  6. If [math]\displaystyle{ \,\underline{x} }[/math] is d-dimensional, the number of adjustable parameters in LDA is [math]\displaystyle{ \,(2d)+d(d+1)/2+2=(d^2+5d+4)/2 }[/math]. The number of parameters grows quadratically w.r.t. the dimension.
  7. LDA estimates parameters more efficiently by using more information about the data, and samples without class labels can also be used under the LDA model.
  8. As logistic regression relies on fewer assumptions, it seems to be more robust.
  9. In practice, logistic regression and LDA often give similar results.

By example

Now we compare LDA and Logistic regression by an example. Again, we use them on the 2_3 data.

 >>load 2_3;
 >>[U, sample] = princomp(X');
 >>sample = sample(:,1:2);
 >>plot (sample(1:200,1), sample(1:200,2), '.');
 >>hold on;
 >>plot (sample(201:400,1), sample(201:400,2), 'r.'); 
First, we do PCA on the data and plot the data points that represent 2 or 3 in different colors. See the previous example for more details.
 >>group = ones(400,1);
 >>group(201:400) = 2;
Group the data points.
 >>[B,dev,stats] = mnrfit(sample,group);
 >>x=[ones(1,400); sample'];
Now we use mnrfit to perform logistic regression to classify the data. This function returns B, a [math]\displaystyle{ (d+1)\times{(k-1)} }[/math] matrix of estimates, where each column corresponds to the estimated intercept term and predictor coefficients. In this case, B is a [math]\displaystyle{ 3\times{1} }[/math] matrix.
 >> B
 B =0.1861
   -5.5917
   -3.0547
This is our [math]\displaystyle{ \underline{\beta} }[/math]. So the posterior probabilities are:
[math]\displaystyle{ P(Y=1 | X=x)=\frac{exp(0.1861-5.5917X_1-3.0547X_2)}{1+exp(0.1861-5.5917X_1-3.0547X_2)} }[/math].
[math]\displaystyle{ P(Y=2 | X=x)=\frac{1}{1+exp(0.1861-5.5917X_1-3.0547X_2)} }[/math]
The classification rule is:
[math]\displaystyle{ \hat Y = 1 }[/math], if [math]\displaystyle{ \,0.1861-5.5917X_1-3.0547X_2 \geq 0 }[/math]
[math]\displaystyle{ \hat Y = 2 }[/math], if [math]\displaystyle{ \,0.1861-5.5917X_1-3.0547X_2 < 0 }[/math]
 >>f = sprintf('0 = %g+%g*x+%g*y', B(1), B(2), B(3));
 >>ezplot(f,[min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))])
Plot the decision boundary by logistic regression.
This is the decision boundary found by logistic regression. The line shows how the two classes are split.
 >>[class, error, POSTERIOR, logp, coeff] = classify(sample, sample, group, 'linear');
 >>k = coeff(1,2).const;
 >>l = coeff(1,2).linear;
 >>f = sprintf('0 = %g+%g*x+%g*y', k, l(1), l(2));
 >>h=ezplot(f, [min(sample(:,1)), max(sample(:,1)), min(sample(:,2)), max(sample(:,2))]);
Plot the decision boundary by LDA. See the previous example for more information about LDA in matlab.
From this figure, we can see that the results of Logistic Regression and LDA are very similar.

2009.10.21

Multi-Class Logistic Regression

Our earlier goal with logistic regression was to model the posteriors for a 2 class classification problem with a linear function bounded by the interval [0,1]. In that case our model was,

[math]\displaystyle{ \log\left(\frac{P(Y=1|X=x)}{P(Y=0|X=x)}\right)= \log\left(\frac{\frac{\exp(\beta^T x)}{1+\exp(\beta^T x)}}{\frac{1}{1+\exp(\beta^T x)}}\right) =\beta^Tx }[/math]

We can extend this idea to the more general case with K-classes. This model is specified with K - 1 terms where the Kth class in the denominator can be chosen arbitrarily.

[math]\displaystyle{ \log\left(\frac{P(Y=i|X=x)}{P(Y=K|X=x)}\right)=\beta_i^Tx,\quad i \in \{1,\dots,K-1\} }[/math]

The posteriors for each class are given by,


[math]\displaystyle{ P(Y=i|X=x) = \frac{\exp(\beta_i^T x)}{1+\sum_{k=1}^{K-1}\exp(\beta_k^T x)}, \quad i \in \{1,\dots,K-1\} }[/math]

[math]\displaystyle{ P(Y=K|X=x) = \frac{1}{1+\sum_{k=1}^{K-1}\exp(\beta_k^T x)} }[/math]

Viewing the fitting of these equations as a weighted least squares problem makes them easier to derive.

Note that we still retain the property that the sum of the posteriors is 1. In general, the posteriors are no longer complements of each other as is true in the 2-class problem, where we could express [math]\displaystyle{ \displaystyle P(Y=1|X=x)=1-P(Y=0|X=x) }[/math]. Fitting a logistic model for the K>2 class problem isn't as 'nice' as in the 2-class problem since we don't have the same simplification.
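As a small illustration (not part of the original notes), the posteriors above can be computed as follows; B is assumed to be a d x (K-1) matrix whose columns are [math]\displaystyle{ \beta_1,\dots,\beta_{K-1} }[/math] and x a d x 1 input (with a leading 1 if an intercept is used).
 function p = multiclass_logistic_posterior(B, x)
     % Posteriors P(Y=i|X=x) for i = 1,...,K, with class K as the reference class.
     e = exp(B' * x);        % exp(beta_i^T x), i = 1,...,K-1
     denom = 1 + sum(e);     % common denominator
     p = [e; 1] / denom;     % entries sum to 1; the last entry is P(Y=K|X=x)
 end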

Multi-class kernel logistic regression

Logistic regression (LR) and kernel logistic regression (KLR) have already proven their value in the statistical and machine learning community. Opposed to an empirically risk minimization approach such as employed by Support Vector Machines (SVMs), LR and KLR yield probabilistic outcomes based on a maximum likelihood argument. It seems that this framework provides a natural extension to multiclass classification tasks, which must be contrasted to the commonly used coding approach.

A paper uses the LS-SVM framework to solve the KLR problem. In that paper, the authors show that minimizing the negative penalized log-likelihood criterion is equivalent to solving, in each iteration, a weighted version of least squares support vector machines (wLS-SVMs). In the derivation it turns out that the global regularization term is reflected as usual in each step. A similar iterative weighting of wLS-SVMs, with different weighting factors, is reported to converge to an SVM solution.

Unlike SVMs, KLR by its nature is not sparse and needs all training samples in its final model. Different adaptations of the original algorithm have been proposed to obtain sparseness: one uses a sequential minimization optimization (SMO) approach, and in another the binary KLR problem is reformulated into a geometric programming system which can be efficiently solved by an interior-point algorithm. In the LS-SVM framework, fixed-size LS-SVM has shown its value on large data sets. It approximates the feature map using a spectral decomposition, which leads to a sparse representation of the model when estimating in the primal space. The authors use this technique as a practical implementation of KLR with estimation in the primal space. To reduce the size of the Hessian, an alternating descent version of Newton's method is used, which has the extra advantage that it can easily be used in a distributed computing environment. The proposed algorithm is compared to existing algorithms using small-size to large-scale benchmark data sets.

Paper's Link: [[17]]

Perceptron (Foundation of Neural Network)

Separating Hyperplane Classifiers

A separating hyperplane classifier tries to separate the data using linear decision boundaries. When the classes overlap, it can be generalized to the support vector machine, which constructs nonlinear boundaries by constructing a linear boundary in an enlarged and transformed feature space.

Perceptron

Figure 1: Diagram of a linear perceptron.

Recall the use of Least Squares regression as a classifier, shown earlier to be identical to LDA. To classify a point with least squares we take the sign of a linear combination of its features and assign a label of +1 or -1.

Least Squares returns the sign of a linear combination of the features as the class label

[math]\displaystyle{ sign(\underline{\beta}^T \underline{x} + {\beta}_0) = sign(\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}) }[/math]


In the 1950s Frank Rosenblatt developed an iterative linear classifier while at Cornell University known as the Perceptron. The concept of a perceptron was fundamental to the later development of the Artificial Neural Network models. The perceptron is a simple type of neural network which models the electrical signals of biological neurons. In fact, it was the first neural network to be algorithmically described. <ref>Simon S. Haykin, Neural Networks and Learning Machines, (Prentice Hall 2008). </ref>

As in other linear classification methods like Least Squares, Rosenblatt's classifier determines a hyperplane for the decision boundary. Linear methods all determine slightly different decision boundaries; Rosenblatt's algorithm seeks to minimize the distance between the decision boundary and the misclassified points <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), 156.</ref>.

Due to the iterative nature of the solution, the problem is not convex and has no unique global solution. The algorithm does not converge to a unique hyperplane, and the solutions depend on the size of the gap between classes. If the classes are separable, the algorithm is shown to converge to a separating hyperplane; the proof of this convergence is known as the perceptron convergence theorem. However, for overlapping classes convergence cannot be guaranteed.


If a separating hyperplane between the two classes exists, it is generally not unique, and the perceptron algorithm may return any one of infinitely many solutions.


As seen in Figure 1, after training, the perceptron determines the label of the data by computing the sign of a linear combination of components.

A Perceptron Example

The perceptron network can figure out the decision boundary even if we don't know how to draw the line; we just have to give it some examples first. For example:

 Features: x1, x2, x3    Answer
 1, 0, 0                 +1
 1, 0, 1                 +1
 1, 1, 0                 +1
 0, 0, 1                 -1
 0, 1, 1                 -1
 1, 1, 1                 -1

The perceptron starts out not knowing how to separate the answers, so it guesses. For example, we input 1,0,0 and it guesses -1, but the right answer is +1. So the perceptron adjusts its line and we try the next example. Eventually the perceptron will get all the answers right.

% Perceptron training for the example above: y holds the labels, the columns of x are the feature vectors.
y=[1;1;1;-1;-1;-1];
x=[1,0,0;1,0,1;1,1,0;0,0,1;0,1,1;1,1,1]';
b_0=0;          % intercept, initial guess
b=[1;1;1];      % weight vector, initial guess
rho=.5;         % learning rate
for j=1:100     % at most 100 passes over the training set
    changed=0;
    for i=1:6
        d=(b'*x(:,i)+b_0)*y(i);   % negative if point i is misclassified
        if d<0
            b=b+rho*x(:,i)*y(i);  % move the boundary toward the misclassified point
            b_0=b_0+rho*y(i);
            changed=1;
        end
    end
    if changed==0   % stop once a full pass makes no updates (all points correct)
        break;
    end
end

The Perceptron (Lecture October 23, 2009)

File:misclass.png
Figure 2: This figure shows a misclassified point and the movement of the decision boundary.

A Perceptron can be modeled as shown in Figure 1 of the previous lecture, where [math]\displaystyle{ \,x_0 }[/math] is the model intercept and [math]\displaystyle{ x_{1},\ldots,x_{d} }[/math] represent the feature data, [math]\displaystyle{ \sum_{j=0}^d \beta_{j}x_{j} }[/math] is a linear combination of weighted inputs, and [math]\displaystyle{ I(\sum_{j=0}^d \beta_{j}x_{j}) }[/math], where [math]\displaystyle{ \,I }[/math] takes the sign of the expression, returns the label of the data point.


The Perceptron algorithm seeks a linear boundary between two classes. A linear decision boundary can be represented by [math]\displaystyle{ \underline{\beta}^T\underline{x}+\beta_{0}. }[/math] The algorithm begins with an arbitrary hyperplane [math]\displaystyle{ \underline{\beta}^T\underline{x}+\beta_{0} }[/math] (initial guess). Its goal is to minimize the distance between the decision boundary and the misclassified data points. This is illustrated in Figure 2. It attempts to find the optimal [math]\displaystyle{ \underline\beta }[/math] by iteratively adjusting the decision boundary until all points are on the correct side of the boundary. It terminates when there are no misclassified points.

File:distance2.jpg
Figure 3: This figure illustrates the derivation of the distance between the decision boundary and misclassified points

Derivation: The distance between the decision boundary and misclassified points.

If [math]\displaystyle{ \underline{x_{1}} }[/math] and [math]\displaystyle{ \underline{x_{2}} }[/math] both lie on the decision boundary, then

[math]\displaystyle{ \begin{align} \underline{\beta}^T\underline{x_{1}}+\beta_{0} &= \underline{\beta}^T\underline{x_{2}}+\beta_{0} \\ \underline{\beta}^T (x_{1}-x_{2})&=0 \end{align} }[/math]

[math]\displaystyle{ \underline{\beta}^T (x_{1}-x_{2}) }[/math] denotes an inner product. Since the inner product is 0 and [math]\displaystyle{ (\underline{x_{1}}-\underline{x_{2}}) }[/math] is a vector lying on the decision boundary, [math]\displaystyle{ \underline{\beta} }[/math] is orthogonal to the decision boundary.

Let [math]\displaystyle{ \underline{x_{i}} }[/math] be a misclassified point.

Then the projection of the vector [math]\displaystyle{ \underline{x_{i}} }[/math] onto the direction orthogonal to the decision boundary is [math]\displaystyle{ \underline{\beta}^T\underline{x_{i}} }[/math] (up to the constant factor [math]\displaystyle{ \|\underline{\beta}\| }[/math]). Now, if [math]\displaystyle{ \underline{x_{0}} }[/math] is also on the decision boundary, then [math]\displaystyle{ \underline{\beta}^T\underline{x_{0}}+\beta_{0}=0 }[/math] and so [math]\displaystyle{ \underline{\beta}^T\underline{x_{0}}= -\beta_{0} }[/math]. Looking at Figure 3, it can be seen that the distance between [math]\displaystyle{ \underline{x_{i}} }[/math] and the decision boundary is proportional to the absolute value of [math]\displaystyle{ \underline{\beta}^T\underline{x_{i}}+\beta_{0}. }[/math]

Consider [math]\displaystyle{ y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}). }[/math]

Notice that if [math]\displaystyle{ \underline{x_{i}} }[/math] is classified correctly then this product is positive. This is because if it is classified correctly, then either both ([math]\displaystyle{ \underline{\beta}^T\underline{x_{i}}+\beta_{0}) }[/math] and[math]\displaystyle{ \displaystyle y_{i} }[/math] are positive or they are both negative. However, if [math]\displaystyle{ \underline{x_{i}} }[/math] is classified incorrectly then one of [math]\displaystyle{ (\underline{\beta}^T\underline{x_{i}}+\beta_{0}) }[/math] and [math]\displaystyle{ \displaystyle y_{i} }[/math] is positive and the other is negative. The result is that the above product is negative for a point that is misclassified.


For the algorithm, we need only consider the distance between the misclassified points and the decision boundary.

Consider [math]\displaystyle{ \phi(\underline{\beta},\beta_{0})= -\displaystyle\sum_{i\in M} y_{i}(\underline{\beta}^T\underline{x_{i}}+\beta_{0}) }[/math]

which is a summation of positive numbers and where [math]\displaystyle{ \displaystyle M }[/math] is the set of all misclassified points.
The goal now becomes to [math]\displaystyle{ \min_{\underline{\beta},\beta_{0}} \phi(\underline{\beta},\beta_{0}). }[/math]

This can be done using a gradient descent approach, a numerical method that takes a step of predetermined size in the direction of the negative gradient, getting closer to a minimum at each step, until the gradient is zero. A problem with this algorithm is the possibility of getting stuck in a local minimum. To continue, the following derivatives are needed:

[math]\displaystyle{ \frac{\partial \phi}{\partial \underline{\beta}}= -\displaystyle\sum_{i \in M}y_{i}\underline{x_{i}} \ \ \ \ \ \ \ \ \ \ \ \frac{\partial \phi}{\partial \beta_{0}}= -\displaystyle\sum_{i \in M}y_{i} }[/math]


Then the gradient descent type algorithm (Perceptron Algorithm) is

[math]\displaystyle{ \begin{pmatrix} \underline{\beta}^{\mathrm{new}}\\ \underline{\beta_0}^{\mathrm{new}} \end{pmatrix} = \begin{pmatrix} \underline{\beta}^{\mathrm{old}}\\ \underline{\beta_0}^{\mathrm{old}} \end{pmatrix} +\rho \begin{pmatrix} y_i \underline{x_i}\\ y_i \end{pmatrix} }[/math]

where [math]\displaystyle{ \displaystyle\rho }[/math] is the magnitude of each step called the "learning rate" or the "convergence rate". The algorithm continues until [math]\displaystyle{ \begin{pmatrix} \underline{\beta}^{\mathrm{new}}\\ \underline{\beta_0}^{\mathrm{new}} \end{pmatrix} = \begin{pmatrix} \underline{\beta}^{\mathrm{old}}\\ \underline{\beta_0}^{\mathrm{old}} \end{pmatrix} }[/math] or until it has iterated a specified number of times. If the algorithm converges, it has found a linear classifier, ie., there are no misclassified points.

Problems with the Algorithm and Issues Affecting Convergence

  1. The output values of a perceptron can take on only one of two values (+1 or -1); that is, it only can be used for two-class classification.
  2. If the data is not separable, then the Perceptron algorithm will not converge since it cannot find a linear classifier that classifies all of the points correctly.
  3. Convergence rates depend on the size of the gap between classes. If the gap is large, the algorithm converges quickly; if the gap is small, it converges slowly. This problem can be alleviated by using the basis expansion technique: we try to find a hyperplane not in the original space, but in an enlarged space obtained by applying some basis functions to the inputs.
  4. If the classes are separable, there exist infinitely many solutions to the perceptron problem, all of which are hyperplanes.
  5. The speed of convergence of the algorithm also depends on the value of [math]\displaystyle{ \displaystyle\rho }[/math], the learning rate. A larger value of [math]\displaystyle{ \displaystyle\rho }[/math] can yield quicker convergence, but if it is too large, it may also result in "skipping over" the minimum that the algorithm is trying to find, possibly oscillating forever between the points just before and after the minimum.
  6. A perfect separation is not always achievable, or even desirable. If observations from different classes share the same input values, a model that separates the training data perfectly is overfitting and will generally have poor predictive performance.
  7. The perceptron convergence theorem states that if there exists an exact solution (in other words, if the training data set is linearly separable), then the perceptron learning algorithm is guaranteed to find an exact solution in a finite number of steps. Proofs of this theorem can be found, for example, in Rosenblatt (1962), Block (1962), Nilsson (1965), Minsky and Papert (1969), Hertz et al. (1991), and Bishop (1995a). Note, however, that the number of steps required to achieve convergence could still be substantial, and in practice, until convergence is achieved we will not be able to distinguish between a nonseparable problem and one that is simply slow to converge<ref>Pattern Recognition and Machine Learning, Christopher M. Bishop, p. 194</ref>.

Comment on gradient descent algorithm

Consider yourself on the peak of a mountain, wanting to get down to the ground as fast as possible. Which direction should you step? Intuitively it should be the direction in which the height decreases fastest, which is given by the negative gradient. However, if the mountain has a saddle shape and you start in the middle, you may finally arrive at a point that is only a local minimum and get stuck there.

In addition, note that in the final form of our gradient descent algorithm we drop the summation over [math]\displaystyle{ \,i }[/math] (all data points). This is actually an alternative to the original gradient descent algorithm (sometimes called batch gradient descent) known as stochastic gradient descent, where we approximate the true gradient by evaluating it on only a single training example. This means that [math]\displaystyle{ \,{\beta} }[/math] is improved using the computation for only one sample. When the data set is large, say a population database, it is very time-consuming to sum over millions of samples. With stochastic gradient descent, we can treat the problem sample by sample and still get decent results in practice.
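As a small sketch (reusing the variables x, y, b, b_0 and rho from the perceptron example above), one batch gradient step uses all misclassified points at once, whereas one stochastic step uses a single misclassified point:
 M = find(y' .* (b'*x + b_0) < 0);      % indices of currently misclassified points
 if ~isempty(M)
     % Batch gradient descent: one step using the sum over all of M.
     b_batch  = b   + rho * x(:, M) * y(M);
     b0_batch = b_0 + rho * sum(y(M));
     % Stochastic gradient descent: one step using a single misclassified point.
     i = M(1);
     b_sgd  = b   + rho * x(:, i) * y(i);
     b0_sgd = b_0 + rho * y(i);
 end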


Neural Networks (NN) - October 28, 2009

Introduction

A neural network is a two-stage regression or classification model. It can be represented by a network diagram. It is a parallel, distributed information processing structure consisting of processing elements interconnected with signal channels called connections. Each processing element has a single output connection with branches that "fan out" onto as many connections as desired, each carrying the same signal - the processing element's output signal.

<ref> Haykin, Simon (2009). Neural Networks and Learning Machines. Pearson Education, Inc. </ref> A neural network resembles the brain in two respects:

1. Knowledge is acquired by the network from its environment through a learning process.

2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

<ref> Theory of the Backpropagation Neural Network, R. Hecht-Nielsen </ref> It is a multistage regression or classification model represented by a network. Figure 1 is an example of a typical neural network, but it can take many different forms. The network applies to both regression and classification.

Figure 1: General Structure of a Neural Network.
  • A regression problem typically has only one unit [math]\displaystyle{ \ y_1 }[/math] in the output layer, but these networks can handle multiple quantitative responses in a seamless fashion.
  • In a k-class classification problem, there are usually k units [math]\displaystyle{ \ y_1,...,y_k }[/math] in the output layer, where the kth unit models the probability of class k and each [math]\displaystyle{ \displaystyle y_k }[/math] is coded as 0 or 1.

Activation Function

Activation Function is a term that is frequently used in classification by NN.

In perceptron, we have a "sign" function that takes the sign of a weighted sum of input features.

File:signfuncperceptron.png
The sign function is of the form File:signfunc1.png ; it is not continuous at 0 and we cannot take its derivative. Thus, we replace it by a smooth function [math]\displaystyle{ \displaystyle \sigma }[/math] of the form File:signfunc2.png and call it the activation function.
The choice of this function [math]\displaystyle{ \displaystyle \sigma }[/math] is determined by the properties of the data and the assumed distribution of the target variables, but for multiple binary classification problems the logistic function, also known as the inverse-logit (sigmoid function), is often used: [math]\displaystyle{ \sigma(a)=\frac {1}{1+e^{-a}} }[/math]

Figure: Graph of [math]\displaystyle{ \sigma(a)=\frac {1}{1+e^{-a}} }[/math]
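For concreteness (a small sketch, not from the lecture), the logistic activation and its derivative can be written as:
 sigma  = @(a) 1 ./ (1 + exp(-a));          % sigma(a) lies in (0,1) and saturates at both ends
 dsigma = @(a) sigma(a) .* (1 - sigma(a));  % sigma'(a) = sigma(a)(1 - sigma(a))
 sigma(0)    % 0.5, the midpoint of the output range
 dsigma(0)   % 0.25, the largest possible slope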

There are some important properties for the activation function.

  1. The activation function is nonlinear. It can be shown that if the activation function of the hidden units is linear, a three-layer neural network is equivalent to a two-layer one.
  2. The activation function saturates, which means it has maximum and minimum output values. This property helps keep the weights bounded and therefore limits the searching time.
  3. The activation function is continuous and smooth.
  4. The activation function is monotonic. This property is not strictly necessary; RBF networks, whose activation functions are not monotonic, are also powerful models.

Note: A key difference between a perceptron and a neural network is that a neural network uses continuous nonlinearities in the units, for the purpose of differentiation, whereas the perceptron often uses a non-differentiable activation function. The neural network function is differentiable with respect to the network parameters so that a gradient descent method can be used in training. Moreover, a perceptron is a linear classifier, whereas a neural network, by introducing the nonlinear transformation [math]\displaystyle{ \ \sigma }[/math], greatly enlarges the class of models it can represent; by combining layers of such units, a neural network is able to classify non-linear problems through proper training.

By assigning weights to the connectors in the neural network (see the diagram above) we weigh the input that comes into each unit, to get an output that in turn acts as an input to the next layer of units, and so on for each layer. (There are no cross-connections between units in the same layer and no backward connections from layers downstream. Typically, units in layer k provide input only to units in layer k+1.) This type of neural network is called a Feed-Forward Neural Network. Applications of Feed-Forward Neural Networks include data reduction, speech recognition, sensor signal processing, and ECG abnormality detection, to name a few. <ref>J. Annema, Feed-Forward Neural Networks, (Springer 1995), pp. 9 </ref>
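A minimal sketch of one forward pass through such a feed-forward network with a single hidden layer is given below; the input vector, layer sizes and weights are illustrative assumptions.
 sigma = @(a) 1 ./ (1 + exp(-a));       % sigmoid activation for the hidden units
 x  = [0.2; -1.0; 0.5];                 % a d x 1 input vector (d = 3 here)
 U  = 0.1*randn(4, 3); u0 = zeros(4,1); % hidden-layer weights and biases (4 hidden units)
 w  = 0.1*randn(1, 4); w0 = 0;          % output-layer weights and bias
 a    = U*x + u0;                       % weighted sums feeding the hidden units
 z    = sigma(a);                       % hidden-unit outputs
 yhat = w*z + w0;                       % network output (one linear output unit)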

Back-propagation

Introduction:

For a while, the Neural Network model was just an idea, since there were no algorithms for training the model until 1986, when Geoffrey Hinton <ref> http://www.cs.toronto.edu/~hinton/backprop.html </ref> devised an algorithm called back-propagation [18]. After that, a number of other training algorithms and various configurations of neural networks were implemented. Working procedure: each neuron receives a signal from the neurons of the previous layer, and each of these signals is multiplied by a different weight value. The weighted inputs are summed and passed through the activation function, which scales the output to a fixed range of values. The output is then broadcast to all of the neurons in the next layer; i.e., we apply the input values to the inputs of the first layer, allow the signals to propagate through the network, and read the output values.


When we were talking about perceptrons, we applied a gradient descent algorithm for optimizing weights. Back-propagation uses this idea of gradient descent to train a neural network based on the chain rule in calculus.

Assume that the output layer has only one unit, so we are working with a regression problem. Later we will see how this can be extended to multiple output units and thus turn into a classification problem.

For simplicity, there is only 1 unit at the end and assume for the moment we are doing regression.

File:backpropagation.png

Note that we make a distinction between the input weights [math]\displaystyle{ \displaystyle (w_i) }[/math] and hidden weights [math]\displaystyle{ \displaystyle (u_i) }[/math].

Within each unit we have a function [math]\displaystyle{ \displaystyle z_i=\sigma(a_i) }[/math] that takes input [math]\displaystyle{ \displaystyle a_i }[/math] (a linear sum of the previous level) and outputs [math]\displaystyle{ \displaystyle z_i }[/math]. The [math]\displaystyle{ \displaystyle z_i's }[/math] are the inputs into the final output of the model [math]\displaystyle{ \Rightarrow \hat y=\sum_{i=1}^p w_i z_i }[/math]

We can find the error of the neural network output by evaluating the squared difference between the true classification and the resulting classification output [math]\displaystyle{ \Rightarrow \displaystyle error=||y-\hat y ||^2 }[/math]


First find derivative of the model error with respect to output weights [math]\displaystyle{ \displaystyle w_i }[/math]
[math]\displaystyle{ \frac{\partial err}{\partial w_i}=\frac{\partial err}{\partial \hat y} \cdot \frac{\partial \hat y}{\partial w_i} }[/math]
[math]\displaystyle{ \frac{\partial err}{\partial w_i}=-2(y-\hat y) \cdot z_i }[/math]


Now we need to find the derivative of the model error with respect to hidden weights [math]\displaystyle{ \displaystyle u_i's }[/math]
Consider the following diagram that opens up the hidden layers of the neural network:

File:propagationhidden.png

(Note: in this figure the roles of the indices i and j are reversed relative to the text below.)

Notice that the weighted sum on the output of the perceptrons at layer [math]\displaystyle{ \displaystyle l }[/math] are the inputs into the perceptrons at layer [math]\displaystyle{ \displaystyle j }[/math] and so on for all hidden layers.

So, using the chain rule
[math]\displaystyle{ \frac{\partial err}{\partial u_{jl}}=\frac{\partial err}{\partial a_j} \cdot \frac{\partial a_j}{\partial u_{jl}} }[/math]
[math]\displaystyle{ \frac{\partial err}{\partial u_{jl}}=\delta_j \cdot z_l }[/math]

Note that a change in [math]\displaystyle{ \,a_j }[/math] causes changes in all [math]\displaystyle{ \,a_i }[/math] in the next layer on which the error is based, so we need to sum over i in the chain: [math]\displaystyle{ \delta_j = \frac{\partial err}{\partial a_j} = \sum_i \frac{\partial err}{\partial a_i} \cdot \frac{\partial a_i}{\partial a_j} =\sum_i \delta_i \cdot \frac{\partial a_i}{\partial a_j} }[/math]
[math]\displaystyle{ \,\frac{\partial a_i}{\partial a_j}=\frac{\partial a_i}{\partial z_j} \cdot \frac{\partial z_j}{\partial a_j}=u_{ij} \cdot \sigma'(a_j) }[/math] Using the activation function [math]\displaystyle{ \,\sigma(\cdot) }[/math]

So [math]\displaystyle{ \delta_j = \sum_i \delta_i \cdot u_{ij} \cdot \sigma'(a_j) }[/math]
[math]\displaystyle{ \delta_j = \sigma'(a_j)\sum_i \delta_i \cdot u_{ij} }[/math]

We can propagate the error calculated in the output back through the previous layers and adjust weights to minimize error.


A back-propagation neural network is a good method for the following situations:

  • The problem is very complex, the number of input or output data points is very large, and there is no known way to relate the inputs to the outputs.
  • The solution varies over time within the bounds of the given input and output data, or the output is not easy to measure.

Neural Networks (NN) - October 30, 2009


Back-propagation

The idea is that we first feed an input (we can normalize the data before feeding) from the training set to the Neural Network, then find the error at the output, and then propagate the error to the previous layers; for each edge of weight [math]\displaystyle{ \,u_{ij} }[/math] we find [math]\displaystyle{ \frac{\partial \mathrm{err}}{\partial u_{ij}} }[/math]. Having these derivatives at hand, we adjust the weight of each edge by taking a step proportional to the negative of the gradient to decrease the error at the output. The next step is to apply the next input from the training set and go through the described adjustment procedure again. The overview of the back-propagation algorithm (a code sketch of these steps follows the list below):

  1. Feed a point [math]\displaystyle{ \,x }[/math] in the training set to the network, and find the output of all the nodes.
  2. Evaluate [math]\displaystyle{ \,\delta_k=y_k-\hat{y_k} }[/math] for all output units, where [math]\displaystyle{ y_k }[/math] is the desired (target) output and [math]\displaystyle{ \hat{y_k} }[/math] is the actual output of the network.
  3. By propagating to the previous layers evaluate all [math]\displaystyle{ \,\delta_j }[/math]s for hidden units: [math]\displaystyle{ \,\delta_j=\sigma'(a_j)\sum_i \delta_i u_{ij} }[/math] where [math]\displaystyle{ i }[/math] is associated to the previous layer.
  4. Using [math]\displaystyle{ \frac{\partial \mathrm{err}}{\partial u_{jl}} = \delta_j\cdot z_l }[/math] find all the derivatives.
  5. Adjust each weight by taking a step proportional to the negative of the gradient: [math]\displaystyle{ u_{jl}^{\mathrm{new}} \leftarrow u_{jl}^{\mathrm{old}} -\rho \frac{\partial \mathrm{err}}{\partial u_{jl}} }[/math]
  6. Feed the next point in the training set and repeat the above steps.
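A minimal sketch of these steps for a network with one hidden layer, sigmoid hidden units and a single linear output unit (the regression case discussed above). The data, layer sizes and initial weights are illustrative assumptions, and the factor of 2 from the squared error is absorbed into the learning rate.
 sigma  = @(a) 1 ./ (1 + exp(-a));
 dsigma = @(a) sigma(a) .* (1 - sigma(a));
 rho = 0.1;                                   % learning rate
 x = [0.2; -1.0; 0.5];  y = 1;                % one training point and its target
 U = 0.1*randn(4, 3);   w = 0.1*randn(1, 4);  % small random initial weights
 % Step 1: feed the point forward and record the outputs of all nodes.
 a = U*x;  z = sigma(a);  yhat = w*z;
 % Step 2: delta at the output unit.
 delta_out = y - yhat;
 % Step 3: propagate back to get the deltas of the hidden units.
 delta_hidden = dsigma(a) .* (w' * delta_out);
 % Steps 4-5: gradients and weight updates (one gradient descent step).
 w = w + rho * delta_out * z';                % output weights
 U = U + rho * delta_hidden * x';             % hidden weights
 % Step 6: feed the next training point and repeat.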

Advantages of back-propagation:

  • It reduces the cost of computing derivatives by a factor of the number of derivatives to be calculated when minimizing the error.
  • It allows higher degrees of nonlinearity and precision to be applied to problems.

How to initialize the weights

This still leaves the question of how to initialize the weights [math]\displaystyle{ \,u_{ij}, w_i }[/math]. The method of choosing weights mentioned in class was to randomize the weights before the first step. This is not likely to be near the optimal solution in every case, but is simple to implement. To be more specific, random values near zero are a good choice for the initial weights (usually from [-1,1]). In this case, the model evolves from a nearly linear one to a nonlinear one, as desired. An alternative is to use an orthogonal least squares method to find the initial weights <ref>http://www.mitpressjournals.org/doi/abs/10.1162/neco.1995.7.5.982</ref>. Regression is performed on the weights and output by using a linear approximation of [math]\displaystyle{ \,\sigma(a_i) }[/math], which finds optimal weights in the linear model. Back-propagation is used afterward to find the optimal solution, since the NN is non-linear.

Why should all initial weights be randomized and small?

  • The error back-propagated through the network is proportional to the values of the weights. If all the weights are the same, the back-propagated errors will all be the same as well, causing all of the weights to be updated by the same amount. Thus, identical initial weights prevent the network from learning.
  • Since the weight updates in the back-propagation algorithm are proportional to the derivative of the activation function, it is important to consider how the net input affects its value. The derivative is at its maximum when the activation function output equals 0.5 and approaches its minimum as the output approaches 0 or 1, where the associated weights change very little. Thus, choosing small initial weights keeps the units close to the region of maximal weight change.

How to set learning rates

The learning rate [math]\displaystyle{ \,\rho }[/math] is usually a constant.

If we use on-line learning, as a form of stochastic approximation, [math]\displaystyle{ \,\rho }[/math] should decrease as the iterations increase.

In typical feedforward NNs with hidden units, the objective function has many local and global optima, so the optimal learning rate often changes dramatically during the training process. The larger the learning rate, the larger the weight changes on each epoch and the quicker the network learns. However, the size of the learning rate also influences whether the network achieves a stable solution. Choosing too large a learning rate may make the system unstable and cause the weights and objective function to diverge, while too small a learning rate may lead to a very slow convergence rate (a very long learning phase). The advantage of a small learning rate is that it helps ensure convergence. Thus, it is generally better to choose a relatively small learning rate to ensure stability; usually, [math]\displaystyle{ \,\rho }[/math] is chosen between 0.01 and 0.7.

If the learning rate is appropriate, the algorithm is guaranteed to converge to a local minimum, but not necessarily to a global minimum, which would be better. Furthermore, there can exist many local minima.

How to determine the number of hidden units

Here we will mainly discuss how to estimate the number of hidden units at the very beginning. Obviously, we should adjust it to be more precise using CV, LOO or other complexity control methods.

Basically, if the patterns are well separated, a few hidden units are enough. If the patterns are drawn from some highly complicated mixture model, more hidden units are needed.

Actually, the number of hidden units determines the size of the model, and therefore the total number of weights in the model. Generally speaking, the number of weights should not be larger than the number of training data points, say N. Thus, N/10 is sometimes a good choice. However, in practice, many well-performing models use more hidden units.
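As a tiny illustration of this rule of thumb (the numbers below are assumptions), one can count the weights of a single-hidden-layer network and keep the count below roughly N/10:
 d = 20;  N = 5000;                   % input dimension and number of training points
 H = 1:50;                            % candidate numbers of hidden units
 num_weights = H*(d+1) + (H+1);       % weights of a d-input, H-hidden-unit, 1-output network (with biases)
 H_max = max(H(num_weights <= N/10))  % largest H whose weight count stays below N/10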

Dimensionality reduction application

Figure 1: Bottleneck configuration for applying dimensionality reduction.

One possible application of Neural Networks is to perform dimensionality reduction, like other techniques, e.g., PCA, MDS, LLE and Isomap.

Consider the configuration shown in Figure 1: as we go forward through the layers of this Neural Network, the number of nodes is reduced until we reach a layer whose number of nodes equals the desired dimensionality. (In the very first few layers the number of nodes need not be strictly decreasing, as long as the network eventually reaches a layer with fewer nodes.) From that layer on, the previous layers are mirrored, so at the output layer we have the same number of units as in the input layer. Now note that if we feed the network with each point and get an output approximately equal to the fed input, then at the output the input is reconstructed from the middle-layer units. So the output of the middle-layer units can represent the input with fewer dimensions.

To train this Neural Network, we feed the network with a training point and through back propagation we adjust the network weights based on the error between the input layer and the reconstruction at the output layer. Our low dimensional mapping will be the observed output from the middle layer. Data reconstruction consists of putting the low dimensional data through the second half of the network.

Deep Neural Network

Back-propagation in practice may not work well when there are too many hidden layers, since the [math]\displaystyle{ \,\delta }[/math] values may become negligible and the errors vanish. This is a numerical problem, in which it is difficult to estimate the errors. So in practice, configuring a Neural Network with back-propagation involves some subtleties. Deep Neural Networks became popular two or three years ago, when introduced by Bradford Nill in his PhD thesis. The Deep Neural Network training algorithm deals with the training of a Neural Network with a large number of layers.

The approach to training the deep network is to first assume the network has only two layers and train those two layers; after that we train the next two layers, and so on.

Although we know the input and we expect a particular output, we do not know the correct output of the hidden layers, and this is the issue that the algorithm mainly deals with. There are two major techniques to resolve this problem: using a Boltzmann machine to minimize an energy function, inspired by the theory in statistical physics concerning the most stable (lowest-energy) state; or somehow finding out what output of the second layer is most likely to lead us to the expected output at the output layer.

Difficulties of training deep architectures

Given a particular task, a natural way to train a deep network is to frame it as an optimization problem by specifying a supervised cost function on the output layer with respect to the desired target and use a gradient-based optimization algorithm in order to adjust the weights and biases of the network so that its output has low cost on samples in the training set. Unfortunately, deep networks trained in that manner have generally been found to perform worse than neural networks with one or two hidden layers.

We discuss two hypotheses that may explain this difficulty. The first one is that gradient descent can easily get stuck in poor local minima (Auer et al., 1996) or plateaus of the non-convex training criterion. The number and quality of these local minima and plateaus (Fukumizu and Amari, 2000) clearly also influence the chances for random initialization to be in the basin of attraction (via gradient descent) of a poor solution. It may be that with more layers, the number or the width of such poor basins increases. To reduce the difficulty, it has been suggested to train a neural network in a constructive manner in order to divide the hard optimization problem into several greedy but simpler ones, either by adding one neuron (e.g., see Fahlman and Lebiere, 1990) or one layer (e.g., see Lengellé and Denoeux, 1996) at a time. These two approaches have been demonstrated to be very effective for learning particularly complex functions, such as a very non-linear classification problem in 2 dimensions. However, these are exceptionally hard problems, and for learning tasks usually found in practice, this approach commonly overfits.

This observation leads to a second hypothesis. For high capacity and highly flexible deep networks, there actually exists many basins of attraction in its parameter space (i.e., yielding different solutions with gradient descent) that can give low training error but that can have very different generalization errors. So even when gradient descent is able to find a (possibly local) good minimum in terms of training error, there are no guarantees that the associated parameter configuration will provide good generalization. Of course, model selection (e.g., by cross-validation) will partly correct this issue, but if the number of good generalization configurations is very small in comparison to good training configurations, as seems to be the case in practice, then it is likely that the training procedure will not find any of them. But, as we will see in this paper, it appears that the type of unsupervised initialization discussed here can help to select basins of attraction (for the supervised fine-tuning optimization phase) from which learning good solutions is easier both from the point of view of the training set and of a test set.

Neural Networks in Practice

Now that we know so much about Neural Networks, what are suitable real world applications? Neural Networks have already been successfully applied in many industries.

Since neural networks are good at identifying patterns or trends in data, they are well suited for prediction or forecasting needs, such as customer research, sales forecasting, risk management and so on.

Take a specific marketing case for example. A feedforward neural network was trained using back-propagation to assist the marketing control of airline seat allocations. The neural approach was adaptive to the rule. The system is used to monitor and recommend booking advice for each departure.

Issues with Neural Network

When Neural Networks were first introduced, they were thought to be modeling human brains, hence the fancy name "Neural Network". But now we know that they are essentially logistic regression layers stacked on top of each other and have little to do with the real functioning principles of the brain.

We do not know why deep networks turn out to work quite well in practice. Some people claim that they mimic the human brain, but this is unfounded. As a result of these kinds of claims, it is important to keep the right perspective on what this field of study is trying to accomplish. For example, the goal of machine learning may be to mimic the 'learning' function of the brain, but not necessarily the processes the brain uses to learn.

As for the algorithm, since it does not have a convex form, we still face the problem of local minimum, although people have devised other techniques to avoid this dilemma.

In sum, Neural Networks lack a strong learning theory to back up their "success", which makes it hard to apply and tune them wisely. Having said that, while it is not currently an active research area in machine learning, NN still has wide applications in engineering fields such as control.


Business Applications of Neural Networks

Neural networks are increasingly being used in real-world business applications and, in some cases, such as fraud detection, they have already become the method of choice. Their use for risk assessment is also growing and they have been employed to visualize complex databases for marketing segmentation. This method covers a wide range of business interests — from finance management, through forecasting, to production. The combination of statistical, neural and fuzzy methods now enables direct quantitative studies to be carried out without the need for rocket-science expertise.

  • On the Use of Neural Networks for Analysis Travel Preference Data
  • Extracting Rules Concerning Market Segmentation from Artificial Neural Networks
  • Characterization and Segmenting the Business-to-Consumer E-Commerce Market Using Neural Networks
  • A Neurofuzzy Model for Predicting Business Bankruptcy
  • Neural Networks for Analysis of Financial Statements
  • Developments in Accurate Consumer Risk Assessment Technology
  • Strategies for Exploiting Neural Networks in Retail Finance
  • Novel Techniques for Profiling and Fraud Detection in Mobile Telecommunications
  • Detecting Payment Card Fraud with Neural Networks
  • Money Laundering Detection with a Neural-Network
  • Utilizing Fuzzy Logic and Neurofuzzy for Business Advantage

Complexity Control - October 30, 2009

File:overfitting-model.png
Figure 2. The overfitting model passes through all the points of the training set, but has poor predictive power for new points. In contrast, the line model has some error on the training points but has extracted the main characteristic of the training points, and has good predictive power.

There are two issues that we have to avoid in Machine Learning:

  1. Overfitting
  2. Underfitting


Overfitting occurs when our model is so complex, with so many degrees of freedom, that it can learn every detail of the training set. Such a model will have very high precision on the training set but will show very poor ability to predict the outcomes of new instances, especially outside the domain of the training set. A danger of overfitting is that it easily leads to predictions far beyond the range of the training data, producing wild predictions in multilayer perceptrons even with noise-free data. The best way to avoid overfitting is to use lots of training data.


In a Neural Network, if the network is too deep it will have many degrees of freedom and will learn every characteristic of the training data set. That means it will produce very precise outcomes on the training set but will not be able to generalize the commonality of the training set to predict the outcome of new cases.

Underfitting occurs when the model we picked to describe the data is not complex enough, and has high error rate on the training set. There is always a trade-off. If our model is too simple, underfitting could occur and if it is too complex, overfitting can occur.

Example

  1. Consider the example shown in the figure. We have a training set and we want to find the model which fits it best. We can find a polynomial of high degree which passes through almost all the points in the training set. But, in fact, the training set comes from a line model. The problem is that although the complex model has less error on the training set, it diverges from the line in ranges where we have no training points. Because of this, the high-degree polynomial has very poor predictive performance on test cases. This is an example of an overfitting model.
  2. Now consider a training set which comes from a polynomial of degree two. If we model this training set with a polynomial of degree one, our model will have a high error rate on the training set and is not complex enough to describe the problem.
  3. Consider a simple classification example. If our classification rule takes as input only the colour of a fruit and concludes that it is a banana, then it is not a good classifier. The reason is that just because a fruit is yellow does not mean that it is a banana. We can add complexity to our model to make it a better classifier by considering more features typical of bananas, such as size and shape. If we continue to make our model more and more complex in order to improve our classifier, we will eventually reach a point where the quality of our classifier no longer improves, i.e., we have overfit the data. This occurs when we have considered so many features that we have perfectly described the existing bananas, but if presented with a new banana of slightly different shape than the existing bananas, for example, it cannot be detected. This is the tradeoff; what is the right level of complexity?

Complexity Control - Nov 2, 2009

Overfitting occurs when the model becomes too complex and underfitting occurs when it is not complex enough; neither is desirable. To control complexity, it is necessary to make assumptions about the model before fitting the data. For example, we may assume the model is a polynomial of bounded degree or a neural network of a given size; there are other possibilities as well.

Figure 1: An example of a model with a family of polynomials

We do not want a model to get too complex, so we control this by making an assumption about the model. With complexity control, we want a model or a classifier with a low error rate. This lecture explains the tradeoff between bias and variance for model complexity control.

How do we choose a good classifier?

Our goal is to find a classifier that minimizes the true error rate [math]\displaystyle{ \ L(h) }[/math].

[math]\displaystyle{ \ L(h)=Pr\{h(x)\neq y\} }[/math]

Recall the empirical error rate

[math]\displaystyle{ \ \hat L_{h}= \frac{1}{n} \sum_{i=1}^{n} I(h(x_{i}) \neq y_{i}) }[/math]

[math]\displaystyle{ \,h }[/math] is a classifier and we want to minimize its error rate. So we apply [math]\displaystyle{ \displaystyle h }[/math] to [math]\displaystyle{ \displaystyle x_1 }[/math] through [math]\displaystyle{ \displaystyle x_n }[/math] and take the average to obtain the empirical error rate, which estimates the probability that [math]\displaystyle{ h(x_{i}) \neq y_{i} }[/math].

Figure 2

This estimate has a downward bias, meaning that on average it is less than the true error rate.

As the complexity goes from low to high, the training error rate keeps decreasing. When we apply our model to test data, the error rate will first decrease up to a point, but then it will increase, since the model has not seen the test data before. This can be explained as follows: the training error decreases as we fit the model better by increasing its complexity, but, as we have seen, such a complex model will not generalize well, resulting in a larger test error.

We use our test data (the test sample line shown in Figure 2) to get an empirical error rate. The right complexity is defined as the point where the error rate on the test data is at its minimum; this is one idea behind complexity control.


Figure 3

We assume that we have samples [math]\displaystyle{ \,X_1, . . . ,X_n }[/math] that follow some (possibly unknown) distribution. We want to estimate a parameter [math]\displaystyle{ \,f }[/math] of the unknown distribution. This parameter may be the mean [math]\displaystyle{ \,E(X_i) }[/math], the variance [math]\displaystyle{ \,var(X_i) }[/math] or some other quantity.

The unknown parameter [math]\displaystyle{ \,f }[/math] is a fixed real number [math]\displaystyle{ f\in R }[/math]. To estimate it, we use an estimator which is a function of our observations, [math]\displaystyle{ \hat{f}(X_1,...,X_n) }[/math].

[math]\displaystyle{ Bias (\hat{f}) = E(\hat{f}) - f }[/math]

[math]\displaystyle{ MSE (\hat{f}) = E[(\hat{f} - f)^2]=Variance (\hat f)+Bias^2(\hat f) }[/math]

[math]\displaystyle{ Variance (\hat{f}) = E[(\hat{f} - E(\hat{f}))^2] }[/math]

One property we desire of the estimator is that it is correct on average, that is, it is unbiased: [math]\displaystyle{ Bias (\hat{f}) = E(\hat{f}) - f=0 }[/math]. However, there is a more important property for an estimator than just being unbiased: the mean squared error. In statistics, there are problems for which it may be good to use an estimator with a small bias. In some cases, an estimator with a small bias may have a smaller mean squared error, or may be median-unbiased (rather than mean-unbiased, the standard unbiasedness property). The property of median-unbiasedness is invariant under transformations, while the property of mean-unbiasedness may be lost under nonlinear transformations. For example, when using an unbiased estimator with a large mean squared error to estimate a parameter, we run a high risk of a big error; in contrast, a biased estimator with a small mean squared error can greatly improve the precision of our prediction.

Hence, our goal is to minimize [math]\displaystyle{ MSE (\hat{f}) }[/math].

From figure 3, we can see that the relationship of the three parameters is: [math]\displaystyle{ MSE (\hat{f})=Variance (\hat{f})+Bias ^2(\hat{f}) }[/math]. Thus given the Mean Squared Error (MSE), if we have a low bias, then we will have a high variance and vice versa.
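A small simulation sketch (the distribution and estimator below are assumptions chosen for illustration) shows the decomposition numerically for the sample mean of n = 10 observations from N(f, 1):
 f = 2;  n = 10;  trials = 100000;
 fhat = mean(f + randn(n, trials), 1);      % one estimate of f per column of samples
 bias_hat = mean(fhat) - f;                 % estimated Bias(fhat)
 var_hat  = mean((fhat - mean(fhat)).^2);   % estimated Variance(fhat)
 mse_hat  = mean((fhat - f).^2);            % estimated MSE(fhat)
 [mse_hat, var_hat + bias_hat^2]            % the two values agree up to simulation error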

The test error is a good estimate of the MSE. We want somewhat balanced bias and variance (neither too high), even though the estimator will then have some bias.


Referring to Figure 2, overfitting happens after the point where the error on the training data (training sample line) keeps decreasing while the error on the test data (test sample line) starts to increase. There are 2 main approaches to avoid overfitting:

1. Estimating error rate

[math]\displaystyle{ \hookrightarrow }[/math] Empirical training error is not a good estimate

[math]\displaystyle{ \hookrightarrow }[/math] Empirical test error is a better estimate

[math]\displaystyle{ \hookrightarrow }[/math] Cross-Validation is fast

[math]\displaystyle{ \hookrightarrow }[/math] Computing error bound (analytically) using some probability inequality.

We will not discuss computing the error bound in class; however, a popular method for doing this computation is called VC Dimension (short for Vapnik–Chervonenkis Dimension). Information can be found from Andrew Moore and Steve Gunn.

2. Regularization

[math]\displaystyle{ \hookrightarrow }[/math] Use of shrinkage method

[math]\displaystyle{ \hookrightarrow }[/math] Decrease the chance of overfitting by controlling the weights

Example of under and overfitting in R

To give further intuition of over and underfitting, consider this example. A simple quadratic data set with some random noise is generated, and then polynomials of varying degrees are fitted. The errors for the training set and a test set are calculated.

Polynomial fits to curved data set.
 >> x <- rnorm(200,0,1)
 >> y <- x^2-0.5*x+rnorm(200,0,0.3)
 >> xtest <- rnorm(50,1,1)
 >> ytest <- xtest^2-0.5*xtest+rnorm(50,0,0.3)
 >> p1 <- lm(y~x)
 >> p2 <- lm(y ~ poly(x,2))
 >> pn <- lm(y ~ poly(x,10))
 >> psi <- lm(y~I(sin(x))+I(cos(x)))
x values for the training set are based on a [math]\displaystyle{ \,N(0,1) }[/math] distribution, while the test set has a [math]\displaystyle{ \,N(1,1) }[/math] distribution. y values are determined by [math]\displaystyle{ \,y = x^2 - 0.5x + N(0,0.3) }[/math], a quadratic function with some random variation. Polynomial least-squares fits of degree 1, 2, and 10 are calculated, as well as a fit of [math]\displaystyle{ \,sin(x)+cos(x) }[/math].
 >> > # calculate the mean squared error of degree 1 poly
 >> > sum((y-predict(p1,data.frame(x)))^2)/length(y)
 >> [1] 1.576042
 >> > sum((ytest-predict(p1,data.frame(x=xtest)))^2)/length(ytest)
 >> [1] 7.727615
Training and test mean squared errors for the linear fit. These are both quite high - and since the data is non-linear, the different mean value of the test data increases the error quite a bit.
 >> > # calculate the mean squared error of degree 2 poly
 >> > sum((y-predict(p2,data.frame(x)))^2)/length(y)
 >> [1] 0.08608467
 >> > sum((ytest-predict(p2,data.frame(x=xtest)))^2)/length(ytest)
 >> [1] 0.08407432
This fit is far better - and there is not much difference between the training and test error, either.
 >> > # calculate the mean squared error of degree 10 poly
 >> > sum((y-predict(pn,data.frame(x)))^2)/length(y)
 >> [1] 0.07967558
 >> > sum((ytest-predict(pn,data.frame(x=xtest)))^2)/length(ytest)
 >> [1] 156.7139
With a high-degree polynomial, the training error continues to decrease, but not by much - and the test set error has risen again. The overfitting makes it a poor predictor. As the degree of the polynomial rises further, numerical accuracy becomes an issue - and a good fit is not even consistently produced for the training data.
 >> > # calculate mse of sin/cos fit
 >> > sum((y-predict(psi,data.frame(x)))^2)/length(y)
 >> [1] 0.1105446
 >> > sum((ytest-predict(psi,data.frame(x=xtest)))^2)/length(ytest)
 >> [1] 1.320404
Fitting a function of the form sin(x)+cos(x) works pretty well on the training set, but because it is not the real underlying function, it fails on test data which does not lie in the same domain.

Cross-Validation (CV) - Introduction

Figure 1: Illustration of Cross-Validation

Cross-Validation is used to estimate the error rate of a classifier with respect to test data rather than data used in the model. Here is a general introduction to CV:

[math]\displaystyle{ \hookrightarrow }[/math] We have a set of collected data for which we know the proper labels

[math]\displaystyle{ \hookrightarrow }[/math] We divide it into 2 parts, Training data (T) and Validation data (V)

[math]\displaystyle{ \hookrightarrow }[/math] For our calculation, we pretend that we do not know the label of V and we use data in T to train the classifier

[math]\displaystyle{ \hookrightarrow }[/math] We estimate an empirical error rate on V: since the model has not seen V, and we know the proper labels of all elements in V, we can count how many were misclassified.

CV has different implementations which can reduce the variance of the calculated error rate, but sometimes with a tradeoff of a higher calculation time.

Complexity Control - Nov 4, 2009

Cross-validation

Figure 1: Classical/Standard cross-validation

Cross-validation is the simplest and most widely used method to estimate the true error. It comes from the observation that although the training error always decreases with increasing model complexity, the test error starts to increase from a certain point, which is known as overfitting (see Figure 2 above). Since the test error estimates the MSE (mean squared error) best, the idea is to divide the data set into three parts: a training set, a validation set, and a test set. The training set is used to build the model, the validation set is used to decide the parameters and the optimal model, and the test set is used to estimate the performance of the chosen model. A classical division is 50% for the training set, and 25% each for the validation set and the test set. All of them are randomly selected from the original data set.
Training set: a set of examples used for learning: to fit the parameters of the classifier.
Validation set: a set of examples used to tune the parameters of a classifier.
Test set: a set of examples used only to assess the performance of a fully trained classifier.

Then, we only use the part of our data marked as the "training set" to train our algorithm, while keeping the part marked as the "validation set" untouched. As a result, the validation set is totally unknown to the trained model. The error rate is then estimated by:

[math]\displaystyle{ \hat L(h) = \frac{1}{|\nu|}\sum_{(x_i, y_i) \in \nu} I(h(x_i) \neq y_i) }[/math], where [math]\displaystyle{ \,|\nu| }[/math] is the cardinality of the validation set and [math]\displaystyle{ \,I }[/math] is the indicator of a misclassification.

When we change the complexity, the error generated by the validation set will have the same behavior as the test set, so we are able to choose the best parameters to get the lowest error.
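A minimal R sketch of this single training/validation split, assuming a binary label vector y coded as 0/1, a feature vector x of the same length, and logistic regression standing in for the classification rule h (any other classifier could be substituted):

set.seed(1)
n     <- length(y)
idx   <- sample(n, size = floor(0.5 * n))              # 50% of the points form the training set T
train <- data.frame(x = x[idx],  y = y[idx])
valid <- data.frame(x = x[-idx], y = y[-idx])          # the remaining points form the validation set V

h     <- glm(y ~ x, family = binomial, data = train)   # train the classifier on T only
yhat  <- as.numeric(predict(h, valid, type = "response") > 0.5)
L.hat <- mean(yhat != valid$y)                          # empirical error rate on V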


K-fold Cross-validation

File:k-fold.png
Figure 2: K-fold cross-validation

Above is the simplest form of complexity control. However, in reality data may be hard to collect (and in high dimensions even more data is needed), so a larger data set may be hard to come by. Consequently, we may not be able to afford to sacrifice part of the limited resources. In this case we use another method that addresses this problem, K-fold cross-validation. The advantage of K-fold cross-validation is that all the examples in the dataset are eventually used for both training and testing. We divide the data set into [math]\displaystyle{ \,K }[/math] subsets roughly equal in size. The usual choice is [math]\displaystyle{ \,K = 10 }[/math].

Generally, how to choose [math]\displaystyle{ \,K }[/math]:

if [math]\displaystyle{ \,K=n }[/math], leave one out, low bias, high variance. Each subset contains a single element, so the model is trained with all except one point, and then validated using that point.

if [math]\displaystyle{ \,K }[/math] is small, say 2-fold or 5-fold, high bias, low variance. Each subset contains approximately [math]\displaystyle{ \,\frac{1}{2} }[/math] or [math]\displaystyle{ \,\frac{1}{5} }[/math] of the data, respectively.

For every [math]\displaystyle{ \,k }[/math]th [math]\displaystyle{ ( \,k \in [ 1, K ] ) }[/math] part, we use the other [math]\displaystyle{ \,K-1 }[/math] parts to fit the model and test on the [math]\displaystyle{ \,k }[/math]th part to estimate the prediction error [math]\displaystyle{ \hat L_k }[/math]. The overall estimate is the average over the folds,

[math]\displaystyle{ \hat L(h) = \frac{1}{K}\sum_{k=1}^K\hat L_k }[/math]

For example, suppose we want to fit a polynomial model to the data set and split the set into four equal subsets as shown in Figure 2. First we choose the degree to be 1, i.e. a linear model. Next we use the first three subsets as the training set and the last as the validation set, then the 1st, 2nd and 4th subsets as the training set and the 3rd as the validation set, and so on, until every subset has been the validation set once (so all observations are used for both training and validation). After we get [math]\displaystyle{ \hat L_1, \hat L_2, \hat L_3, \hat L_4 }[/math], we can calculate the average [math]\displaystyle{ \hat L }[/math] for the degree-1 model. Similarly, we can estimate the error for a degree-n model and trace out a curve of estimated error against degree. Now we are able to choose the degree which corresponds to the minimum estimated error. We can also use this method to find the optimal number of hidden units in a neural network: begin with 1 hidden unit, then 2, 3, and so on, and choose the number of hidden units with the lowest average error.
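A minimal R sketch of this procedure, assuming numeric vectors x and y and using least-squares polynomial fits as the family of models; any other model class and error measure could be substituted:

K      <- 4
n      <- length(y)
fold   <- sample(rep(1:K, length.out = n))     # randomly assign each point to one of the K folds
cv.err <- numeric(10)

for (d in 1:10) {                              # candidate polynomial degrees
  L.k <- numeric(K)
  for (k in 1:K) {
    fit    <- lm(y ~ poly(x, d), subset = (fold != k))              # train on the other K-1 folds
    pred   <- predict(fit, newdata = data.frame(x = x[fold == k]))
    L.k[k] <- mean((y[fold == k] - pred)^2)                         # error on the held-out fold
  }
  cv.err[d] <- mean(L.k)                       # average the K fold errors
}
best.degree <- which.min(cv.err)               # degree with the minimum estimated error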

Generalized Cross-validation

If the vector of observed values is denoted by [math]\displaystyle{ \mathbf{y} }[/math] and the vector of fitted values by [math]\displaystyle{ \hat{\mathbf{y}} }[/math], then for a linear fit

[math]\displaystyle{ \hat{\mathbf{y}} = \mathbf{H}\mathbf{y} }[/math],

where the hat matrix is given by

[math]\displaystyle{ \mathbf{H} = \mathbf{X}( \mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} }[/math],

For such a fit, the leave-one-out prediction error can be computed without refitting the model, using only the diagonal entries of the hat matrix:

[math]\displaystyle{ \frac{1}{N}\sum_{i=1}^{N}[y_{i} - \hat f^{-i}(\mathbf{x}_{i})]^{2}=\frac{1}{N}\sum_{i=1}^{N}[\frac{y_{i}-\hat f(x_{i})}{1-\mathbf{H}_{ii}}]^{2} }[/math],

Then the GCV approximation is given by

[math]\displaystyle{ GCV(\hat f) = \frac{1}{N}\sum_{i=1}^{N}[\frac{y_{i}-\hat f(x_{i})}{1-trace(\mathbf{H})/N}]^{2} }[/math],

Thus, one of the biggest advantages of GCV is that only the trace of [math]\displaystyle{ \mathbf{H} }[/math] is needed, which is often easier to compute than the individual diagonal elements [math]\displaystyle{ \mathbf{H}_{ii} }[/math].
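A small R sketch of both quantities for an ordinary least-squares fit, assuming a design matrix X (including an intercept column) and a response vector y; the leave-one-out formula above uses the individual diagonal entries of H, while GCV replaces them by their average trace(H)/N:

H     <- X %*% solve(t(X) %*% X) %*% t(X)    # hat matrix
y.hat <- H %*% y                             # fitted values
N     <- length(y)

loo <- mean(((y - y.hat) / (1 - diag(H)))^2)            # exact leave-one-out error
gcv <- mean(((y - y.hat) / (1 - sum(diag(H)) / N))^2)   # GCV approximation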

Leave-one-out Cross-validation

Leave-one-out cross-validation involves using all but one data point in the original training data set to train our model, then using the data point that we initially left out to estimate the true error. By repeating this process for every data point in our original data set, we can obtain a good estimate of the true error.

In other words, leave-one-out cross-validation is k-fold cross-validation in which we set the subset number [math]\displaystyle{ \,K }[/math] to be the cardinality of the whole data set.

In the above example, we can see that k-fold cross-validation can be computationally expensive: for every possible value of the parameter, we must train the model [math]\displaystyle{ \,K }[/math] times. This deficiency is even more obvious in leave-one-out cross-validation, where we must train the model [math]\displaystyle{ \,n }[/math] times, where [math]\displaystyle{ \,n }[/math] is the number of data points in the data set.

Fortunately, when adding data points to the classifier is reversible, calculating the difference between two classifiers is computationally cheaper than calculating the two classifiers separately. So, if the classifier fitted on all the data points is known, we can simply undo the contribution of one data point at a time, [math]\displaystyle{ \,n }[/math] times, to calculate the leave-one-out cross-validation error rate.

How do we decide the number of folds? For a large number of folds, the bias of the true error rate estimator will be small, but its variance and the computing time will be large. For a small number of folds, the opposite holds. When the data set is large, 3-fold cross-validation is usually enough, but if the data set is very sparse we prefer leave-one-out.
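To make the cost of leave-one-out cross-validation concrete, here is the naive loop written out in R, refitting the model once per data point; a simple linear model on vectors x and y is assumed purely for illustration:

n    <- length(y)
dat  <- data.frame(x = x, y = y)
errs <- numeric(n)
for (i in 1:n) {
  fit     <- lm(y ~ x, data = dat[-i, ])              # train on all points except point i
  errs[i] <- (dat$y[i] - predict(fit, dat[i, ]))^2    # test on the left-out point
}
loo.err <- mean(errs)                                 # leave-one-out estimate of the error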

Regularization for Neural Network — Weight Decay

Figure 1: activation function

Weight decay training is suggested as a way of achieving a robust neural network that is insensitive to noise. Since the number of hidden layers in a NN is usually decided by domain knowledge, the network can easily run into the problem of overfitting.

It can be seen from Figure 1 that when the weight is in the vicinity of zero, the operative part of the activation function shows linear behavior. The NN then collapses to an approximately linear model. Since a linear model is the simplest model, we can avoid overfitting by constraining the weights to be small. This also gives us a hint to initialize the random weights close to zero.

Formally, we penalize nonlinear weights by adding a penalty term in the error function. Now the regularized error function becomes:

[math]\displaystyle{ \,REG = err + \lambda(\sum_{i}|w_i|^2 + \sum_{jk}|u_{jk}|^2) }[/math], where [math]\displaystyle{ \,err }[/math] is the original error in back-propagation; [math]\displaystyle{ \,w_i }[/math] is the weights of the output layer; [math]\displaystyle{ \,u_{jk} }[/math] is the weights of the hidden layers.

Usually, a [math]\displaystyle{ \,\lambda }[/math] that is too large will make the weights [math]\displaystyle{ \,w_i }[/math] and [math]\displaystyle{ \,u_{jk} }[/math] too small. We can use cross-validation to estimate [math]\displaystyle{ \,\lambda }[/math]. Another approach to choosing [math]\displaystyle{ \,\lambda }[/math] is to train several networks with different amounts of decay and estimate the generalization error for each; then choose the [math]\displaystyle{ \,\lambda }[/math] that minimizes the estimated generalization error.


A similar penalty, weight elimination, is given by,

[math]\displaystyle{ \,REG = err + \lambda(\sum_{i}\frac{|w_i|^2}{1 + |w_i|^2} + \sum_{jk}\frac{|u_{jk}|^2}{1+|u_{jk}|^2}) }[/math].

As in back-propagation, we take partial derivative with respect to the weights:

[math]\displaystyle{ \frac{\partial REG}{\partial w_i} = \frac{\partial err}{\partial w_i} + 2\lambda w_i }[/math]

[math]\displaystyle{ \frac{\partial REG}{\partial u_{jk}} = \frac{\partial err}{\partial u_{jk}} + 2\lambda u_{jk} }[/math]

[math]\displaystyle{ w^{new} \leftarrow w^{old} - \rho\left(\frac{\partial err}{\partial w} + 2\lambda w\right) }[/math]

[math]\displaystyle{ u^{new} \leftarrow u^{old} - \rho\left(\frac{\partial err}{\partial u} + 2\lambda u\right) }[/math]

Note:
here [math]\displaystyle{ \,\lambda }[/math] serves as a trade-off parameter, tuning between the error rate and the linearity. Actually, we may also set [math]\displaystyle{ \,\lambda }[/math] by cross-validation. The starting weights are important since weights of exactly zero lead to zero derivatives and the algorithm does not move; on the other hand, starting with weights that are too large means starting with a nonlinear model, which can often lead to poor solutions. <ref>Trevor Hastie, Robert Tibshirani, Jerome Friedman, Elements of Statistical Learning (Springer 2009) pp.398</ref>
We can standardize or normalize the inputs and targets, or adjust the penalty term for the standard deviations of all the inputs and targets in order to omit the biases and get good result from weight decay.
[math]\displaystyle{ \,\lambda }[/math] can be different for different types of weights in the NN: we can have different [math]\displaystyle{ \,\lambda }[/math] for input-to-hidden, hidden-to-hidden, and hidden-to-output weights.
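The update rules above amount to adding one extra term to each gradient step. A toy R sketch, assuming a single linear output unit so that the gradient of err has a closed form; a real network would obtain the gradient from back-propagation, and the function name weight.decay.fit is made up for illustration:

# gradient descent with weight decay on a toy linear unit, err = sum((y - X w)^2)
# X: n x p design matrix, y: response, rho: learning rate, lambda: decay parameter
weight.decay.fit <- function(X, y, lambda = 0.1, rho = 1e-3, n.iter = 5000) {
  w <- rnorm(ncol(X), sd = 0.01)                  # start with small weights (near-linear regime)
  for (it in 1:n.iter) {
    grad.err <- -2 * t(X) %*% (y - X %*% w)       # gradient of the unpenalized error
    w <- w - rho * (grad.err + 2 * lambda * w)    # w_new <- w_old - rho (d err / d w + 2 lambda w)
  }
  w
}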

Radial Basis Function (RBF) Networks - November 6, 2009

Figure 1: Radial Basis Function Network

Introduction

A Radial Basis Function (RBF) network [19] is a type of artificial neural network with an output layer and a single hidden layer, with weights from the hidden layer to the output layer, and can be trained without back propagation since it has a closed-form solution. The neurons in the hidden layer contain basis functions. One choice that has been widely used is that of radial basis functions, which have the property that each basis function depends only on the radial distance (typically Euclidean) from a center [math]\displaystyle{ \displaystyle\mu_{j} }[/math], so that [math]\displaystyle{ \phi_{j}(x)= h({\Vert x - \mu_{j}\Vert}) }[/math].

RBF networks were first used to solve multivariate interpolation problems in numerical analysis. They are used in a similar spirit in neural-network applications, where the training and query targets are continuous. RBF networks are artificial neural networks and can be applied to regression, classification and time-series prediction.

[math]\displaystyle{ \ x_1 \cdot \cdot \cdot x_d }[/math]: input layer of d dimension of training patterns
[math]\displaystyle{ \ \phi_1 \cdot \cdot \cdot \phi_m }[/math]: hidden layer of up to m locally tuned neurons centered over receptive fields
[math]\displaystyle{ \ y_1\cdot \cdot \cdot y_k }[/math]: output layer that provides the response of the network

The output of an RBF network can be expressed as a weighted sum of its radial basis functions as follows:

[math]\displaystyle{ \hat y_{k} = \sum_{j=1}^M\phi_{j}(x) w_{jk} }[/math]

The radial basis function is:

[math]\displaystyle{ \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}} }[/math]
(Gaussian without a normalization constant)

Note: The hidden layer has a variable number of neurons (the optimal number is determined by the training process). As usual, the more neurons in the hidden layer, the higher the model complexity. Each neuron consists of a radial basis function centered on a point with the same dimensions as the input data. The radii of the RBF functions may be different. The centers and radii can be determined through clustering or an EM algorithm. When the x vector is given from the input layer, each hidden neuron computes the radial distance from the neuron's center point and then applies the RBF to this distance. The resulting values are passed to the output layer and weighted together to form the output.

[math]\displaystyle{ \,y_{k} }[/math] can be expressed in matrix form as:

[math]\displaystyle{ \hat Y = \Phi W }[/math]

where

[math]\displaystyle{ \hat{Y}_{n,k} = \left[ \begin{matrix} \hat{y}_{1,1} & \hat{y}_{1,2} & \cdots & \hat{y}_{1,k} \\ \hat{y}_{2,1} & \hat{y}_{2,2} & \cdots & \hat{y}_{2,k} \\ \vdots &\vdots & \ddots & \vdots \\ \hat{y}_{n,1} & \hat{y}_{n,2} & \cdots & \hat{y}_{n,k} \end{matrix}\right] }[/math] is the matrix of output variables.
[math]\displaystyle{ \Phi_{n,m} = \left[ \begin{matrix} \phi_{1,1} & \phi_{1,2} & \cdots & \phi_{1,m} \\ \phi_{2,1} & \phi_{2,2} & \cdots & \phi_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{n,1} & \phi_{n,2} & \cdots & \phi_{n,m} \end{matrix}\right] }[/math] is the matrix of Radial Basis Functions.
[math]\displaystyle{ W_{m,k} = \left[ \begin{matrix} w_{1,1} & w_{1,2} & \cdots & w_{1,k} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ w_{m,1} & w_{m,2} & \cdots & w_{m,k} \end{matrix}\right] }[/math] is the matrix of weights.

Here, [math]\displaystyle{ k }[/math] is the number of outputs, [math]\displaystyle{ n }[/math] is the number of data points, and [math]\displaystyle{ m }[/math] is the number of hidden units. If [math]\displaystyle{ k = 1 }[/math], [math]\displaystyle{ \hat Y }[/math] and [math]\displaystyle{ W }[/math] are column vectors.

related reading:

Introduction of the Radial Basis Function (RBF) Networks [20]

Paper about the BBFN for multi-task learning [21]

Radial Basis Function (RBF) Networks [22] [23] [24]

Advantages of RBF networks: first, they can model any nonlinear function using a single hidden layer, which removes some design decisions about the number of layers; second, the simple linear transformation in the output layer can be optimized fully using traditional linear modelling techniques.

Estimation of weight matrix W

We minimize the training error, [math]\displaystyle{ \Vert Y - \hat{Y}\Vert^2 }[/math] in order to find [math]\displaystyle{ \,W }[/math].

From a previous result in linear algebra we know that

[math]\displaystyle{ \Vert A \Vert^2 = Tr(A^{T}A) }[/math]

Thus we have a problem similar to linear regression:

[math]\displaystyle{ \ err = \Vert Y - \Phi W\Vert^{2} = Tr[(Y - \Phi W)^{T}(Y - \Phi W)] }[/math]

[math]\displaystyle{ \ err = Tr[Y^{T}Y - Y^{T}\Phi W - W^{T} \Phi^{T} Y + W^{T}\Phi^{T} \Phi W] }[/math]


Useful properties of matrix differentiation

[math]\displaystyle{ \frac{\partial Tr(AX)}{\partial X} = A^{T} }[/math]

[math]\displaystyle{ \frac{\partial Tr(X^{T}A)}{\partial X} = A }[/math]

[math]\displaystyle{ \frac{\partial Tr(X^{T}AX)}{\partial X} = (A^{T} + A)X }[/math]

Solving for W

We find the minimum over [math]\displaystyle{ \,W }[/math] by setting [math]\displaystyle{ \frac{\partial err}{\partial W} }[/math] equal to zero and using the aforementioned properties of matrix differentiation.

[math]\displaystyle{ \frac{\partial err}{\partial W} = 0 }[/math]

[math]\displaystyle{ \ 0 - \Phi^{T}Y - \Phi^{T}Y + 2\Phi^{T}\Phi W = 0 }[/math]

[math]\displaystyle{ \ -2 \Phi^{T}Y + 2\Phi^{T}\Phi W = 0 }[/math]

[math]\displaystyle{ \ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y }[/math]

[math]\displaystyle{ \hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY }[/math]

where [math]\displaystyle{ \ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T} }[/math]

[math]\displaystyle{ \,H }[/math] is the hat matrix for this model. This is a nice result since the solution has a closed form and we do not have to worry about convexity problems in this case.
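A small R sketch of this closed-form fit, assuming the centers mu (an m x d matrix) and a common width sigma have already been chosen, for example by clustering; the helper names rbf.design, rbf.fit and rbf.predict are made up for illustration:

rbf.design <- function(X, mu, sigma) {
  # Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 sigma^2)), with X an n x d matrix of inputs
  d2 <- outer(rowSums(X^2), rowSums(mu^2), "+") - 2 * X %*% t(mu)
  exp(-d2 / (2 * sigma^2))
}

rbf.fit <- function(X, Y, mu, sigma) {
  Phi <- rbf.design(X, mu, sigma)
  W   <- solve(t(Phi) %*% Phi, t(Phi) %*% Y)      # W = (Phi' Phi)^{-1} Phi' Y
  list(W = W, mu = mu, sigma = sigma)
}

rbf.predict <- function(fit, Xnew) {
  rbf.design(Xnew, fit$mu, fit$sigma) %*% fit$W   # Y.hat = Phi W
}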

Including an additional bias

[math]\displaystyle{ \,y_{k} }[/math] can be expressed in matrix form as:

[math]\displaystyle{ \hat Y = \Phi W }[/math]

where

[math]\displaystyle{ \hat Y = \left[ \begin{matrix} y_{11} & y_{12} & \cdots & y_{1k} \\ y_{21} & y_{22} & \cdots & y_{2k} \\ \vdots & & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nk} \end{matrix}\right] }[/math] is the n × k matrix of output variables.
[math]\displaystyle{ \Phi = \left[ \begin{matrix} \phi_{10} &\phi_{11} & \phi_{12} & \cdots & \phi_{1M} \\ \phi_{20} & \phi_{21} & \phi_{22} & \cdots & \phi_{2M} \\ \vdots & & \ddots & \vdots \\ \phi_{n0} &\phi_{n1} & \phi_{n2} & \cdots & \phi_{nM} \end{matrix}\right] }[/math] is the n × (M+1) matrix of Radial Basis Functions.
[math]\displaystyle{ W = \left[ \begin{matrix} w_{01} & w_{02} & \cdots & w_{0k} \\ w_{11} & w_{12} & \cdots & w_{1k} \\ w_{21} & w_{22} & \cdots & w_{2k} \\ \vdots & & \ddots & \vdots \\ w_{M1} & w_{M2} & \cdots & w_{Mk} \end{matrix}\right] }[/math] is the (M+1) × k matrix of weights.

where the extra basis function [math]\displaystyle{ \Phi_{0} }[/math] is set to 1.

Normalized RBF

In addition to the above unnormalized architecture, the normalized RBF can be represented as:

[math]\displaystyle{ \hat{y}_{k}(X) = \frac{\sum_{j=1}^{M} w_{jk}\Phi_{j}(X)}{\sum_{r=1}^{M}\Phi_{r}(X)} }[/math]


Actually, [math]\displaystyle{ \Phi^{\ast}_{j}(X) = \frac{\Phi_{j}(X)}{\sum_{r=1}^{M}\Phi_{r}(X)} }[/math] is known as a normalized radial basis function. Giving the familiar form,

[math]\displaystyle{ \hat{y}_{k}(X) = \sum_{j=1}^{M} w_{jk}\Phi^{\ast}_{j}(X) }[/math]
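In code, the normalization is just a row-wise rescaling of the design matrix before the usual least-squares fit; a short sketch reusing the hypothetical rbf.design helper from the earlier sketch:

Phi      <- rbf.design(X, mu, sigma)    # unnormalized basis functions
Phi.norm <- Phi / rowSums(Phi)          # Phi*_j(x_i) = Phi_j(x_i) / sum_r Phi_r(x_i)
W        <- solve(t(Phi.norm) %*% Phi.norm, t(Phi.norm) %*% Y)
Y.hat    <- Phi.norm %*% W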

Conceptualizing RBF networks

In the past, we have classified data using models that were explicitly linear, quadratic, or otherwise definite. In RBF networks, like in Neural Networks, we can fit an arbitrary model. How can we do this without changing the equations being used?

Recall a trick that was discussed in the October 7 lecture: if we add new features to our original data set, we can project into higher dimensions, use a linear algorithm, and get a quadratic result by collapsing to a lower dimension afterward. In RBF networks, something similar can happen.

Think of [math]\displaystyle{ \,\Phi }[/math], our matrix of radial basis functions, as a feature space of the input. Each hidden unit, then, can be thought of as representing a feature; we can see that, if there are more hidden units than input units, we essentially project to a higher-dimensional space, as we did in our earlier trick. However, this does not mean that an RBF network will actually do this; it is merely a way to convince yourself that RBF networks (and neural networks) can fit arbitrary models. Nevertheless, precisely because of this power, overfitting becomes a more serious concern: we have to control the complexity so that the network fits the general pattern rather than the particular training sample.

RBF networks for classification -- a probabilistic paradigm

Figure 1: RBF graphical model

An RBF network is akin to fitting a Gaussian mixture model to data. We assume that each class can be modelled by a single function [math]\displaystyle{ \,\phi }[/math] and data is generated by a mixture model. According to Bayes Rule,

[math]\displaystyle{ Pr(Y = y_{k} | X = x) = \frac {Pr(x|y_{k})*Pr(y_{k})}{Pr(x)} }[/math]

While all classifiers that we have seen thus far in the course have been in discriminative form, the RBF network is a generative model that can be represented using a directed graph.

We can replace the class conditional density in the above conditional probability expression by marginalizing [math]\displaystyle{ \,x }[/math] over [math]\displaystyle{ \,j }[/math]: [math]\displaystyle{ \Pr(x|y_{k}) = \sum_{j} Pr(x|j)*Pr(j|y_{k}) }[/math]




  • Note We made the assumption that each class can be modelled by a single function [math]\displaystyle{ \displaystyle\Phi }[/math] and that the data was generated by a mixture model. The Gaussian mixture model has the form:

[math]\displaystyle{ f(x)=\sum_{m=1}^M \alpha_m \phi(x;\mu_m,\Sigma_m) }[/math] where [math]\displaystyle{ \displaystyle\alpha_m }[/math] are mixing proportions, [math]\displaystyle{ \displaystyle\sum_m \alpha_m=1 }[/math], and [math]\displaystyle{ \displaystyle\mu_m }[/math] and [math]\displaystyle{ \displaystyle\Sigma_m }[/math] are the mean and covariance of each Gaussian density respectively. <ref>H. Trevor, R. Tibshirani, J. Friedman, The Elements of Statistical Learning (Springer 2009), pp. 214. </ref> The generative model in Figure 1 shows graphically how each Gaussian in the mixture model is chosen to sample from.

Radial Basis Function (RBF) Networks - November 9th, 2009

RBF Network for classification (A probabilistic point of view)

When we use an RBF Network[25] to do classification, we usually treat it as a regression problem and set a threshold to decide the data's class membership. However, to gain insight into what we are doing in terms of the RBF Network when we classify, we often think of mixture models and make certain assumptions. Previously we mainly used deterministic models to describe data, in which a given input always generates the same output; now we are going to consider a generative model of the data. In this case, some hidden variables are incorporated and a joint probability is assigned between the nodes, so that we can derive results through Bayes' rule.

Figure 26.1: RBF Network Classification Demo

We assume, as we can see in the graph on the right hand side, that we have three random variables, [math]\displaystyle{ \displaystyle y_k }[/math], [math]\displaystyle{ \displaystyle j }[/math], and [math]\displaystyle{ \displaystyle x }[/math], where [math]\displaystyle{ \displaystyle y_k }[/math] denotes class [math]\displaystyle{ \,k }[/math], [math]\displaystyle{ \displaystyle x }[/math] is what we observe, and [math]\displaystyle{ \displaystyle j }[/math] is a hidden random variable. The generative process is that there are different classes, and each class can trigger a different value of the hidden random variable [math]\displaystyle{ \displaystyle j }[/math]. To understand this, we can assume that, for instance, the data triggered by each value of [math]\displaystyle{ \displaystyle j }[/math] has a Gaussian distribution (it could be any other distribution as well), and that all the [math]\displaystyle{ \displaystyle j }[/math]'s correspond to the same type of distribution (Gaussian), but with different parameters. From each Gaussian distribution triggered by each class, we sample some data points. Therefore, in the end, we get a set of data which is not strictly Gaussian, but is actually a mixture of Gaussians.

Again, we look at the posterior distribution from Bayes' Rule.

[math]\displaystyle{ Pr(Y = y_{k} | X = x) = \frac {Pr(X = x | Y = y_{k})*Pr(Y = y_{k})}{Pr(X = x)} }[/math]

Since we made the assumption that the data has been generated from a mixture model, we can estimate this conditional probability by

[math]\displaystyle{ \Pr(X = x | Y = y_{k}) = \sum_{j} Pr(X = x | j)*Pr(j | Y = y_{k}) }[/math],

which is the class conditional distribution (or probability) of the mixture model. Note, here, if we only have a simple model from [math]\displaystyle{ \displaystyle y_k }[/math] to [math]\displaystyle{ \displaystyle x }[/math], then we won’t have this summation.

We can substitute this class conditional distribution into Bayes' formula. We can see that the posterior of class [math]\displaystyle{ \displaystyle k }[/math] is the summation over [math]\displaystyle{ \displaystyle j }[/math] of the probability of [math]\displaystyle{ \displaystyle x }[/math] given [math]\displaystyle{ \displaystyle j }[/math] times the probability of [math]\displaystyle{ \displaystyle j }[/math] given [math]\displaystyle{ \displaystyle y_k }[/math], times the prior distribution of class [math]\displaystyle{ \displaystyle k }[/math], and lastly divided by the marginal probability of [math]\displaystyle{ \displaystyle x }[/math]. That is,

[math]\displaystyle{ \Pr(y_k | x) = \frac {\sum_{j} Pr(x | j)*Pr(j | y_{k})*Pr(y_{k})}{Pr(x)} }[/math].

Since, the prior probability of class [math]\displaystyle{ \displaystyle k }[/math], [math]\displaystyle{ \displaystyle Pr(y_{k}) }[/math], does not have an index of [math]\displaystyle{ \displaystyle j }[/math], it can be taken out of the summation. This yields,

[math]\displaystyle{ \Pr(y_k | x) = \frac {Pr(y_{k})\sum_{j} Pr(x | j)*Pr(j | y_{k})}{Pr(x)} }[/math].

We multiply each term of the sum by [math]\displaystyle{ \displaystyle 1 = \frac {Pr(j)}{Pr(j)} }[/math]. Then, it becomes,

[math]\displaystyle{ \Pr(y_k | x) = \frac {Pr(y_{k})\sum_{j} Pr(x | j)*Pr(j | y_{k})*\frac {Pr(j)}{Pr(j)}}{Pr(x)} }[/math].

Next, note that [math]\displaystyle{ \displaystyle Pr(j | x) = \frac {Pr(x | j)*Pr(j)}{Pr(x)} }[/math], and [math]\displaystyle{ \displaystyle Pr(y_k | j) = \frac {Pr(j | y_k)*Pr(y_k)}{Pr(j)} }[/math]. Then rearranging the terms, we finally have the posterior:

[math]\displaystyle{ \displaystyle Pr(y_k | x) = \sum_{j} Pr(j | x)Pr(y_k | j) }[/math].

where [math]\displaystyle{ \displaystyle Pr(j | x) }[/math] is the probability of a feature given the data, and [math]\displaystyle{ \displaystyle Pr(y_k | j) }[/math] is the probability of class membership given a feature.

Interestingly, the posterior is just a sum over [math]\displaystyle{ \displaystyle j }[/math] of products of these two posteriors.

Interpretation of RBF Network classification

Figure 26.1.2(2): RBF Network

We want to relate the results that we derived above to our RBF Network. In a RBF Network, as we can see on the right hand side, we have a set of data, [math]\displaystyle{ \displaystyle x_1 }[/math] to [math]\displaystyle{ \displaystyle x_d }[/math], and the hidden basis function, [math]\displaystyle{ \displaystyle \phi_{1} }[/math] to [math]\displaystyle{ \displaystyle \phi_{M} }[/math], and then we have some output, [math]\displaystyle{ \displaystyle y_1 }[/math] to [math]\displaystyle{ \displaystyle y_k }[/math]. Also, we have weights from the hidden layer to output layer. The output is just the linear sum of [math]\displaystyle{ \displaystyle \phi }[/math]’s.

Now consider probability of [math]\displaystyle{ \displaystyle j }[/math] given [math]\displaystyle{ \displaystyle x }[/math] to be [math]\displaystyle{ \displaystyle \phi }[/math], and the probability of [math]\displaystyle{ \displaystyle y_k }[/math] given [math]\displaystyle{ \displaystyle j }[/math] to be the weights [math]\displaystyle{ \displaystyle w_{jk} }[/math], then the posterior can be written as,

[math]\displaystyle{ \displaystyle Pr(y_k | x) = \sum_{j} \phi_{j}(x)*w_{jk} }[/math].

Figure 26.1.2(1): Gaussian mixture

Now, let us look at an example in one dimensional case. Suppose,

[math]\displaystyle{ \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}} }[/math], and [math]\displaystyle{ \displaystyle j }[/math] is from 1 to 2.

We know that [math]\displaystyle{ \displaystyle \phi }[/math] is a radial basis function. It's as if we put some Gaussian over data. And for each Gaussian, we consider the center [math]\displaystyle{ \displaystyle \mu }[/math]. Then, what [math]\displaystyle{ \displaystyle \phi }[/math] computes is the similarity of any data point to the center.

We can see the graph on the left which plots the density of [math]\displaystyle{ \displaystyle \phi_{1} }[/math] and [math]\displaystyle{ \displaystyle \phi_{2} }[/math]. Take [math]\displaystyle{ \displaystyle \phi_{1} }[/math] for instance, if the point gets far from the center [math]\displaystyle{ \displaystyle \mu_{1} }[/math], then it will reduce [math]\displaystyle{ \displaystyle \phi_{1} }[/math] to become nearly zero. Remember that, we can usually find a non-linear regression or classification of input space by doing a linear one in some extended space or some feature space (more details in Aside). Here, the [math]\displaystyle{ \displaystyle \phi }[/math]’s actually produce that feature space.

So, one way to look at this is that [math]\displaystyle{ \displaystyle \phi }[/math] is telling us, given an input, how likely a particular feature is to be present. Say, for example, we define the features as the centers of these Gaussian distributions. Then, this [math]\displaystyle{ \displaystyle \phi }[/math] function computes, for a given data point, how likely this kind of feature is to appear. If the data point is right at the center, then the value of that [math]\displaystyle{ \displaystyle \phi }[/math] would be one, i.e. the probability is 1. If the point is far from the center, then the probability (the [math]\displaystyle{ \displaystyle \phi }[/math] function value) will be close to zero, that is, it's less likely. Therefore, we can treat [math]\displaystyle{ \displaystyle Pr(j | x) }[/math] as the probability of a particular feature given the data.

When we have those features, then [math]\displaystyle{ \displaystyle y }[/math] is the linear combination of the features. Hence, any of the weights [math]\displaystyle{ \displaystyle w }[/math], which is equal to [math]\displaystyle{ \displaystyle Pr(y_k | j) }[/math], tells us how likely this particular [math]\displaystyle{ \displaystyle y }[/math] will appear given those features. Therefore, the weight [math]\displaystyle{ \displaystyle w_{jk} }[/math] shows the probability of class membership given feature.

Hence, we have found a probabilistic point of view to look at RBF Network!

  • Note There are some inconsistencies with this probabilistic point of view. There are no restrictions that force [math]\displaystyle{ \displaystyle Pr(y_k | x) = \sum_{j} \phi_{j}(x)*w_{jk} }[/math] to be between 0 and 1. So if least squares is used to solve this, [math]\displaystyle{ \displaystyle w_{jk} }[/math] cannot be interpreted as a probability.


Aside

  • Feature Space:
One way to produce a feature space is LDA
Suppose, we have n data points [math]\displaystyle{ \mathbf{x}_1 }[/math] to [math]\displaystyle{ \mathbf{x}_n }[/math]. Each data point has d features. And these n data points consist of the [math]\displaystyle{ X }[/math] matrix,
[math]\displaystyle{ X = \left[ \begin{matrix} x_{11} & x_{21} & \cdots & x_{n1} \\ x_{12} & x_{22} & \cdots & x_{n2} \\ \vdots & & \ddots & \vdots \\ x_{1d} & x_{2d} & \cdots & x_{nd} \end{matrix}\right] }[/math]
Also, we have feature space,
[math]\displaystyle{ \Phi^{T} = \left[ \begin{matrix} \phi_{1}(\mathbf{x_1}) & \phi_{1}(\mathbf{x_2})& \cdots & \phi_{1}(\mathbf{x_n})\\ \phi_{2}(\mathbf{x_1})& \phi_{2}(\mathbf{x_2})& \cdots & \phi_{2}(\mathbf{x_n}) \\ \vdots & & \ddots & \vdots \\ \phi_{M}(\mathbf{x_1}) & \phi_{M}(\mathbf{x_2}) & \cdots & \phi_{M}(\mathbf{x_n}) \end{matrix}\right] }[/math]
If we want to solve a regression problem for the input data, we don't perform Least Squares on this [math]\displaystyle{ \displaystyle X }[/math] matrix; we do Least Squares on the feature space, i.e. on the [math]\displaystyle{ \displaystyle \Phi^{T} }[/math] matrix. The dimensionality of [math]\displaystyle{ \displaystyle \Phi^{T} }[/math] is M by n. We can also add a constant basis function [math]\displaystyle{ \ \Phi_0=1 }[/math], which is not a function of the inputs [math]\displaystyle{ \ x_1 \cdot \cdot \cdot x_d }[/math].
Now, we still have n data points, but we define these n data points in terms of a new set of features. So, originally, we define our data points by d features, but now, we define them by M features. And what are those M features telling us?
Let us look at the first column of [math]\displaystyle{ \displaystyle \Phi^{T} }[/math] matrix. The first entry is [math]\displaystyle{ \displaystyle \phi_1 }[/math] applied to [math]\displaystyle{ \mathbf{x_1} }[/math], and so on, until the last entry is [math]\displaystyle{ \displaystyle \phi_M }[/math] applied to [math]\displaystyle{ \mathbf{x_1} }[/math]. Suppose each of these [math]\displaystyle{ \displaystyle \phi_j }[/math] is defined by
[math]\displaystyle{ \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}} }[/math].
Then, each [math]\displaystyle{ \displaystyle \phi_j }[/math] checks the similarity of the data point with its center. Hence, the new set of features are actually representing M centers in our data set, and for each data point, its new features check how this point is similar to the first center; how it is similar to the second center; and how it is similar to the [math]\displaystyle{ \displaystyle M^{th} }[/math] center. And this checking process will apply to all data points. Therefore, feature space gives another representation of our data set.


Methods for selecting the centers [math]\displaystyle{ \ \mu }[/math]:

  • Sub-sampling: Randomly-chosen training points are copied to the radial units. Since they are randomly selected, they will represent the distribution of the training data in a statistical sense.
  • K-Means algorithm: Given K radial units, it adjusts the positions of the centers so that: Each training point belongs to a cluster center, and is nearer to this center than to any other center; Each cluster center is the centroid of the training points that belong to it.

The size of the deviation (smoothing factor) determines how spiky the Gaussian functions are. Deviations should typically be chosen so that Gaussians overlap with a few nearby centers.

Methods for choosing deviation are:

  • Choose the deviation ourselves, by hand.
  • Select the deviation to reflect the number of centers and the volume of space they occupy
  • K-Nearest Neighbor algorithm: Each unit's deviation is individually set to the mean distance to its K nearest neighbors.

If the Gaussians are too spiky, the network will not interpolate between known points, and the network loses the ability to generalize. If the Gaussians are very broad, the network loses fine detail.
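One possible reading of the K-nearest-neighbour rule above, sketched in R: each deviation sigma_j is set to the mean distance from center j to its K nearest fellow centers (mu is assumed to be an m x d matrix of centers; whether the neighbours are other centers or training points varies between implementations):

knn.sigma <- function(mu, K = 2) {
  D <- as.matrix(dist(mu))                              # pairwise distances between the centers
  apply(D, 1, function(d) mean(sort(d[d > 0])[1:K]))    # mean distance to the K nearest centers
}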


Useful resources:

1. Some examples, advantages & disadvantages: Basis Function (RBF) Networks

TWO-STAGE LEARNING NETWORKS: EXPLOITATION OF SUPERVISED DATA IN THE SELECTION OF HIDDEN UNIT PARAMETERS

2. Comparison between BP & RBF: [26]

Model selection or complexity control for RBF Network - a brief introduction

In order to obtain a better fit for the training data, we often want to increase the complexity of our RBF Network. By its construction, the only way to change the complexity of an RBF Network is to increase or decrease the number of basis functions. A larger number of basis functions yields a more complex network. In theory, if we add enough basis functions, the RBF Network can fit any training set; however, this does not mean the model can generalize well. Therefore, to avoid the overfitting problem (see Notes below), we only want to increase the number of basis functions up to a certain point, i.e. its optimal level.

For the model selection, what we usually do is estimate the training error. After working through the training error, we’ll see that the training error in fact can be decomposed, and one component of training error is called Mean Squared Error (MSE). In the later notes, we will find that our final goal is to get a good estimate of MSE. Moreover, in order to find an optimal model for our data, we select the model with the smallest MSE.

Now, let us introduce some notations that we will use in the analysis:

  • [math]\displaystyle{ \hat f }[/math] -- the prediction model estimated by a RBF network from the training data
  • [math]\displaystyle{ \displaystyle f }[/math] -- the real model (not null), and ideally, we want [math]\displaystyle{ \hat f }[/math] to be close to [math]\displaystyle{ \displaystyle f }[/math]
  • [math]\displaystyle{ \displaystyle err }[/math] -- the training error
  • [math]\displaystyle{ \displaystyle Err }[/math] -- the testing error
  • [math]\displaystyle{ \displaystyle MSE }[/math] -- the Mean Squared Error

Notes

File:overfitting.png
Figure 26.2: Overfitting
  • Being more complex isn't always a good thing. Sometimes, overfitting causes the model to lose its generality. For example, in the graph on the left hand side, the data points are sampled from the model [math]\displaystyle{ \displaystyle y_i= f(x_i)+\epsilon_i }[/math], where [math]\displaystyle{ \displaystyle f(x_i) }[/math] is a linear function, shown by the blue line, and [math]\displaystyle{ \displaystyle \epsilon_i }[/math] is additive Gaussian noise from [math]\displaystyle{ ~N(0,\sigma^2) }[/math]. The red curve displayed in the graph shows the over-fitted model. Clearly, this over-fitted model only works for the training data, and is useless for prediction when new data points are introduced.
> n<-20;                                   # sample size
> x<-seq(1,10,length=n);                   # design points
> alpha<-2.5;                              # true intercept
> beta<-1.75;                              # true slope
> y<-alpha+beta*x+rnorm(n);                # linear model plus standard Gaussian noise
> plot(y~x, pch=16, lwd=3, cex=0.5, main='Overfitting');
> abline(alpha, beta, col='blue');         # the true (simple) linear model
> lines(spline(x, y), col = 2);            # interpolating spline: the over-fitted model
  • More details on this topic later on.






Model Selection(Stein's Unbiased Risk Estimate)- November 11th, 2009

Model Selection

Model selection is the task of selecting a model of optimal complexity for given data. Learning a radial basis function network from data is a parameter estimation problem. One difficulty with this problem is selecting parameters that show good performance on both training and testing data. In principle, a model is selected to have parameters associated with the best observed performance on training data, although our goal really is to achieve good performance on unseen testing data. Not surprisingly, a model selected on the basis of training data does not necessarily exhibit comparable performance on the testing data. When squared error is used as the performance index, a zero-error model on the training data can always be achieved by using a sufficient number of basis functions.


But training error and testing error do not move together in a simple way. In particular, a smaller training error does not necessarily result in a smaller testing error. In practice, one often observes that, up to a certain point, the model error on testing data tends to decrease as the training error decreases. However, if one attempts to decrease the training error too far by increasing model complexity, the testing error can increase dramatically.


The basic reason behind this phenomenon is that in the process of minimizing training error, after a certain point, the model begins to over-fit the training set. Over-fitting in this context means fitting the model to training data at the expense of losing generality. In the extreme form, a set of [math]\displaystyle{ \displaystyle N }[/math] training data points can be modeled exactly with [math]\displaystyle{ \displaystyle N }[/math] radial basis functions. Such a model follows the training data perfectly. However, the model does not represent the features of the true underlying data source, and this is why it fails to correctly model new data points.


In general, the training error will be less than the testing error on new data. A model adapts to the training data, and hence the training error is an overly optimistic estimate of the testing error. An obvious way to obtain a better estimate of the testing error is to add a penalty term to the training error to compensate. SURE is developed from exactly this idea.

Stein's unbiased risk estimate (SURE)

Important Notation[27]

Let:

  • [math]\displaystyle{ \hat f(X) }[/math] denote the prediction model, which is estimated from a training sample by the RBF neural network model.
  • [math]\displaystyle{ \displaystyle f(X) }[/math] denote the true model.
  • [math]\displaystyle{ \displaystyle err=\sum_{i=1}^N (\hat y_i-y_i)^2 }[/math] denote the training error, which is the total squared loss over the training sample of size [math]\displaystyle{ \displaystyle N }[/math].
  • [math]\displaystyle{ \displaystyle Err=\sum_{i=1}^M (\hat y_i-y_i)^2 }[/math] denote the test error, which is the prediction error on an independent test sample of size [math]\displaystyle{ \displaystyle M }[/math].
  • [math]\displaystyle{ \displaystyle MSE=E(\hat f-f)^2 }[/math] denote the mean squared error, where [math]\displaystyle{ \hat f(X) }[/math] is the estimated model and [math]\displaystyle{ \displaystyle f(X) }[/math] is the true model.

The Bias-Variance Decomposition:

[math]\displaystyle{ \begin{align} \displaystyle MSE = E(\hat f-f)^2 &= E[(\hat f-E(\hat f))+(E(\hat f)-f)]^2\\ &= E[(\hat f-E(\hat f))^2+2*(\hat f-E(\hat f))*(E(\hat f)-f)+(E(\hat f)-f)^2]\\ &= E[(\hat f-E(\hat f))^2]+E[2*(\hat f-E(\hat f))*(E(\hat f)-f)]+E[(E(\hat f)-f)^2]\\ &= Var(\hat f)+Bias^2(\hat f) \end{align} }[/math]

The middle term vanishes: [math]\displaystyle{ \displaystyle E[2*(\hat f-E(\hat f))*(E(\hat f)-f)]=2*Cov[E(\hat f)-f, \hat f-E(\hat f)] }[/math], which is equal to zero because [math]\displaystyle{ E(\hat f)-f }[/math] is not random and [math]\displaystyle{ E[\hat f-E(\hat f)]=0 }[/math].

Suppose the observations [math]\displaystyle{ \displaystyle y_i= f(x_i)+\epsilon_i }[/math], where [math]\displaystyle{ \displaystyle \epsilon_i }[/math] is additive Gaussian noise [math]\displaystyle{ ~N(0,\sigma^2) }[/math]. We need to estimate [math]\displaystyle{ \hat f }[/math] from the training data set [math]\displaystyle{ T=\{(x_i,y_i)\}_{i=1}^{N} }[/math]. Let [math]\displaystyle{ \hat f_i=\hat f(x_i) }[/math] and [math]\displaystyle{ \displaystyle f_i= f(x_i) }[/math], then

[math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i-\epsilon_i)^2] }[/math][math]\displaystyle{ =E[(\hat f_i-f_i)^2]+E[\epsilon_i^2]-2E[\epsilon_i(\hat f_i-f_i)] }[/math]

[math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2E[\epsilon_i(\hat f_i-f_i)] }[/math] [math]\displaystyle{ \displaystyle (1) }[/math]

The last term can be written as:

[math]\displaystyle{ \displaystyle E[\epsilon_i(\hat f_i-f_i)]=E[(y_i-f_i)(\hat f_i-f_i)]=cov(y_i,\hat f_i) }[/math], where [math]\displaystyle{ \displaystyle y_i }[/math] has mean [math]\displaystyle{ \displaystyle f_i }[/math], so this cross term is exactly the covariance between the observation and the fitted value.

Stein's Lemma

If [math]\displaystyle{ \,Z }[/math] is [math]\displaystyle{ \,N(\mu,\sigma^2) }[/math] and if [math]\displaystyle{ \displaystyle g(Z) }[/math] is weakly differentiable, such that [math]\displaystyle{ \displaystyle E[\vert g'(Z)\vert]\lt \infty }[/math], then [math]\displaystyle{ \displaystyle E[g(Z)(Z-\mu)]=\sigma^2E(g'(Z)) }[/math].


According to Stein's Lemma, the last cross term of [math]\displaystyle{ \displaystyle (1) }[/math], [math]\displaystyle{ \displaystyle E[\epsilon_i(\hat f_i-f_i)] }[/math] can be written as [math]\displaystyle{ \sigma^2 E[\frac {\partial \hat f}{\partial y_i}] }[/math]. The derivation is as follows.

[math]\displaystyle{ \displaystyle Proof }[/math]: Let [math]\displaystyle{ \,Z = \epsilon }[/math]. Then [math]\displaystyle{ g(Z) = \hat f-f }[/math], since [math]\displaystyle{ \,y = f + \epsilon }[/math] and [math]\displaystyle{ \,f }[/math] is a constant, so [math]\displaystyle{ \hat f }[/math] depends on [math]\displaystyle{ \,\epsilon }[/math] only through [math]\displaystyle{ \,y }[/math]. Here [math]\displaystyle{ \,\mu = 0 }[/math] and [math]\displaystyle{ \,\sigma^2 }[/math] is the variance of [math]\displaystyle{ \,\epsilon }[/math]. [math]\displaystyle{ \displaystyle E[g(Z)(Z-\mu)]=E[(\hat f-f)\epsilon]=\sigma^2E(g'(Z))=\sigma^2 E[\frac {\partial (\hat f-f)}{\partial y_i}]=\sigma^2 E[\frac {\partial \hat f}{\partial y_i}-\frac {\partial f}{\partial y_i}] }[/math]


Since [math]\displaystyle{ \displaystyle f }[/math] is the true model, not the function of the observations [math]\displaystyle{ \displaystyle y_i }[/math], then [math]\displaystyle{ \frac {\partial f}{\partial y_i}=0 }[/math].

So,[math]\displaystyle{ \displaystyle E[\epsilon_i(\hat f_i-f_i)]=\sigma^2 E[\frac {\partial \hat f}{\partial y_i}] }[/math] [math]\displaystyle{ \displaystyle (2) }[/math]

Two Different Cases

SURE in RBF: A. Ghodsi and D. Schuurmans, "Automatic basis selection for RBF networks using Stein's unbiased risk estimator".


Case 1

Consider the case in which a new data point has been introduced to the estimated model, i.e. [math]\displaystyle{ (x_i,y_i)\not\in\tau }[/math]; this new point belongs to the validation set [math]\displaystyle{ \displaystyle \nu }[/math], i.e. [math]\displaystyle{ (x_i,y_i)\in\nu }[/math]. Since [math]\displaystyle{ \displaystyle y_i }[/math] is a new point, [math]\displaystyle{ \hat f }[/math] and [math]\displaystyle{ \displaystyle y_i }[/math] are independent. Therefore [math]\displaystyle{ \displaystyle cov(y_i,\hat f)=0 }[/math] (or think about [math]\displaystyle{ \frac{\partial \hat f}{\partial y_i} }[/math]: when [math]\displaystyle{ \,y_i }[/math] is a new point, it has nothing to do with [math]\displaystyle{ \hat f }[/math], because [math]\displaystyle{ \hat f }[/math] is estimated from the training data only, so [math]\displaystyle{ \frac{\partial \hat f}{\partial y_i}=0 }[/math]), and [math]\displaystyle{ \displaystyle (1) }[/math] in this case can be written as:

[math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2 }[/math].

This expectation means [math]\displaystyle{ \frac {1}{m}\sum_{i=1}^m (\hat y_i-y_i)^2 = \frac {1}{m}\sum_{i=1}^m (\hat f_i-f_i)^2+ \sigma^2 }[/math].

[math]\displaystyle{ \sum_{i=1}^m (\hat y_i-y_i)^2 = \sum_{i=1}^m (\hat f_i-f_i)^2+ m\sigma^2 }[/math]

Using the notation defined above, we obtain: [math]\displaystyle{ \displaystyle MSE=Err-m\sigma^2 }[/math]


This is the justification behind the technique of cross-validation. Since [math]\displaystyle{ \displaystyle \sigma^2 }[/math] is constant, minimizing [math]\displaystyle{ \displaystyle MSE }[/math] is equivalent to minimizing the test error [math]\displaystyle{ \displaystyle Err }[/math]. In cross-validation, to avoid overfitting or underfitting, the validation data set is kept independent of the estimated model.


Case 2

A more interesting case is the one in which we do not use new data points to assess the performance of the estimated model, and the training data is used both for estimating and for assessing the model [math]\displaystyle{ \hat f_i }[/math]. In this case the cross term in [math]\displaystyle{ \displaystyle (1) }[/math] cannot be ignored, because [math]\displaystyle{ \hat f_i }[/math] and [math]\displaystyle{ \displaystyle y_i }[/math] are not independent. The cross term can, however, be estimated by Stein's lemma, which was originally proposed to estimate the mean of a Gaussian distribution.


Suppose [math]\displaystyle{ (x_i,y_i)\in\tau }[/math], then by applying Stein's lemma, we obtain [math]\displaystyle{ \displaystyle (2) }[/math] proved above.

[math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2\sigma^2E[\frac {\partial \hat f}{\partial y_i}] }[/math].

This expectation means [math]\displaystyle{ \frac {1}{N}\sum_{i=1}^N (\hat y_i-y_i)^2 = \frac {1}{N}\sum_{i=1}^N (\hat f_i-f_i)^2+ \sigma^2-\frac {2\sigma^2}{N}\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} }[/math].


[math]\displaystyle{ \sum_{i=1}^N (\hat y_i-y_i)^2 = \sum_{i=1}^N (\hat f_i-f_i)^2+ N\sigma^2-2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} }[/math].

[math]\displaystyle{ \displaystyle err=MSE+N\sigma^2-2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} }[/math]

[math]\displaystyle{ \displaystyle MSE=err-N\sigma^2+2\sigma^2\sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} }[/math] [math]\displaystyle{ \displaystyle (3) }[/math]

In statistics, this is known as Stein's unbiased risk estimate (SURE): an unbiased estimator of the mean-squared error of a given estimator in a deterministic estimation scenario. In other words, it provides an indication of the accuracy of a given estimator. This is important since, in deterministic estimation, the true mean-squared error of an estimator generally depends on the value of the unknown parameter, and thus cannot be determined completely.

SURE for RBF Network

Based on SURE, the optimum number of basis functions should be chosen to minimize the estimated generalization error. For the Radial Basis Function Network, by setting [math]\displaystyle{ \frac{\partial err}{\partial W} }[/math] equal to zero, we get the least-squares solution [math]\displaystyle{ \ W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y }[/math]. Then we have [math]\displaystyle{ \hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY }[/math], where [math]\displaystyle{ \ H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T} }[/math], and [math]\displaystyle{ \,H }[/math] is the hat matrix for this model.


[math]\displaystyle{ \hat f_i=\,H_{i1}y_1+\,H_{i2}y_2+\cdots+\,H_{in}y_n }[/math] [math]\displaystyle{ \displaystyle (4) }[/math]

where [math]\displaystyle{ \,H }[/math] depends on the input vector [math]\displaystyle{ \displaystyle x_i }[/math] but not on [math]\displaystyle{ \displaystyle y_i }[/math].

By taking the derivative of [math]\displaystyle{ \hat f_i }[/math] with respect to [math]\displaystyle{ \displaystyle y_i }[/math], we can easily obtain:

[math]\displaystyle{ \sum_{i=1}^N \frac {\partial \hat f}{\partial y_i}=\sum_{i=1}^N \,H_{ii} }[/math]

Now, substituting this into [math]\displaystyle{ \displaystyle (3) }[/math], we get

[math]\displaystyle{ \displaystyle MSE=err-N\sigma^2+2\sigma^2\sum_{i=1}^N \,H_{ii} }[/math]

Here, we can tell that [math]\displaystyle{ \sum_{i=1}^N \,H_{ii}= \,Trace(H) }[/math], the sum of the diagonal elements of [math]\displaystyle{ \,H }[/math]. Thus, we can obtain the further simplification [math]\displaystyle{ \,Trace(H)= Trace(\Phi(\Phi^{T}\Phi)^{-1}\Phi^{T})= Trace(\Phi^{T}\Phi(\Phi^{T}\Phi)^{-1})=M }[/math], where [math]\displaystyle{ \displaystyle M }[/math] is the number of columns of [math]\displaystyle{ \displaystyle \Phi }[/math], i.e. the number of basis functions, since [math]\displaystyle{ \displaystyle \Phi }[/math] projects the input matrix [math]\displaystyle{ \,X }[/math] onto the space spanned by the [math]\displaystyle{ \,M }[/math] basis functions. If an intercept (the constant basis function) is included, then [math]\displaystyle{ \,Trace(H)= M+1 }[/math].

Then,[math]\displaystyle{ \displaystyle MSE=err-N\sigma^2+2\sigma^2(M+1) }[/math].

SURE Algorithm

Figure 27.1

We use this method to find the optimum number of basis functions by choosing, over the set of models considered, the one with the smallest estimated MSE. We are given a set of models [math]\displaystyle{ \hat f_M(x) }[/math] indexed by the number of basis functions [math]\displaystyle{ \,M }[/math], with training errors [math]\displaystyle{ \displaystyle err(M) }[/math].

Then, [math]\displaystyle{ \displaystyle MSE(M)=err(M)-N\sigma^2+2\sigma^2(M+1) }[/math]

where [math]\displaystyle{ \displaystyle N }[/math] is the number of training samples, and the noise variance [math]\displaystyle{ \sigma^2 }[/math] can be estimated from the training data as

[math]\displaystyle{ \hat \sigma^2=\frac {1}{N-1}\sum_{i=1}^N (\hat y-y)^2 }[/math].


By applying the SURE algorithm to the SPECT Heart data, we find that the optimal number of basis functions is [math]\displaystyle{ \displaystyle M=4 }[/math].


Please see Figure 27.1 on the right, which shows that the [math]\displaystyle{ \displaystyle MSE }[/math] is smallest when [math]\displaystyle{ \displaystyle M=4 }[/math].


Calculating the SURE value is easy if you have access to [math]\displaystyle{ \,\sigma }[/math].

sure_Err = error - num_data_point * sigma .^ 2 + 2 * sigma .^2 * (num_basis_functions + 1);

If [math]\displaystyle{ \,\sigma }[/math] is not known, it can be estimated using the error.

err      = sum((output - expected_output) .^ 2);    % total squared training error
sigma2   = err / (num_data_point - 1);              % estimate of the noise variance
sure_Err = err - num_data_point * sigma2 + 2 * sigma2 * (num_basis_functions + 1);

SURE for RBF network & Support Vector Machine - November 13th, 2009

SURE for RBF network

Minimizing MSE

By Stein's unbiased risk estimate (SURE) for Radial Basis Function (RBF) Network we get:

[math]\displaystyle{ \displaystyle MSE=err-N\sigma^2+2\sigma^2(M+1) }[/math] (28.1)

  • [math]\displaystyle{ \displaystyle MSE }[/math] (mean squared error) = [math]\displaystyle{ \sum_{i=1}^N (\hat f_i-f_i)^2 }[/math]
  • [math]\displaystyle{ \displaystyle err }[/math] (training error) = [math]\displaystyle{ \sum_{i=1}^N (\hat y_i-y_i)^2 }[/math]
  • [math]\displaystyle{ \displaystyle (M+1) }[/math] (number of hidden units plus the intercept) = [math]\displaystyle{ \sum_{i=1}^N \frac {\partial \hat f}{\partial y_i} }[/math]


Goal: To minimize MSE

1. If [math]\displaystyle{ \displaystyle \sigma }[/math] is known, then the term [math]\displaystyle{ \displaystyle -N\sigma^2 }[/math] is a constant and has no impact, so we can ignore it; we only need to minimize [math]\displaystyle{ \displaystyle err +2\sigma^2(M+1) }[/math].

2. In reality, we do not know [math]\displaystyle{ \displaystyle \sigma }[/math], and the estimate [math]\displaystyle{ \,\hat \sigma }[/math] changes when [math]\displaystyle{ \displaystyle (M+1) }[/math] changes. However, we can estimate [math]\displaystyle{ \displaystyle \sigma }[/math].

[math]\displaystyle{ \displaystyle y_i= f(x_i)+\epsilon_i }[/math], where [math]\displaystyle{ \displaystyle \epsilon_i }[/math] is additive Gaussian noise [math]\displaystyle{ ~N(0,\sigma^2) }[/math]. Suppose we do not know the variance of [math]\displaystyle{ \displaystyle \epsilon }[/math]. Then,

[math]\displaystyle{ \displaystyle \sigma^2=\frac{1}{N-1}\sum_{i=1}^N (\hat y_i-y_i)^2 =\frac{1}{N-1}err }[/math] (28.2)

Substitute (28.2) into (28.1), get

[math]\displaystyle{ \displaystyle MSE=err-N\frac{1}{N-1}err+2\frac{1}{N-1}err(M+1) }[/math]

[math]\displaystyle{ \displaystyle MSE=err(1-\frac{N}{N-1}+\frac{2(M+1)}{N-1}) }[/math]

[math]\displaystyle{ \displaystyle MSE=err(\frac{N-1-N+2M+2}{N-1}) }[/math]

[math]\displaystyle{ \displaystyle MSE=err(\frac{2M+1}{N-1}) }[/math] (28.3)


Figure 28.1: MSE vs err

Figure 28.1: the training error will decrease and the MSE will increase when increasing the number of hidden units (i.e. the model is more complex).


When the number of hidden units gets larger and larger, the training error will decrease until it approaches [math]\displaystyle{ \displaystyle 0 }[/math]. If the training error equals [math]\displaystyle{ \displaystyle 0 }[/math], then no matter how large [math]\displaystyle{ \displaystyle (M+1) }[/math] is, from (28.3) the estimate of the MSE will approach [math]\displaystyle{ \displaystyle 0 }[/math] as well. In reality this should not happen: when the training error is close to [math]\displaystyle{ \displaystyle 0 }[/math], overfitting has occurred, and the MSE should increase instead of being close to [math]\displaystyle{ \displaystyle 0 }[/math]. We can see this in Figure 28.1.


From (28.2), [math]\displaystyle{ \displaystyle \sigma^2 }[/math] is essentially an average of [math]\displaystyle{ \displaystyle err }[/math]. To deal with this problem, we can average the estimate of [math]\displaystyle{ \displaystyle \sigma^2 }[/math] over models with different numbers of hidden units, for example over models with 1 up to 10 hidden units. Since in reality [math]\displaystyle{ \, \sigma^2 }[/math] is a property of the noise in the data and does not depend on [math]\displaystyle{ \,M+1 }[/math], using such a pooled [math]\displaystyle{ \,\sigma^2 }[/math] value has a firm theoretical basis.

We can also see that, unlike the classical Cross-Validation (CV) or Leave-One-Out (LOO) techniques, the SURE technique does not need a separate validation step to find the optimal model. Hence, the SURE technique uses less data than CV or LOO, which makes it suitable when there is not enough data for validation. However, to implement SURE we need to find [math]\displaystyle{ \frac {\partial \hat f}{\partial y_i} }[/math], which may not be trivial for models that do not have a closed-form solution.
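A sketch of this selection rule in R, assuming err.M is a vector whose M-th entry is the training error err(M) of the fitted RBF model with M basis functions, N is the number of training points, and the noise variance is estimated once by pooling over the candidate models as suggested above:

M.max   <- length(err.M)
sigma2  <- mean(err.M / (N - 1))                              # pooled estimate of the noise variance
mse.hat <- err.M - N * sigma2 + 2 * sigma2 * ((1:M.max) + 1)  # MSE(M) = err(M) - N sigma^2 + 2 sigma^2 (M+1)
M.best  <- which.min(mse.hat)                                 # number of basis functions with smallest estimated MSE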

Kmeans Clustering

Description:
Kmeans clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean.

  • When K-means is used to build an RBF network, the number of hidden units (basis functions [math]\displaystyle{ \displaystyle \phi }[/math]) is set equal to the number of clusters.
  • [math]\displaystyle{ \displaystyle \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}} }[/math], and we set [math]\displaystyle{ \displaystyle \sigma_j }[/math] equal for all clusters.

The basic details for [math]\displaystyle{ K }[/math]-means clustering are given:

The [math]\displaystyle{ K }[/math] initial centers are randomly chosen from the training data.

Then the following two steps are iterated alternately until convergence.

1. For each existing center, we re-identify its cluster (every point in this cluster should be closer to this center than to others).

2. Compute the mean of each cluster and make it the new center for that cluster.


Kmeans Clustering algorithm
  • For a given cluster assignment [math]\displaystyle{ \displaystyle C }[/math], the total cluster variance is minimized with respect to [math]\displaystyle{ \displaystyle \lbrace m_1,m_2,...,m_k \rbrace }[/math], yielding the means of the currently assigned clusters.
  • Given a current set of means [math]\displaystyle{ \displaystyle \lbrace m_1,m_2,...,m_k \rbrace }[/math], the total cluster variance [math]\displaystyle{ \displaystyle \sum_{k=1}^K N_k \sum_{C(i)=k} \|x_i-m_k \|^2 }[/math] is minimized by assigning each observation to the closest (current) cluster mean. That is, [math]\displaystyle{ C(i)=\arg\min_{k} \Vert x_i-m_k \Vert ^2 }[/math].
  • Steps 1 and 2 are iterated until the assignments do not change (a from-scratch sketch of these two steps is given below).
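For concreteness, here is the two-step iteration written out from scratch in R (the MATLAB example below instead calls the built-in kmeans routine); X is assumed to be an n x d matrix with one observation per row:

kmeans.simple <- function(X, K, n.iter = 100) {
  centers <- X[sample(nrow(X), K), , drop = FALSE]      # K initial centers drawn from the data
  cluster <- rep(1, nrow(X))
  for (it in 1:n.iter) {
    # step 1: re-identify each point's cluster (index of the closest current center)
    d2 <- outer(rowSums(X^2), rowSums(centers^2), "+") - 2 * X %*% t(centers)
    cluster <- max.col(-d2)
    # step 2: each center becomes the mean of the points currently assigned to it
    for (k in 1:K)
      if (any(cluster == k))
        centers[k, ] <- colMeans(X[cluster == k, , drop = FALSE])
  }
  list(cluster = cluster, centers = centers)
}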
Example:

Partition data into 2 clusters (2 hidden values)


   >> X=rand(30,80);                
   >> [IDX,C,sumD,D]=kmeans(X,2);     
   >> size(IDX)                     
   >>   30     1
   >> size(C)                    
   >>     2    80
   >> size(sumD)                                
   >>     2     1
   >> c1=sum(IDX==1)
   >>    14
   >> c2=sum(IDX==2)
   >>    16
   >> sumD
   >>   85.6643
   >>   101.0419
   >> v1=sumD(1,1)/c1              
   >>  6.1189
   >> v2=sumD(2,1)/c2              
   >>  6.3151      


Comments:

We create [math]\displaystyle{ X }[/math] randomly as a training set and apply the “kmeans” method to separate it into 2 clusters. Since MATLAB's kmeans treats each row as an observation, X=rand(30,80) gives 30 data points in 80 dimensions. IDX is a 30*1 vector containing 1s and 2s, indicating the cluster of each point. [math]\displaystyle{ \displaystyle C }[/math] holds the center (mean) of each cluster and has size 2*80; sumD is the sum of squared distances between the data points and the center of their cluster. [math]\displaystyle{ \displaystyle c1 }[/math] and [math]\displaystyle{ \displaystyle c2 }[/math] are the numbers of data points in clusters 1 and 2. [math]\displaystyle{ \displaystyle v1 }[/math] is the variance estimate of the first cluster (used as [math]\displaystyle{ \displaystyle \sigma_1^2 }[/math]) and [math]\displaystyle{ \displaystyle v2 }[/math] is that of the second cluster (used as [math]\displaystyle{ \displaystyle \sigma_2^2 }[/math]). Now we can get [math]\displaystyle{ \displaystyle \Phi }[/math], [math]\displaystyle{ \displaystyle W }[/math], the hat matrix [math]\displaystyle{ \displaystyle H }[/math] and [math]\displaystyle{ \displaystyle \hat Y }[/math] from the following equations. Finally, we compute the [math]\displaystyle{ \displaystyle MSE }[/math] and predict on the test set.

[math]\displaystyle{ \displaystyle \phi_{j}(x) = e^{\frac{-\Vert x - \mu_{j}\Vert ^2}{2\sigma_{j}^2}} }[/math]

[math]\displaystyle{ \displaystyle W = (\Phi^{T}\Phi)^{-1}\Phi^{T}Y }[/math]

[math]\displaystyle{ \displaystyle H = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T} }[/math]

[math]\displaystyle{ \displaystyle \hat{Y} = \Phi W = \Phi(\Phi^{T}\Phi)^{-1}\Phi^{T}Y = HY }[/math]
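
As a sketch, here is how the kmeans output could be plugged into these formulas in MATLAB; the placeholder response y and the variable names (Phi, sigma2, Yhat) are ours:

% Build the RBF design matrix Phi from the kmeans output and fit the weights (sketch).
X = rand(30,80);                      % same setup as the example above
[IDX,C,sumD] = kmeans(X,2);
y = rand(30,1);                       % a placeholder response for the 30 training points
sigma2 = [sumD(1)/sum(IDX==1); sumD(2)/sum(IDX==2)];   % per-cluster variance estimates (v1, v2 above)
Phi = zeros(30,2);
for j = 1:2
    d2 = sum((X - repmat(C(j,:),30,1)).^2, 2);          % squared distance to center j
    Phi(:,j) = exp(-d2 ./ (2*sigma2(j)));
end
W    = (Phi'*Phi) \ (Phi'*y);         % weights, W = (Phi'Phi)^{-1} Phi'y
H    = Phi * ((Phi'*Phi) \ Phi');     % hat matrix
Yhat = Phi * W;                       % fitted values, Yhat = H*y
MSE  = mean((y - Yhat).^2);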


Aside:

Similar in spirit to [math]\displaystyle{ K }[/math]-means, there is the EM algorithm for the Gaussian mixture model. Generally speaking, the Gaussian mixture model performs soft clustering, while [math]\displaystyle{ K }[/math]-means performs hard clustering.

Similar to [math]\displaystyle{ K }[/math]-means, the following two steps are iterated alternately until convergence.

E-step: each point is assigned a weight (responsibility) for each cluster, based on the likelihood of the point under the corresponding Gaussian. (By contrast, in [math]\displaystyle{ K }[/math]-means an observation gets weight 1 for the cluster whose center is closest and weight 0 for all other clusters.)

M-step: compute the weighted means and covariances and use them as the new means and covariances of every cluster.

    >>[P,mu,phi,lPxtr]=mdgEM(X,2,200,0);   % mdgEM is a course-provided EM routine for a Gaussian mixture, not a built-in MATLAB function
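
For illustration, here is a minimal sketch of one EM iteration for a two-component Gaussian mixture; this is our own toy example (using mvnpdf from the Statistics Toolbox), not the course's mdgEM code:

% One EM iteration for a 2-component Gaussian mixture (sketch).
X = [mvnrnd([0 0],eye(2),100); mvnrnd([3 3],eye(2),100)];   % 200 points in 2-D
[n,d] = size(X);  K = 2;
mu = X(1:K,:);  Sigma = repmat(eye(d),[1 1 K]);  p = ones(1,K)/K;   % initial guesses

% E-step: responsibility of each component for each point (a weight in [0,1])
R = zeros(n,K);
for k = 1:K
    R(:,k) = p(k) * mvnpdf(X, mu(k,:), Sigma(:,:,k));
end
R = R ./ repmat(sum(R,2), 1, K);

% M-step: weighted means, covariances and mixing proportions
for k = 1:K
    Nk = sum(R(:,k));
    mu(k,:) = (R(:,k)' * X) / Nk;
    Xc = X - repmat(mu(k,:), n, 1);
    Sigma(:,:,k) = (Xc' * (Xc .* repmat(R(:,k), 1, d))) / Nk;
    p(k) = Nk / n;
end

In practice the two steps are repeated until the log-likelihood of the data stops increasing.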

Support Vector Machine

Introduction

We have seen that linear discriminant analysis and logistic regression both estimate linear decision boundaries in similar but slightly different ways. Separating-hyperplane classifiers provide the basis for the support vector machine (SVM). An SVM constructs a linear decision boundary that explicitly tries to separate the data into different classes while maximizing the margin of separation. The extension to the non-separable case, where the classes overlap, is what is generally known as the support vector machine. It can produce nonlinear boundaries by constructing a linear boundary in a high-dimensional, transformed version of the feature space. The solution also depends on only a fraction of the data rather than on every point, much like the difference between the median and the mean.

The original basis for the SVM was published in the 1960s by Vapnik and Chervonenkis; however, the ideas did not gain much attention until strong results were shown in the early 1990s.

Definition:
Support Vector Machines (SVM) are a set of related supervised learning methods used for classification and regression. A support vector machine constructs a maximum margin hyperplane or set of hyperplanes in a higher or infinite dimensional space. The set of points near the class boundaries, support vectors, define the model which can be used for classification, regression or other tasks.

Optimal Separating Hyperplane

Figure 28.2

Figure 28.2 An example with two classes separated by a hyperplane. The blue line is the least squares solution, which misclassifies one of the training points. Also shown are the black separating hyperplanes found by the perceptron learning algorithm with different random starts.

We can see that the data points, which fall in two classes in [math]\displaystyle{ \displaystyle \mathbb{R}^{2} }[/math], can be separated by a linear boundary. If a dataset is indeed linearly separable, then there exist infinitely many separating hyperplanes for the training data, including the two black lines shown in the figure. The question is which solution performs best when new data are introduced.

Aside:
The blue line is the least squares solution to the problem, obtained by regressing the [math]\displaystyle{ \displaystyle -1/+1 }[/math] response [math]\displaystyle{ \displaystyle Y }[/math] on [math]\displaystyle{ \displaystyle X }[/math] (with intercept); the line is given by [math]\displaystyle{ \displaystyle \{X:\hat\beta_0+\hat\beta_1X_1+\hat\beta_2X_2=0\} }[/math]. This least squares solution does not do a perfect job of separating the points, and makes one error. It is the same boundary found by linear discriminant analysis, in light of its equivalence with linear regression in the two-class case.

Classifiers such as (28.4) that compute a linear combination of the input features and return the sign were called perceptrons in the engineering literature in the late 1950s.


Identifications:

  • Hyperplane: separate two classes

[math]\displaystyle{ \displaystyle x^{T}\beta+\beta_0=0 }[/math]

  • Margin: the distance between the hyperplane and the closest point.

[math]\displaystyle{ \displaystyle d_i=x_i^{T}\beta+\beta_0 }[/math] where [math]\displaystyle{ \displaystyle i=1,....,N }[/math]

Note: [math]\displaystyle{ \displaystyle d_i }[/math] is a signed quantity: it is positive for points on the [math]\displaystyle{ \displaystyle +1 }[/math] side of the hyperplane and negative for points on the [math]\displaystyle{ \displaystyle -1 }[/math] side, so [math]\displaystyle{ \displaystyle y_id_i }[/math] is positive for correctly classified points.

  • Data points: [math]\displaystyle{ \displaystyle y_i\in\{-1,+1\} }[/math]; we can classify points as [math]\displaystyle{ \displaystyle sign\{d_i\} }[/math] if [math]\displaystyle{ \displaystyle \beta,\beta_0 }[/math] are known.

Maximum Margin Classifiers in the Linearly separable case

Choose the line farthest from both classes or choose the line which has the maximum distance from the closest point. (i.e maximize the margin)

[math]\displaystyle{ \displaystyle Margin=min\{y_id_i\} }[/math] [math]\displaystyle{ \displaystyle i=1,2,....,N }[/math] where [math]\displaystyle{ \displaystyle y_i }[/math] is label and [math]\displaystyle{ \displaystyle d_i }[/math] is distance

Figure 28.3 The linear algebra of a hyperplane


Figure 28.3 depicts a hyperplane defined by the equation [math]\displaystyle{ \displaystyle x^{T}\beta+\beta_0=0 }[/math]. Since the data are in [math]\displaystyle{ \displaystyle \mathbb{R}^{2} }[/math], the hyperplane is a line.


Let us rewrite [math]\displaystyle{ \displaystyle Margin=min\{y_id_i\} }[/math] by using the following properties:

1. [math]\displaystyle{ \displaystyle \beta }[/math] is orthogonal to the hyperplane

Consider two points [math]\displaystyle{ \displaystyle x_1,x_2 }[/math] lying on the hyperplane:

[math]\displaystyle{ \displaystyle \beta^{T}x_1+\beta_0=0 }[/math]

[math]\displaystyle{ \displaystyle \beta^{T}x_2+\beta_0=0 }[/math]

[math]\displaystyle{ \displaystyle \beta^{T}x_1+\beta_0 - (\beta^{T}x_2+\beta_0)=0 }[/math]

[math]\displaystyle{ \displaystyle \beta^{T}(x_1-x_2)=0 }[/math]

Hence,[math]\displaystyle{ \displaystyle \beta }[/math] is orthogonal to [math]\displaystyle{ \displaystyle (x_1-x_2) }[/math], and[math]\displaystyle{ \displaystyle \beta^*=\frac{\beta}{\|\beta\|} }[/math] is the vector normal to the hyperplane.

2. For any point [math]\displaystyle{ \displaystyle x_1 }[/math] on the hyperplane,

[math]\displaystyle{ \displaystyle \beta^{T}x_1+\beta_0=0 }[/math]

[math]\displaystyle{ \displaystyle \beta^{T}x_1=-\beta_0 }[/math], i.e., for any point on the hyperplane, multiplying it by [math]\displaystyle{ \displaystyle \beta^{T} }[/math] gives the negative of the intercept of the hyperplane.


3. The signed distance from any point [math]\displaystyle{ \displaystyle x_i }[/math] to the hyperplane is [math]\displaystyle{ \displaystyle d_i=\beta^{T}(x_i-x_0) }[/math], where [math]\displaystyle{ \displaystyle x_0 }[/math] is any point on the hyperplane.
Since the length of [math]\displaystyle{ \displaystyle \beta }[/math] is not fixed, we normalize it to a unit vector:

[math]\displaystyle{ \displaystyle d_i=\frac{\beta^{T}(x_i-x_0)}{\|\beta\|} }[/math] [math]\displaystyle{ \displaystyle i=1,2,....,N }[/math]

[math]\displaystyle{ \displaystyle d_i=\frac{\beta^{T}x_i-\beta^{T}x_0}{\|\beta\|} }[/math]

by property 2

[math]\displaystyle{ \displaystyle d_i=\frac{\beta^{T}x_i+\beta_0}{\|\beta\|} }[/math]


Figure 28.4


We had [math]\displaystyle{ \displaystyle Margin=min(y_id_i) }[/math] [math]\displaystyle{ \displaystyle i=1,2,....,N }[/math], and since we now know how to compute [math]\displaystyle{ \displaystyle d_i \Rightarrow }[/math]

[math]\displaystyle{ \displaystyle Margin=min\{y_i\frac{\beta^{T}x_i+\beta_0}{\|\beta\|}\} }[/math]

Suppose [math]\displaystyle{ \displaystyle x_i }[/math] is not on the hyperplane

[math]\displaystyle{ \displaystyle y_i(\beta^{T}x_i+\beta_0)\gt 0 }[/math]

[math]\displaystyle{ \displaystyle y_i(\beta^{T}x_i+\beta_0)\geq c }[/math]for [math]\displaystyle{ \displaystyle c\gt 0 }[/math]


[math]\displaystyle{ \displaystyle y_i(\frac{\beta^{T}x_i}{c}+\frac{\beta_0}{c})\geq1 }[/math]

This is known as the canonical representation of the decision hyperplane.

For [math]\displaystyle{ \displaystyle \beta }[/math] only the direction is important, so dividing [math]\displaystyle{ \displaystyle \beta }[/math] and [math]\displaystyle{ \displaystyle \beta_0 }[/math] by [math]\displaystyle{ \displaystyle c }[/math] does not change the direction, and the hyperplane stays the same. We can therefore rescale so that

[math]\displaystyle{ \displaystyle y_i(\beta^{T}x_i+\beta_0)\geq1 }[/math]

[math]\displaystyle{ \displaystyle y_i\frac{\beta^{T}x_i+\beta_0}{\|\beta\|}\geq\frac{1}{\|\beta\|} }[/math]

[math]\displaystyle{ \displaystyle Margin=min\{y_id_i\}=\frac{1}{\|\beta\|} }[/math]

so maximizing the margin is equivalent to minimizing [math]\displaystyle{ \displaystyle \|\beta\| }[/math].

Reference:
Hastie, T., Tibshirani, R., Friedman, J. (2008). The Elements of Statistical Learning, pp. 129-130.

Extension--Multi-class SVM[28]

The SVM is only directly applicable to the two-class case. We want to generalize this algorithm to multi-class tasks.

Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominating approach for doing so is to reduce the single multiclass problem into multiple binary problems. Each of the problems yields a binary classifier, which is assumed to produce an output function that gives relatively large values for examples from the positive class and relatively small values for examples belonging to the negative class. Two common methods to build such binary classifiers are where each classifier distinguishes between (i) one of the labels to the rest (one-versus-all) or (ii) between every pair of classes (one-versus-one). Classification of new instances for one-versus-all case is done by a winner-takes-all strategy, in which the classifier with the highest output function assigns the class (it is important that the output functions be calibrated to produce comparable scores). For the one-versus-one approach, classification is done by a max-wins voting strategy, in which every classifier assigns the instance to one of the two classes, then the vote for the assigned class is increased by one vote, and finally the class with most votes determines the instance classification.

Optimizing The Support Vector Machine - November 16th, 2009

So far we have derived the Support Vector Machine for the case where the two classes are separable in the given feature space. The margin can be written as [math]\displaystyle{ \,min\{y_id_i\} }[/math], the distance of the closest point from the hyperplane, where [math]\displaystyle{ \,d_i }[/math] is the signed distance and [math]\displaystyle{ \,y_i }[/math] supplies the sign.

Margin Maximizing Problem for the Support Vector Machine

[math]\displaystyle{ \,Margin=min\{y_id_i\} }[/math] can be rewritten as [math]\displaystyle{ \,min\left\{\frac{y_i\left(\beta^Tx_i+\beta_0\right)}{|\beta|}\right\} }[/math]. <br\>Note that the term [math]\displaystyle{ \,y_i\left(\beta^Tx_i+\beta_0\right) = 0 }[/math] if [math]\displaystyle{ \,x_i }[/math] is on the hyperplane, but [math]\displaystyle{ \,y_i\left(\beta^Tx_i+\beta_0\right) \gt 0 }[/math] if [math]\displaystyle{ \,x_i }[/math] is not on the hyperplane.

This implies [math]\displaystyle{ \,\exists C\gt 0 }[/math] such that [math]\displaystyle{ \,y_i\left(\beta^Tx_i+\beta_0\right) \geq C }[/math].

Divide through by C to produce [math]\displaystyle{ \,y_i\left(\frac{\beta^T}{C}x_i + \frac{\beta_0}{C}\right) \geq 1 }[/math].

[math]\displaystyle{ \,\beta, \beta_0 }[/math] define the hyperplane only up to scale, and we care only about its direction; dividing through by a constant does not change the hyperplane. Thus, by assuming scaled values for [math]\displaystyle{ \,\beta, \beta_0 }[/math] we eliminate C, so that [math]\displaystyle{ \,y_i\left(\beta^Tx_i+\beta_0\right) \geq 1 }[/math], implying that the lower bound on [math]\displaystyle{ \,y_i\left(\beta^Tx_i+\beta_0\right) }[/math] is [math]\displaystyle{ \displaystyle 1 }[/math].

Now, in order to maximize the margin, we need to maximize [math]\displaystyle{ \,\frac{1}{\|\beta\|} }[/math], which is equivalent to minimizing [math]\displaystyle{ \,\|\beta\| }[/math].

In other words, our optimization problem is now to minimize [math]\displaystyle{ \,\|\beta\| }[/math], under the constraint that [math]\displaystyle{ \,min_i\{y_i(\beta^Tx_i+\beta_0)\} = 1 }[/math].

Note that we're dealing with the norm of [math]\displaystyle{ \,\beta }[/math]. There are many possible choices of norm, in general the p-norm. The 1-norm of a vector is the sum of the absolute values of its elements (also known as the taxicab or Manhattan distance); it can encourage sparser solutions, but it has a discontinuity in its derivative. The 2-norm, or Euclidean norm (the intuitive measure of the length of a vector), is easier to work with: [math]\displaystyle{ \,\|\beta\|_2 = (\beta^T\beta)^{1/2} }[/math]. For convenience, we will minimize [math]\displaystyle{ \,\frac{1}{2}\|\beta\|^2 = \frac{1}{2}\beta^T\beta }[/math]; the constant 1/2 is added for simplification, and minimizing this squared quantity is equivalent to minimizing the norm itself.

This is an example of a quadratic programming problem and we will minimize a quadratic function subject to linear inequality constraints.


Writing Lagrangian Form of Support Vector Machine

The Lagrangian form is introduced to ensure that the optimization conditions are satisfied, as well as finding an optimal solution (the optimal saddle point of the Lagrangian for the classic quadratic optimization). The problem will be solved in dual space by introducing [math]\displaystyle{ \,\alpha_i }[/math] as dual constraints, this is in contrast to solving the problem in primal space as function of the betas. A simple algorithm for iteratively solving the Lagrangian has been found to run well on very large data sets, making SVM more usable. Note that this algorithm is intended to solve Support Vector Machines with some tolerance for errors - not all points are necessarily classified correctly. Several papers by Mangasarian explore different algorithms for solving SVM.

[math]\displaystyle{ \,L(\beta,\beta_0,\alpha) = \frac{1}{2}\|\beta\|^2 - \sum_{i=1}^n{\alpha_i\left(y_i(\beta^Tx_i+\beta_0)-1\right)} }[/math]. To find the optimal value, set the derivative equal to zero.

[math]\displaystyle{ \,\frac{\partial L}{\partial \beta} = 0 }[/math], [math]\displaystyle{ \,\frac{\partial L}{\partial \beta_0} = 0 }[/math]. Note that [math]\displaystyle{ \,\frac{\partial L}{\partial \alpha_i} }[/math] is equivalent to the constraints [math]\displaystyle{ \left(y_i(\beta^Tx_i+\beta_0)-1\right) \geq 0, \,\forall\, i }[/math]

First, [math]\displaystyle{ \,\frac{\partial L}{\partial \beta} = \frac{\partial}{\partial \beta}\frac{1}{2}\|\beta\|^2 - \sum_{i=1}^n{\left\{\frac{\partial}{\partial \beta}(\alpha_iy_i\beta^Tx_i)+\frac{\partial}{\partial \beta}\alpha_iy_i\beta_0-\frac{\partial}{\partial \beta}\alpha_iy_i\right\}} }[/math]

[math]\displaystyle{ \frac{\partial}{\partial \beta}\frac{1}{2}\|\beta\|^2 = \beta }[/math].
[math]\displaystyle{ \,\frac{\partial}{\partial \beta}(\alpha_iy_i\beta^Tx_i) = \alpha_iy_ix_i }[/math]
[math]\displaystyle{ \,\frac{\partial}{\partial \beta}\alpha_iy_i\beta_0 = 0 }[/math].
[math]\displaystyle{ \,\frac{\partial}{\partial \beta}\alpha_iy_i = 0 }[/math].

So this simplifies to [math]\displaystyle{ \,\frac{\partial L}{\partial \beta} = \beta - \sum_{i=1}^n{\alpha_iy_ix_i} = 0 }[/math]. In other words,

[math]\displaystyle{ \,\beta = \sum_{i=1}^n{\alpha_iy_ix_i} }[/math], [math]\displaystyle{ \,\beta^T = \sum_{i=1}^n{\alpha_iy_ix_i^T} }[/math]

Similarly, [math]\displaystyle{ \,\frac{\partial L}{\partial \beta_0} = -\sum_{i=1}^n{\alpha_iy_i} = 0 }[/math].

This allows us to rewrite the Lagrangian without [math]\displaystyle{ \,\beta }[/math].

[math]\displaystyle{ \,\frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} - \sum_{i=1}^n{\alpha_i\left[y_i\left(\sum_{j=1}^n{\alpha_jy_jx_j^Tx_i} + \beta_0\right) - 1\right]} }[/math].

Because [math]\displaystyle{ \,\sum_{i=1}^n{\alpha_iy_i} = 0 }[/math], and [math]\displaystyle{ \,\beta_0 }[/math] is constant, [math]\displaystyle{ \,\sum_{i=1}^n{\alpha_iy_i\beta_0} = 0 }[/math]. So this simplifies further, to

[math]\displaystyle{ L(\alpha) = \,-\frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} + \sum_{i=1}^n{\alpha_i} }[/math], a dual representation of the maximum-margin problem.

Because [math]\displaystyle{ \,\alpha_i }[/math] is the Lagrange multiplier, [math]\displaystyle{ \,\alpha_i \geq 0 \forall i }[/math].

This is a much simpler optimization problem

Extension:Global Optimization of Support Vector Machines(Using Genetic Algorithms for Bankruptcy Prediction)

One of the most important research issues in finance is building accurate corporate bankruptcy prediction models, since they are essential for the risk management of financial institutions. Researchers have therefore applied various data-driven approaches to enhance prediction performance, including statistical and artificial-intelligence techniques. Recently, support vector machines (SVMs) have become popular because they use a risk function consisting of the empirical error and a regularization term derived from the structural risk minimization principle. In addition, they do not require huge training samples and have little risk of overfitting. However, in order to use an SVM, a user must determine several factors by heuristics, such as the parameters of the kernel function, an appropriate feature subset, and a proper instance subset, which hinders accurate prediction.

There is a paper [[29]] proposing a novel approach to enhance the prediction performance of SVM for the prediction of financial distress. The suggestion is to simultaneously optimize the feature selection, the instance selection, and the parameters of the kernel function for the SVM by using genetic algorithms (GAs). The authors apply their model to a real-world case; experimental results show that the prediction accuracy of a conventional SVM may be improved significantly by using their model.

Extension: Finding Optimal Parameter Values

The accuracy of an SVM model depends on the selection of the model parameters. DTREG provides two methods for finding optimal parameter values. Grid search tries values of each parameter across the specified search range using geometric steps. Pattern search (also called compass search or line search) starts at the center of the search range and makes trial steps in each direction for each parameter.

If the fit of the model improves, the search center moves to the new point and the process is repeated. If no improvement is found, the step size is reduced and the search is tried again. The pattern search stops when the search step size is reduced to a specified tolerance.

Positives and Negatives When Optimizing SVM[30]

  • (Pos) Appears to avoid overfitting in high dimensional spaces and generalize well using a small training set (the complexity of SVM is characterized by the number of support vectors rather than the dimensionality of the transformed space -- no formal theory to justify this).
  • (Pos) Global optimization method, no local optima (SVM are based on exact optimization, not approximate methods).
  • (Neg) Applying trained classifiers can be expensive.

The Support Vector Machine algorithm - November 18, 2009

Lagrange Duality

In convex optimization, consider the primal optimization problem:

[math]\displaystyle{ \,\min_\omega }[/math] [math]\displaystyle{ \,f(\omega) }[/math]

[math]\displaystyle{ \,s.t. }[/math] [math]\displaystyle{ \,g_i(\omega) \leq 0 }[/math]

[math]\displaystyle{ \,h_i(\omega) = 0 }[/math]

Define the generalized Lagrangian to be

[math]\displaystyle{ L(\omega,\alpha,\beta) = f(\omega)+\sum_{i=1}^k\alpha_ig_i(\omega)+\sum_{i=1}^k\beta_ih_i(\omega) }[/math]

Then the dual optimization problem is

[math]\displaystyle{ \ \max_{\alpha,\beta} }[/math] [math]\displaystyle{ \ \min_{\omega} }[/math] [math]\displaystyle{ \ L(\omega,\alpha,\beta) }[/math]

Now, instead of solving the primal problem, we can solve the dual problem without changing the solution, as long as the Karush-Kuhn-Tucker (KKT) conditions are satisfied:

[math]\displaystyle{ \frac{\partial}{\partial\omega_i}L(\omega,\alpha,\beta)=0 }[/math]

[math]\displaystyle{ \frac{\partial}{\partial\beta_i}L(\omega,\alpha,\beta)=0 }[/math]

[math]\displaystyle{ \,\alpha_ig_i(\omega)=0 }[/math]

[math]\displaystyle{ g_i(\omega) \leq 0 }[/math]

[math]\displaystyle{ \,\alpha_i \geq 0 }[/math]

We are interested in the dual form because it gives a bound on the primal problem and in some cases is easier to solve. For more information about convex optimization, see the book by Boyd: [31]

Solving the Lagrangian

Continuing from the above derivation, we now have the equation that we need to minimize, as well as two constraints.

The Support Vector Machine problem boils down to:

[math]\displaystyle{ \max_{\alpha} L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} }[/math]

such that [math]\displaystyle{ \alpha_i \geq 0 }[/math]
and [math]\displaystyle{ \sum_{i=1}^n{\alpha_i y_i} = 0 }[/math]

We are solving for [math]\displaystyle{ \,\alpha }[/math], which is our only unknown. Once we know [math]\displaystyle{ \,\alpha }[/math], we can easily find [math]\displaystyle{ \,\beta }[/math] and [math]\displaystyle{ \,\beta_0 }[/math] (see the Support Vector algorithm below for complete details).

If we examine the Lagrangian equation, we can see that [math]\displaystyle{ \,\alpha }[/math] is multiplied by itself; that is, the Lagrangian is quadratic with respect to [math]\displaystyle{ \,\alpha }[/math]. Our constraints are linear. Maximizing [math]\displaystyle{ \,L(\alpha) }[/math] is equivalent to minimizing [math]\displaystyle{ \,-L(\alpha) }[/math], a convex quadratic function, so this problem can be solved through quadratic programming techniques. We will examine how to do this in Matlab shortly.

We can write the Lagrangian equation in matrix form:

[math]\displaystyle{ L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} }[/math]

such that [math]\displaystyle{ \underline{\alpha} \geq \underline{0} }[/math]
and [math]\displaystyle{ \underline{\alpha}^T\underline{y} = 0 }[/math]

Where:

  • [math]\displaystyle{ \underline{\alpha} }[/math] denotes an [math]\displaystyle{ \,n \times 1 }[/math] vector; [math]\displaystyle{ \underline{\alpha}^T = [\alpha_1, ..., \alpha_n] }[/math]
  • Matrix [math]\displaystyle{ S }[/math] has entries [math]\displaystyle{ S_{ij} = y_iy_jx_i^Tx_j = (y_ix_i)^T(y_jx_j) }[/math]
  • [math]\displaystyle{ \,\underline{0} }[/math] and [math]\displaystyle{ \,\underline{1} }[/math] are vectors containing all 0s or all 1s respectively

Using this matrix notation, we can use Matlab's built in quadratic programming routine, quadprog.

Quadprog example

Let's use quadprog to find the solution to [math]\displaystyle{ \,L(\alpha) }[/math].

Matlab's quadprog function minimizes an equation of the following form:

[math]\displaystyle{ \min_x\frac{1}{2}x^THx+f^Tx }[/math]
such that: [math]\displaystyle{ \,A \cdot x \leq b }[/math], [math]\displaystyle{ \,A_{eq} \cdot x = b_{eq} }[/math] and [math]\displaystyle{ \,lb \leq x \leq ub }[/math]

We can now see why we kept the [math]\displaystyle{ \frac{1}{2} }[/math] constant in the original derivation: maximizing [math]\displaystyle{ \,L(\alpha) }[/math] is the same as minimizing [math]\displaystyle{ \,\frac{1}{2}\underline{\alpha}^TS\underline{\alpha} - \underline{1}^T\underline{\alpha} }[/math], which is exactly the form quadprog expects, with [math]\displaystyle{ \,H = S }[/math] and [math]\displaystyle{ \,f = -\underline{1} }[/math].

The function is called as such: x = quadprog(H,f,A,b,Aeq,beq,lb,ub). The variables correspond to values in the equation above.

We can now try to find the solution to [math]\displaystyle{ \,L(\alpha) }[/math]. (Note the sign: since quadprog minimizes and we want to maximize [math]\displaystyle{ \,L(\alpha) }[/math], the linear term passed to quadprog must be [math]\displaystyle{ \,f = -\underline{1} }[/math].)

We'll use a simple one-dimensional data set, which is essentially x = -1 or x = +1 plus Gaussian noise, with y holding the class labels. (Note: you could put the values straight into the quadprog call; they are separated for clarity.)

x = [mvnrnd([-1],[0.01],100); mvnrnd([1],[0.01],100)]';  % 1-by-200 feature values
y = [-ones(100,1); ones(100,1)];                          % 200-by-1 class labels
S = (y .* x') * (y .* x')';    % S(i,j) = y_i y_j x_i' x_j
f = -ones(200,1);              % quadprog minimizes, so the linear term is negated
A = [];                        % no inequality constraints beyond the bounds
b = [];
Aeq = y';                      % equality constraint: sum_i alpha_i y_i = 0
beq = 0;
lb = zeros(200,1);             % alpha_i >= 0
ub = [];                       % there is no upper bound in the separable case
alpha = quadprog(S,f,A,b,Aeq,beq,lb,ub);

This gives us the optimal [math]\displaystyle{ \,\alpha }[/math]. Most entries should be (numerically) zero; the nonzero entries correspond to the support vectors. Two pitfalls to watch for are the sign of the linear term (quadprog minimizes, so [math]\displaystyle{ \,f }[/math] must be the negated vector of ones) and the lower bound, which should be supplied as a vector of zeros rather than a scalar so that every component of [math]\displaystyle{ \,\alpha }[/math] is bounded.

Examining K.K.T. conditions

Karush-Kuhn-Tucker conditions (more info) give us a closer look into the Lagrangian equation and the associated conditions.

Suppose we are looking to minimize [math]\displaystyle{ \,f(x) }[/math] such that [math]\displaystyle{ \,g_i(x) \geq 0, \forall{x} }[/math]. If [math]\displaystyle{ \,f }[/math] and [math]\displaystyle{ \,g }[/math] are differentiable, then the necessary conditions for [math]\displaystyle{ \hat{x} }[/math] to be a local minimum are:

  1. At the optimal point, [math]\displaystyle{ \frac{\partial L}{\partial \hat{x}} = 0 }[/math]; i.e. [math]\displaystyle{ f'(\hat{x}) - \sum{\alpha_ig'(\hat{x})}=0 }[/math]
  2. [math]\displaystyle{ \alpha_i \geq 0 }[/math]. (Dual Feasibility)
  3. [math]\displaystyle{ \alpha_ig_i(\hat{x}) = 0, \forall{i} }[/math] (Complementary Slackness)
  4. [math]\displaystyle{ g_i(\hat{x}) \geq 0 }[/math] (Primal Feasibility)

If any of these conditions is violated, then [math]\displaystyle{ \hat{x} }[/math] cannot be an optimal solution.

These are all trivial except for condition 3. Let's examine it further in our support vector machine problem.

Support Vectors

Support vectors are the training points that determine the optimal separating hyperplane that we seek. Also, they are the most difficult points to classify or the most informative for the classification.

In our case, the [math]\displaystyle{ g_i(\hat{x}) }[/math] function is:

[math]\displaystyle{ \,g_i(x) = y_i(\beta^Tx_i+\beta_0)-1 }[/math]

Substituting [math]\displaystyle{ \,g_i }[/math] into KKT condition 3, we get [math]\displaystyle{ \,\alpha_i[y_i(\beta^Tx_i+\beta_0)-1] = 0 }[/math]. <br\>In order for this condition to be satisfied either
[math]\displaystyle{ \,\alpha_i= 0 }[/math] or
[math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0)=1 }[/math]

In the canonical form every point satisfies [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) \geq 1 }[/math]; that is, all points [math]\displaystyle{ \,x_i }[/math] lie on the margin or farther from the hyperplane.

Case 1: a point with [math]\displaystyle{ \displaystyle y_i(\beta^Tx_i+\beta_0) \gt 1 }[/math] (off the margin)

If [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) \gt 1 \Rightarrow \alpha_i = 0 }[/math].

If point [math]\displaystyle{ \, x_i }[/math] is not on the margin, then the corresponding [math]\displaystyle{ \,\alpha_i=0 }[/math].

Case 2: a point with [math]\displaystyle{ \displaystyle y_i(\beta^Tx_i+\beta_0) = 1 }[/math] (on the margin)

If [math]\displaystyle{ \,\alpha_i \gt 0 \Rightarrow y_i(\beta^Tx_i+\beta_0) = 1 }[/math] <br\>If point [math]\displaystyle{ \, x_i }[/math] is on the margin, then the corresponding [math]\displaystyle{ \,\alpha_i\gt 0 }[/math].


Points on the margin, with corresponding [math]\displaystyle{ \,\alpha_i \gt 0 }[/math], are called support vectors.

Using support vectors

Support vectors are important because they make the support vector machine insensitive to points far from the boundary. If [math]\displaystyle{ \,\alpha_i = 0 }[/math] for a point, that point contributes nothing to [math]\displaystyle{ \,\beta }[/math] and hence nothing to the solution of the SVM problem; only points on the margin, the support vectors, contribute. Hence the model given by the SVM is entirely defined by the set of support vectors, a subset of the entire training set. This is interesting because in the neural-network methods seen previously (and more generally in classical statistical learning) the configuration of the model had to be specified in advance. Here we have a more data-driven, 'nonparametric' model: the training set and the algorithm determine the support vectors, instead of fitting a fixed set of parameters chosen by cross-validation or other error-minimization procedures.

References: Wang, L. (2005). Support Vector Machines: Theory and Applications. Springer, p. 3.

The support vector machine algorithm

  1. Solve the quadratic programming problem: maximize [math]\displaystyle{ L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} }[/math] such that [math]\displaystyle{ \alpha_i \geq 0 }[/math] and [math]\displaystyle{ \sum_{i=1}^n{\alpha_i y_i} = 0 }[/math]
    1. Use Matlab's quadprog to find the optimal [math]\displaystyle{ \,\underline{\alpha} }[/math]
  2. Find [math]\displaystyle{ \beta = \sum_{i=1}^n{\alpha_iy_i\underline{x_i}} }[/math]
  3. Find [math]\displaystyle{ \,\beta_0 }[/math] by choosing a support vector (a point with [math]\displaystyle{ \,\alpha_i \gt 0 }[/math]) and solving [math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) = 1 }[/math]
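
A minimal MATLAB sketch of steps 2 and 3, assuming alpha has already been returned by quadprog and that x (d-by-n, one training point per column) and y (n-by-1) hold the training data; all variable names here are ours:

% Recover beta and beta_0 from the dual solution alpha (sketch).
tol   = 1e-6;                      % numerical threshold for "alpha_i > 0"
beta  = x * (alpha .* y);          % step 2: beta = sum_i alpha_i y_i x_i
sv    = find(alpha > tol);         % indices of the support vectors
i     = sv(1);                     % any support vector will do
beta0 = y(i) - beta' * x(:,i);     % step 3: from y_i (beta' x_i + beta0) = 1 and y_i = +/-1
yhat  = sign(x' * beta + beta0);   % classify the training points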

Example in Matlab

The following code, taken from the lecture, shows how to use Matlab's built-in SVM routines (found in the Bioinformatics Toolbox) to do classification through support vector machines.

load 2_3;                                      % load the course's 2_3 dataset (matrix X, 400 points)
[U,Y] = princomp(X');                          % principal components of the data
data = Y(:,1:2);                               % keep the first two principal components
l = [-ones(1,200) ones(1,200)];                % class labels: -1 for the first 200 points, +1 for the rest
[train,test] = crossvalind('holdOut',400);
% Gives logical indices of train and test; train is 1 where the point is used for training
svmStruct = svmtrain(data(train,:), l(train), 'showPlot', true);
% (figure: the plot produced by training on the 2_3 data's first two features)
yh = svmclassify(svmStruct, data(test,:), 'showPlot', true);
% (figure: the plot produced by classifying the held-out 2_3 data)

SVM in Gene Selection

DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues.

Extension: Support Vector Machines for Pattern Recognition

[32] This paper works through linear Support Vector Machines for separable and non-separable data using a non-trivial example in detail; it also describes a mechanical analogy and discusses when SVM solutions are unique and when they are global. From this paper we can learn how support vector training can be practically implemented, and about the kernel mapping technique used to construct SVM solutions that are nonlinear in the data.

Results of some experiments inspired by these arguments are also presented. The author gives numerous examples and proofs of most of the key theorems, and hopes that readers will find old material cast in a fresh light, since the paper also includes some new material.

Limitation of SVM algorithm [33]

  • The biggest limitation of SVM lies in the choice of the kernel (the best choice of kernel for a given problem is still a research problem).
  • A second limitation is speed and size (mostly in training - for large training sets, it typically selects a small number of support vectors, thereby minimizing the computational requirements during testing).

Non-linear hypersurfaces and Non-Separable classes - November 20, 2009

Kernel Trick

We talked about the curse of dimensionality at the beginning of this course, however, we now turn to the power of high dimensions in order to find a linearly separable hyperplane between two classes of data points. To understand this, imagine a two dimensional prison where a two dimensional person is constrained. Suppose magically we give the person a third dimension, then he can escape from the prison. In other words, the prison and the person are linearly separable now with respect to the third dimension. The intuition behind the "kernel trick" is basically to map data to a higher dimension so that they are linearly separable by a hyperplane.

(Figure captions: imagine the point is a person; they're stuck. Escape through the third dimension! It is not possible to put a hyperplane through these points, but after a simple transformation a perfect classification plane can be found.)

The original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick to maximum-margin hyperplanes. The algorithm is very similar, except that every dot product is replaced by a non-linear kernel function as below. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. We have seen SVM as a linear classification problem that finds the maximum margin hyperplane in the given input space. However, for many real world problems a more complex decision boundary is required. The following simple method was devised in order to solve the same linear classification problem but in a higher dimensional space, a 'feature space', under which the maximum margin hyperplane is better suited.

Let [math]\displaystyle{ \,\phi }[/math] be a mapping,

[math]\displaystyle{ \phi:\mathbb{R}^d \rightarrow \mathbb{R}^D }[/math]

We wish to find a [math]\displaystyle{ \,\phi }[/math] such that the transformed data are suited for separation by a hyperplane. Given this function, we are led to solve the previous constrained quadratic optimization on the transformed dataset,

[math]\displaystyle{ \max_{\alpha} L(\alpha) = \sum_{i=1}^n{\alpha_i} - \frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_j\phi(x_i)^T\phi(x_j)}} }[/math] such that [math]\displaystyle{ \alpha_i \geq 0 }[/math] and [math]\displaystyle{ \sum_{i=1}^n{\alpha_i y_i} = 0 }[/math]

The solution to this optimization problem is now well known; however a workable [math]\displaystyle{ \,\phi }[/math] must be determined. Possibly the largest drawback in this method is that we must compute the inner product of two vectors in the high dimensional space. As the number of dimensions in the initial data set increases, the inner product becomes computationally intensive or impossible.

However, we have a very useful result that says that there exists a class of functions, [math]\displaystyle{ \,\Phi }[/math], which satisfy the above requirements and that for any function [math]\displaystyle{ \,\phi \in \Phi }[/math],

[math]\displaystyle{ \,\phi(x_i)^T\phi(x_j) = K(x_i,x_j) }[/math]

Where K is the kernel function in the input space satisfying Mercer's condition (to guarantee that it indeed corresponds to certain mapping function [math]\displaystyle{ \,\phi }[/math]). As a result, if the objective function depends on inner products but not on coordinates, we can always use the kernel function to implicitly calculate in the feature space without storing the huge data. Not only does this solve the computation problems but it no longer requires us to explicitly determine a specific mapping function in order to use this method. In fact, it is now possible to use an infinite dimensional feature space in SVM without even knowing the function [math]\displaystyle{ \,\phi }[/math].


Popular kernel choices in the SVM

The SVM relies only on the inner product between vectors [math]\displaystyle{ \ x_i^Tx_j }[/math]. If every data point is mapped into a high-dimensional space via some transformation [math]\displaystyle{ \ \phi }[/math], the inner product becomes [math]\displaystyle{ \ K(x_i,x_j)= \phi(x_i)^T\phi (x_j) }[/math], where [math]\displaystyle{ \ K(x_i,x_j) }[/math] is called the kernel function. For the SVM, we only need to specify the kernel [math]\displaystyle{ \ K(x_i,x_j) }[/math], without needing to know the corresponding non-linear mapping [math]\displaystyle{ \ \phi (x) }[/math].

There are many types of kernels that can be used in Support Vector Machines models. These include linear, polynomial, radial basis function (RBF) and sigmoid functions.

Linear: [math]\displaystyle{ \ K(\underline{x}_{i},\underline{x}_{j})= \underline{x}_{i}^T\underline{x}_{j} }[/math],

Polynomial: [math]\displaystyle{ \ K(\underline{x}_{i},\underline{x}_{j})= (\gamma\underline{x}_{i}^T\underline{x}_{j}+r)^{d}, \gamma \gt 0 }[/math],

Radial Basis: [math]\displaystyle{ \ K(\underline{x}_{i},\underline{x}_{j})= exp(-\gamma \|\underline{x}_i - \underline{x}_j\|^{2}), \gamma \gt 0 }[/math],

Gaussian kernel: [math]\displaystyle{ \ K(x_i,x_j)=exp(\frac{-||x_i-x_j||^2}{2\sigma^2 }) }[/math],

Two-layer perceptron: [math]\displaystyle{ \ K(x_i,x_j)=tanh(\alpha x_ix_j+\beta) }[/math],

Sigmoid: [math]\displaystyle{ \ K(\underline{x}_{i},\underline{x}_{j})= tanh(\gamma\underline{x}_{i}^T\underline{x}_{j}+r) }[/math].

Here, [math]\displaystyle{ \ \gamma }[/math], [math]\displaystyle{ \ r }[/math], and [math]\displaystyle{ \ d }[/math] are all kernel parameters.

The RBF is by far the most popular choice of kernel type used in Support Vector Machines, mainly because of its localized and finite response across the entire range of the real x-axis. The art of flexible modeling using basis expansions consists of picking an appropriate family of basis functions, and then controlling the complexity of the representation by selection, regularization, or both. Some families of basis functions have elements that are defined locally; for example, [math]\displaystyle{ \displaystyle B }[/math]-splines are defined locally in [math]\displaystyle{ \displaystyle R }[/math]. If more flexibility is desired in a particular region, then that region needs to be represented by more basis functions (which in the case of [math]\displaystyle{ \displaystyle B }[/math]-splines translates to more knots). Kernel methods achieve flexibility by fitting simple models in a region local to the target point [math]\displaystyle{ \displaystyle x_0 }[/math]. Localization is achieved via a weighting kernel [math]\displaystyle{ \displaystyle K }[/math], and individual observations receive weights [math]\displaystyle{ \displaystyle K(x_0,x_i) }[/math]. RBF networks combine these ideas by treating the kernel functions as basis functions.


Once we have chosen the kernel function, we don't need to figure out what [math]\displaystyle{ \,\phi }[/math] is; we just use [math]\displaystyle{ \,\phi(\underline{x}_i)^T\phi(\underline{x}_j) = K(\underline{x}_i,\underline{x}_j) }[/math] in place of [math]\displaystyle{ \,\underline{x}_i^T\underline{x}_j }[/math].
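
As a sketch, here is how one might form a Gaussian-kernel Gram matrix in MATLAB and use it in place of [math]\displaystyle{ \,\underline{x}_i^T\underline{x}_j }[/math] in the matrix S of the dual; sigma and the variable names are our choices:

% Gaussian (RBF) kernel Gram matrix (sketch). x is d-by-n, y is n-by-1.
sigma = 1;
n  = size(x, 2);
sq = sum(x.^2, 1);                                       % 1-by-n vector of squared norms
D2 = repmat(sq', 1, n) + repmat(sq, n, 1) - 2*(x'*x);    % pairwise squared distances
K  = exp(-D2 / (2*sigma^2));                             % K(i,j) = k(x_i, x_j)
S  = (y*y') .* K;                                        % replaces y_i y_j x_i' x_j in the dual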

Since the transformation chosen is dependent on the shape of the data, the only automated way to choose an appropriate kernel is by trial and error. Otherwise it is chosen manually.

Mercer's Theorem in detail

Let [math]\displaystyle{ \,\phi }[/math] be a mapping to a high dimensional Hilbert space [math]\displaystyle{ \,H }[/math]


[math]\displaystyle{ \phi:x \in \mathbb{R}^d \rightarrow H }[/math]

The transformed coordinates can be defined as,

[math]\displaystyle{ \phi_1(x)\dots\phi_d(x)\dots }[/math]

By Hilbert - Schmidt theory we can represent an inner product in Hilbert space as,

[math]\displaystyle{ \,\phi(x_i)^T\phi(x_j) = \sum_{r=1}^{\infty}a_r\phi_r(x_i)\phi_r(x_j) \Leftrightarrow K(x_i,x_j), \ a_r \ge 0 }[/math]

where K is symmetric, then Mercer's theorem gives necessary and sufficient conditions on K for it to satisfy the above relation.

Mercer's Theorem

Let C be a compact subset of [math]\displaystyle{ \mathbb{R}^d }[/math] and K a function [math]\displaystyle{ \in L^2(C) }[/math], if

[math]\displaystyle{ \, \int_C\int_C K(u,v)g(u)g(v)dudv \ge 0, \ \forall g \in L^2(C) }[/math]

then,

[math]\displaystyle{ \sum_{r=1}^{\infty}a_r\phi_r(u)\phi_r(v) }[/math] converges absolutely and uniformly to a symmetric function [math]\displaystyle{ \,K(u,v) }[/math]

References: Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons, p. 423.

Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London, A 209: 415-446.

Kernel Functions

There are various kernel functions, for example:

  • Linear kernel: [math]\displaystyle{ \,k(x,y)=x \cdot y }[/math]
  • Polynomial kernel: [math]\displaystyle{ \,k(x,y)=(x \cdot y)^d }[/math]
  • Gaussian kernel: [math]\displaystyle{ e^{-\frac{|x-y|^2}{2\sigma^2}} }[/math]

If [math]\displaystyle{ \,X }[/math] is a [math]\displaystyle{ \,d \times n }[/math] matrix in the original space, and [math]\displaystyle{ \,\phi(X) }[/math] is the corresponding [math]\displaystyle{ \,D \times n }[/math] matrix in the Hilbert space, then [math]\displaystyle{ \,\phi^T(X) \cdot \phi(X) }[/math] is an [math]\displaystyle{ \,n \times n }[/math] matrix. The inner product can be interpreted as a correlation, measuring the similarity between data points. This gives us some insight into how to choose the kernel: the choice depends on prior knowledge of the problem and on how we believe the similarity of our data should be measured. In practice, the Gaussian (RBF) kernel usually works well. Besides the common kernel functions mentioned above, many novel kernels have also been suggested for different problem domains, such as text classification and gene classification.

These kernel functions can be applied to many algorithms to derive the "kernel version". For example, kernel PCA, kernel LDA, etc.

SVM: non-separable case

We have seen how SVMs are able to find an optimally separating hyperplane of two separable classes of data, in which case the margin contains no data points. However, in the real world, data of different classes are usually mixed together at the boundary and it's hard to find a perfect boundary to totally separate them. To address this problem, we slacken the classification rule to allow data cross the margin. Mathematically the problem becomes,

[math]\displaystyle{ \min_{\beta, \beta_0} \frac{1}{2}|\beta|^2 }[/math]
[math]\displaystyle{ \,y_i(\beta^Tx_i+\beta_0) \geq 1-\xi_i }[/math]
[math]\displaystyle{ \xi_i \geq 0 }[/math]

Now each data point can have some error [math]\displaystyle{ \,\xi_i }[/math]. However, we only want data to cross the boundary when they have to and make the minimum sacrifice; thus, a penalty term is added correspondingly in the objective function to constrain the number of points that cross the margin. The optimization problem now becomes:

(Figure: the non-separable case)
[math]\displaystyle{ \min_{\beta,\beta_0,\xi} \frac{1}{2}|\beta|^2+\gamma\sum_{i=1}^n{\xi_i} }[/math]
[math]\displaystyle{ \,s.t. }[/math] [math]\displaystyle{ y_i(\beta^Tx+\beta_0) \geq 1-\xi_i }[/math]
[math]\displaystyle{ \xi_i \geq 0 }[/math]

<br\>Note that [math]\displaystyle{ \,\xi_i }[/math] is not necessarily smaller than one, which means data can not only enter the margin but can also cross the separating hyperplane.

<br\>Note that [math]\displaystyle{ \,\gamma \Rightarrow \infty }[/math] recovers the separable case, forcing all [math]\displaystyle{ \,\xi_i = 0 }[/math]. In general, a larger [math]\displaystyle{ \,\gamma }[/math] penalizes margin violations more heavily, pushing the solution toward the hard-margin (separable) solution.

Aside: More information about SVM and Kernels

SVMs are currently among the best performers on many benchmark datasets and have been extended to a number of tasks such as regression. The kernel trick is perhaps the most attractive aspect of SVMs; the idea has since been applied to many other learning models in which the inner product appears, and these are called 'kernel' methods. Tuning SVMs remains the main research focus: how do we choose an optimal kernel? The kernel should match the smoothness structure of the data.

Support Vector Machine algorithm for non-separable cases - November 23, 2009

With the formulation above, we can form the Lagrangian, apply the KKT conditions, and come up with a new function to optimize. As we will see, the equation that we optimize in the SVM algorithm for non-separable data sets is the same as in the separable case, with slightly different constraints.

Forming the Lagrangian

In this case we have two constraints in the Lagrangian, and therefore we optimize with respect to two sets of dual variables, [math]\displaystyle{ \,\alpha }[/math] and [math]\displaystyle{ \,\lambda }[/math]:

[math]\displaystyle{ L: \frac{1}{2} |\beta|^2 + \gamma \sum_{i=1}^n \xi_i - \sum_{i=1}^n \alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]-\sum_{i=1}^n \lambda_i \xi_i }[/math]
[math]\displaystyle{ \alpha_i \geq 0, \lambda_i \geq 0 }[/math]

Applying KKT conditions[34]

  1. [math]\displaystyle{ \frac{\partial L}{\partial p} = 0 }[/math] at an optimal solution [math]\displaystyle{ \, \hat p }[/math], for each primal variable [math]\displaystyle{ \,p = \{\beta, \beta_0, \xi\} }[/math]
    [math]\displaystyle{ \frac{\partial L}{\partial \beta}=\beta - \sum_{i=1}^n \alpha_i y_i x_i = 0 \Rightarrow \beta=\sum_{i=1}^n\alpha_i y_i x_i }[/math] <br\>[math]\displaystyle{ \frac{\partial L}{\partial \beta_0}=-\sum_{i=1}^n \alpha_i y_i =0 \Rightarrow \sum_{i=1}^n \alpha_i y_i =0 }[/math] since the sign does not make a difference
    [math]\displaystyle{ \frac{\partial L}{\partial \xi_i}=\gamma - \alpha_i - \lambda_i \Rightarrow \gamma = \alpha_i+\lambda_i }[/math]. This is the only new condition added here
  2. [math]\displaystyle{ \,\alpha_i \geq 0, \lambda_i \geq 0 }[/math], dual feasibility
  3. [math]\displaystyle{ \alpha_i g_i(\hat p) = 0 }[/math], complementary slackness: [math]\displaystyle{ \,\alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]=0 }[/math] and [math]\displaystyle{ \,\lambda_i \xi_i=0 }[/math]
  4. [math]\displaystyle{ \, g_i(\hat p) \geq 0 }[/math], primal feasibility. [math]\displaystyle{ \,y_i( \beta^T x_i+ \beta_0)-1+ \xi_i \geq 0 }[/math]

Putting it all together

With our KKT conditions and the Lagrangian equation, we can now use quadratic programming to find [math]\displaystyle{ \,\alpha }[/math]. <br\> Similar to what we did for the separable case, after applying the KKT conditions we substitute the primal variables, expressed in terms of the dual variables, back into the Lagrangian and simplify.


In matrix form, we want to solve the following optimization:

[math]\displaystyle{ \max_{\underline{\alpha}}\; L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} }[/math]
[math]\displaystyle{ \,s.t. }[/math] [math]\displaystyle{ \underline{0} \leq \underline{\alpha} \leq \gamma }[/math], [math]\displaystyle{ \underline{\alpha}^T\underline{y} = 0 }[/math]

Solving this gives us [math]\displaystyle{ \,\underline{\alpha} }[/math], which we can use to find [math]\displaystyle{ \,\underline{\beta} }[/math] as before:

[math]\displaystyle{ \,\underline{\beta} = \sum{\alpha_i y_i \underline{x_i}} }[/math]

However, we cannot find [math]\displaystyle{ \,\beta_0 }[/math] in the same way as before, even if we choose a point with [math]\displaystyle{ \,\alpha_i \gt 0 }[/math], because we do not know the value of [math]\displaystyle{ \,\xi_i }[/math] in the equation

[math]\displaystyle{ \,y_i(\underline{\beta}^Tx_i + \beta_0) - 1 + \xi_i = 0 }[/math]

From our discussion on the KKT conditions, we know that [math]\displaystyle{ \,\lambda_i \xi_i = 0 }[/math] and [math]\displaystyle{ \,\gamma = \alpha_i + \lambda_i }[/math].

So, if [math]\displaystyle{ \,\alpha_i \lt \gamma }[/math] then [math]\displaystyle{ \,\lambda_i \gt 0 }[/math] and consequently [math]\displaystyle{ \,\xi_i = 0 }[/math].

Therefore, we can solve for [math]\displaystyle{ \,\beta_0 }[/math] if we choose a point where:

[math]\displaystyle{ \,0 \lt \alpha_i \lt \gamma }[/math]

Note

  • When [math]\displaystyle{ \,0 \lt \alpha_i \lt \gamma }[/math], we are considering a point that is on the margin.
  • If [math]\displaystyle{ \,\alpha_i = \gamma }[/math] then [math]\displaystyle{ \,\lambda_i = 0 }[/math], so [math]\displaystyle{ \,\xi_i }[/math] is free to be positive: the point may have crossed the margin.
  • In this case, a local optimum is also a global optimum: since [math]\displaystyle{ \,S }[/math] is positive semidefinite, [math]\displaystyle{ \,-L(\alpha) }[/math] is convex and the quadratic program has no spurious local optima.

The SVM algorithm for non-separable data sets

The algorithm, then, for non-separable data sets is:

  1. Use quadprog (or another quadratic programming technique) to solve the above optimization and find [math]\displaystyle{ \,\alpha }[/math]
  2. Find [math]\displaystyle{ \,\underline{\beta} }[/math] by solving [math]\displaystyle{ \,\underline{\beta} = \sum{\alpha_i y_i x_i} }[/math]
  3. Find [math]\displaystyle{ \,\beta_0 }[/math] by choosing a point where [math]\displaystyle{ \,0 \lt \alpha_i \lt \gamma }[/math] and then solving [math]\displaystyle{ \,y_i(\underline{\beta}^Tx_i + \beta_0) - 1 = 0 }[/math]
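
A minimal quadprog sketch for the non-separable case; the only change from the separable case is the upper bound γ on α (gamma and the other variable names are ours, and x is d-by-n with one point per column, y is n-by-1):

% Soft-margin SVM dual via quadprog (sketch).
gamma = 1;                                   % penalty on margin violations
n     = size(x, 2);
S     = (y*y') .* (x'*x);                    % S(i,j) = y_i y_j x_i' x_j
f     = -ones(n, 1);                         % quadprog minimizes, so negate the linear term
Aeq   = y';  beq = 0;                        % sum_i alpha_i y_i = 0
lb    = zeros(n, 1);
ub    = gamma * ones(n, 1);                  % 0 <= alpha_i <= gamma
alpha = quadprog(S, f, [], [], Aeq, beq, lb, ub);

beta  = x * (alpha .* y);
i     = find(alpha > 1e-6 & alpha < gamma - 1e-6, 1);   % a point with 0 < alpha_i < gamma
beta0 = y(i) - beta' * x(:,i);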

Potential drawbacks

Potential drawbacks of the SVM include the following:

1. Uncalibrated class membership probabilities.[35]

2. The SVM is only directly applicable to two-class tasks. Therefore, algorithms that reduce the multi-class task to several binary problems have to be applied; see the Multi-class SVM section.

3. The selection of the kernel function parameters (for Gaussian kernels the width parameter [math]\displaystyle{ \,\sigma }[/math], and the value of [math]\displaystyle{ \,\epsilon }[/math] in the [math]\displaystyle{ \,\epsilon }[/math]-insensitive loss function) has not been entirely solved yet.


Some resources:

1. An introduction to SVM [36]

2. SVM in computational biology [37]

Finishing up SVM - November 25, 2009

Does SVM find a global minimum?

When we discussed KKT conditions, we listed the necessary conditions for [math]\displaystyle{ \hat{x} }[/math] to be a local minimum. However, it would be ideal if we could show that SVM finds a global minimum (unlike, say, neural networks that find a local minimum).

Recall that our conditions, for the non-separable case, are [math]\displaystyle{ \,0 \leq \underline{\alpha} \leq \gamma }[/math] and [math]\displaystyle{ \,\underline{\alpha}^T\underline{y} = 0 }[/math]. These are both convex.

Our Lagrangian (dual) objective is [math]\displaystyle{ L(\alpha) = \underline{\alpha}^T\underline{1} - \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} }[/math], which we maximize; equivalently, we minimize [math]\displaystyle{ \frac{1}{2}\underline{\alpha}^TS\underline{\alpha} - \underline{\alpha}^T\underline{1} }[/math]. Whether this quadratic is convex depends on the matrix [math]\displaystyle{ \,S }[/math]: if [math]\displaystyle{ \,S }[/math] is a positive semi-definite matrix, then the quadratic being minimized is convex.

Recall that [math]\displaystyle{ \,S }[/math] is the product of [math]\displaystyle{ \,L^TL }[/math], where [math]\displaystyle{ L_{d\times n} = \begin{bmatrix} y_1x_{11}& & y_nx_{1n} \\ \vdots&\cdots& \vdots\\ y_1x_{d1}& & y_nx_{dn} \\ \end{bmatrix} }[/math]. Similar to the notion that squaring any number will give us a positive number in the end, a matrix that is the product of a matrix transposed times itself will result in a positive semi-definite matrix.

So, we know that [math]\displaystyle{ \,S }[/math] is positive semi-definite. The quadratic program is therefore convex, and the SVM algorithm finds a global optimum.

Naive Bayes, Decision Trees, K Nearest Neighbours, Boosting, and Bagging - November 25, 2009

Now that we've covered a number of more advanced classification algorithms, we can look at some of the simpler classification algorithms that are usually discussed at the beginning of a discussion on classification.

Naive Bayes Classifiers

Recall that one of the major drawbacks of the Bayes classifier was the difficulty of estimating a joint density in a multidimensional space. Naive Bayes classifiers are one possible solution to this problem. They are especially popular for problems with high-dimensional feature spaces.

A naive Bayes classifier applies a strong independence assumption to the class density [math]\displaystyle{ \,f_{k}(x) }[/math]. It assumes that inputs within each class are conditionally independent. In other words, it assumes that the value of one feature in a class is unrelated to that of any other feature.

[math]\displaystyle{ \ f_{k}(x) = \prod_{j=1}^d f_{jk}(x_{j}) }[/math]

Each of the d marginal densities can be estimated separately using one-dimensional density estimates. If one of the components [math]\displaystyle{ \,x_{j} }[/math] is discrete then its density can be estimated using a histogram. We can thus mix discrete and continuous variables in a naive Bayes classifier.

Naive Bayes classifiers often perform extremely well in practice despite these 'naive' and seemingly optimistic assumptions. This is because, while the individual class density estimates may be biased, the bias may not hurt the posterior probabilities much, especially near the decision boundary.

It is also possible to train naive Bayes classifiers using maximum likelihood estimation.
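
As an illustration, here is a minimal MATLAB sketch of a Gaussian naive Bayes classifier, estimating one univariate normal per feature per class by maximum likelihood; the toy data and variable names are ours:

% Gaussian naive Bayes (sketch). Rows of Xtr are observations, ytr in {1,...,K}.
Xtr = [mvnrnd([0 0],eye(2),100); mvnrnd([2 2],eye(2),100)];   % toy training data
ytr = [ones(100,1); 2*ones(100,1)];                           % class labels 1 and 2
xt  = [1 1];                                                  % a test point
K = max(ytr);  [n,d] = size(Xtr);
mu = zeros(K,d);  s2 = zeros(K,d);  prior = zeros(K,1);
for k = 1:K
    Xk = Xtr(ytr == k,:);
    mu(k,:)  = mean(Xk,1);
    s2(k,:)  = var(Xk,0,1);          % per-feature variances (conditional independence)
    prior(k) = size(Xk,1) / n;
end
logpost = zeros(K,1);                % log-posterior (up to a constant) for each class
for k = 1:K
    logpost(k) = log(prior(k)) - 0.5*sum(log(2*pi*s2(k,:))) ...
                 - 0.5*sum((xt - mu(k,:)).^2 ./ s2(k,:));
end
[dummy, yhat] = max(logpost);        % predicted class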

Decision Trees

Decision trees[38] are highly intuitive learning methods that can be thought of as partitioning the feature space into a number of rectangles. Trees can be used for classification, regression, or both. Trees map features of a decision problem onto a conclusion, or label.

We fit a tree model by minimizing some measure of impurity. For a single covariate [math]\displaystyle{ \,X_{1} }[/math] we choose a point t on the real line that splits it into two sets [math]\displaystyle{ R_1 = (-\infty,t] }[/math] and [math]\displaystyle{ R_2 = (t,\infty) }[/math] in a way that minimizes impurity.

We denote by [math]\displaystyle{ \hat p_{s}(j) }[/math] the proportion of observations in [math]\displaystyle{ \ R_{s} }[/math] for which [math]\displaystyle{ \ Y_{i} = j }[/math].


[math]\displaystyle{ \hat p_{s}(j) = \frac{\sum_{i = 1}^{n} I(Y_{i} = j,X_{i} \in R_{s})}{\sum_{i = 1}^{n} I(X_{i} \in R_{s})} }[/math]

Extension: Decision Tree Analysis Decision Trees from Mind Tools

useful link:

Algorithm, Overfitting, Examples:[39],[40],[41]

Common Node Impurity Measures

Some common node impurity measures are:

  • Misclassification error:


[math]\displaystyle{ 1 - \max_{j} \hat p_{s}(j) }[/math]


  • Gini Index:


[math]\displaystyle{ \sum_{j \neq i} \hat p_{s}(j)\hat p_{s}(i) }[/math]


  • Cross-entropy:


[math]\displaystyle{ - \sum_{j = 1}^{K} \hat p_{s}(j) log(\hat p_{s}(j)) }[/math]
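
As an illustration, here is a minimal MATLAB sketch that scans candidate split points for a single covariate and picks the one minimizing the weighted Gini index defined above; the toy data and variable names are ours:

% Best split on one covariate by weighted Gini impurity (sketch).
x1 = randn(50,1);                    % a single covariate
y  = 1 + (x1 > 0.3);                 % class labels in {1,2} (toy example)
n = length(x1);  vals = sort(x1);
gini = @(idx) 1 - sum((histc(y(idx), 1:max(y)) / sum(idx)).^2);
best_imp = inf;  best_t = vals(1);
for c = 1:n-1
    t = (vals(c) + vals(c+1)) / 2;   % midpoint between consecutive sorted values
    left = (x1 <= t);  right = ~left;
    if ~any(right), continue; end
    imp = sum(left)/n * gini(left) + sum(right)/n * gini(right);   % weighted impurity
    if imp < best_imp, best_imp = imp; best_t = t; end
end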

K-Nearest Neighbours Classification

[math]\displaystyle{ K }[/math]-nearest neighbours is a very simple algorithm that classifies a point based on a majority vote of the [math]\displaystyle{ \ k }[/math] nearest points in the feature space, assigning the object to the class most common among its [math]\displaystyle{ \ k }[/math] nearest neighbours. [math]\displaystyle{ \ k }[/math] is a positive integer, typically small, chosen by cross-validation. If [math]\displaystyle{ \ k=1 }[/math], then the object is simply assigned to the class of its nearest neighbour.

1. Ties are broken at random.

2. If we assume the features are real, we can use the Euclidean distance in feature space.

3. If the features are measured in different units, we can standardize them to have mean zero and variance 1.
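
A minimal k-NN sketch in MATLAB using Euclidean distance and a majority vote; the toy data and variable names are ours:

% k-nearest-neighbour classification (sketch).
Xtr = [randn(50,2); randn(50,2)+2];          % toy training points (rows are observations)
ytr = [ones(50,1); 2*ones(50,1)];            % training labels
xt  = [1 1];                                 % a test point
k   = 5;
d2  = sum((Xtr - repmat(xt, size(Xtr,1), 1)).^2, 2);   % squared distances to xt
[dummy, order] = sort(d2);                   % nearest first
yhat = mode(ytr(order(1:k)));                % majority vote (mode breaks ties by smallest label)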

Property[42]

The k-nearest neighbour algorithm has some strong theoretical guarantees. As the number of data points goes to infinity, the algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Moreover, k-nearest neighbours is guaranteed to approach the Bayes error rate for some sequence of values of k (where k increases as a function of the number of data points).

Boosting

Boosting algorithms are a class of machine learning meta-algorithms that can improve weak classifiers. If we have a weak classifier which does slightly better than random classification, then by assigning larger weights to the points which are misclassified and minimizing the new cost function, we can usually obtain a new classifier with a smaller error. This procedure can be repeated a finite number of times, and a weighted aggregation of the generated classifiers is then used as the boosted classifier. The better a generated classifier performs, the larger its weight in the final classifier.

Paper about Boosting: Boosting is a general method for improving the accuracy of any given learning algorithm. This paper introduces the boosting algorithm AdaBoost and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting, as well as boosting's relationship to support-vector machines. Finally, the paper gives some examples of recent applications of boosting.

AdaBoost Algorithm

Let's first look at the original boosting algorithm:

  1. Set the weights of all points equal: [math]\displaystyle{ w_i\leftarrow \frac{1}{n} }[/math], where we have [math]\displaystyle{ \,n }[/math] points.
  2. For [math]\displaystyle{ j=1,\dots, J }[/math]
    1. Find [math]\displaystyle{ h_j:X\rightarrow \{-1,+1\} }[/math] that minimizes the weighted error [math]\displaystyle{ \,L_j }[/math]
      [math]\displaystyle{ h_j=\mbox{argmin}_{h_j} L_j }[/math] where [math]\displaystyle{ L_j=\frac{\sum_{i=1}^n w_i I[y_i\neq h_j(x_i)]}{\sum_{i=1}^n w_i} }[/math]
    2. Let [math]\displaystyle{ \alpha_j\leftarrow\log(\frac{1-L_j}{L_j}) }[/math]
    3. Update the weights: [math]\displaystyle{ w_i\leftarrow w_i e^{\alpha_j I[y_i\neq h_j(x_i)]} }[/math]
  3. The final classifier is [math]\displaystyle{ h(x)=\mbox{sign}\left(\sum_{j=1}^J \alpha_j h_j(x)\right) }[/math]

When applying boosting to different classifiers, the first step in 2 may differ, since we can define the misclassification error most appropriate to the problem at hand. However, the main idea, which is to give higher weight to misclassified examples, does not change across classifiers.

Boosting works very well in practice, and there is a great deal of research and published work on why it works so well. One possible explanation is that it actually maximizes the margin of the classifiers.

We can see that in AdaBoost, if training points are classified accurately, their weights for the next classifier are kept unchanged, while if points are classified inaccurately, their weights are raised. As a result, the easier examples are classified correctly by the first few classifiers, and the harder examples are learned later with increasing emphasis. Finally, all the classifiers are combined through a majority vote, which is also weighted by their accuracy, taking into consideration both the easy and the hard points. Thus AdaBoost focuses on the more informative or difficult points.
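
The following is a rough R sketch of the algorithm above (a hypothetical illustration, not from the course notes), using depth-one rpart trees, i.e. decision stumps, as the weak learners on simulated two-class data with labels in {-1, +1}.

 library(rpart)

 # Simulated two-class data (hypothetical example)
 set.seed(1)
 n <- 200
 dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
 y   <- ifelse(dat$x1 + dat$x2 + rnorm(n, sd = 0.3) > 0, 1, -1)
 dat$label <- factor(y)

 J      <- 25                 # number of boosting rounds
 w      <- rep(1 / n, n)      # step 1: equal initial weights
 alpha  <- numeric(J)
 stumps <- vector("list", J)

 for (j in 1:J) {
   # step 2.1: fit a weighted weak learner (a depth-one tree, i.e. a decision stump)
   stumps[[j]] <- rpart(label ~ x1 + x2, data = dat, weights = w,
                        control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2))
   pred <- ifelse(predict(stumps[[j]], dat, type = "class") == "1", 1, -1)

   # weighted error L_j and classifier weight alpha_j (step 2.2)
   Lj       <- sum(w * (pred != y)) / sum(w)
   alpha[j] <- log((1 - Lj) / Lj)

   # step 2.3: raise the weights of the misclassified points only
   w <- w * exp(alpha[j] * (pred != y))
 }

 # step 3: the final classifier is the sign of the weighted vote
 votes <- sapply(1:J, function(j)
   alpha[j] * ifelse(predict(stumps[[j]], dat, type = "class") == "1", 1, -1))
 h <- sign(rowSums(votes))
 mean(h != y)   # training error of the boosted classifier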

AnyBoost

Many boosting algorithms belong to a class called AnyBoost, which consists of gradient descent algorithms for choosing linear combinations of elements of an inner product space in order to minimize some cost functional.

We are primarily interested in voted combinations of classifiers [math]\displaystyle{ H(x) = sgn(\sum_{j=1}^J \alpha_j h_j(x)) }[/math]

We want to find H such that the cost functional [math]\displaystyle{ C(F) = \frac{1}{m}\sum_{i=1}^m c(y_i F(x_i)) }[/math] is minimized for a suitable cost function [math]\displaystyle{ c }[/math]

[math]\displaystyle{ h_j:X\rightarrow \{-1,+1\} }[/math] are weak base classifiers from some class [math]\displaystyle{ \mathcal{H} }[/math], and [math]\displaystyle{ \alpha_j }[/math] are the classifier weights. The margin of an example [math]\displaystyle{ (x_i,y_i) }[/math] is defined by [math]\displaystyle{ y_i H(x_i) }[/math].

The base hypotheses h and their linear combinations H can be considered to be elements of an inner product function space [math]\displaystyle{ (S,\langle,\rangle) }[/math].

We define the inner product as [math]\displaystyle{ \langle F,G \rangle = \frac{1}{m}\sum_{i=1}^m F(x_i) G(x_i) }[/math], though the AnyBoost algorithm is valid for any cost function and inner product. We have a function [math]\displaystyle{ H }[/math] that is a linear combination of base classifiers, and we wish to add a base classifier [math]\displaystyle{ h }[/math] to [math]\displaystyle{ H }[/math] so that the cost [math]\displaystyle{ \ C(H + \epsilon h) }[/math] decreases for arbitrarily small [math]\displaystyle{ \epsilon }[/math]. The desired direction is found by maximizing [math]\displaystyle{ -\langle\nabla C(H),h\rangle }[/math].


AnyBoost algorithm:

  1. [math]\displaystyle{ \ H_0(x) = 0 }[/math]
  2. For [math]\displaystyle{ j=0,\dots, J }[/math]
    1. Find [math]\displaystyle{ h_{j+1}:X\rightarrow \{-1,+1\} }[/math] that maximizes the inner product [math]\displaystyle{ -\langle\nabla C(H_j),h_{j+1}\rangle }[/math]
    2. If [math]\displaystyle{ -\langle\nabla C(H_j),h_{j+1}\rangle \leq 0 }[/math] then
      1. Return [math]\displaystyle{ \ H_j }[/math]
    3. Choose step size [math]\displaystyle{ \ \alpha_{j+1} }[/math]
    4. [math]\displaystyle{ \ H_{j+1} = H_j + \alpha_{j+1} h_{j+1} }[/math]
  3. The final classifier is [math]\displaystyle{ \ H_{J+1} }[/math]


Other voting methods, including AdaBoost, can be viewed as special cases of this algorithm.
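
To make the last remark concrete, consider the exponential cost [math]\displaystyle{ \ c(z) = e^{-z} }[/math]. With the inner product defined above, and up to a positive constant that depends on the exact definition of the functional gradient,

[math]\displaystyle{ -\langle\nabla C(H),h\rangle \;\propto\; \sum_{i=1}^m y_i\, h(x_i)\, e^{-y_i H(x_i)} }[/math].

Since [math]\displaystyle{ \ y_i h(x_i) = 1 - 2I[h(x_i)\neq y_i] }[/math], the base classifier that maximizes this quantity is exactly the one that minimizes the weighted misclassification error with weights [math]\displaystyle{ \ w_i \propto e^{-y_i H(x_i)} }[/math], which is the weak learner chosen by AdaBoost. This sketch is the sense in which AdaBoost is a special case of AnyBoost.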

Bagging

Bagging, or bootstrap aggregating, is another meta-technique used to reduce the variance of classifiers with high variability. It exploits the fact that a bootstrap mean is approximately equal to the posterior average. It is most effective for highly nonlinear classifiers such as decision trees; because of the highly unstable nature of these classifiers, they stand to benefit the most from bagging.

The idea is to train classifiers [math]\displaystyle{ \ h_{1}(x) }[/math] to [math]\displaystyle{ \ h_{B}(x) }[/math] using B bootstrap samples from the data set. The final classification is obtained using an average or 'plurality vote' of the B classifiers as follows:

[math]\displaystyle{ \, h(x)= \left\{\begin{matrix} 1 & \frac{1}{B} \sum_{b=1}^{B} h_{b}(x) \geq \frac{1}{2} \\ 0 & \mathrm{otherwise} \end{matrix}\right. }[/math]

Many classifiers, such as trees, already have underlying functions that estimate the class probabilities at [math]\displaystyle{ \,x }[/math]. An alternative strategy is to average these class probabilities instead of the final classifiers. This approach can produce bagged estimates with lower variance and usually better performance.

Reference: Breiman, L. (1996). Bagging Predictors. Machine Learning, 24, 123–140.

Example
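
As a minimal sketch (a hypothetical illustration, not from the original notes), the following R code trains [math]\displaystyle{ \,B }[/math] classification trees on bootstrap samples and combines them with a plurality vote; the two harder classes of the iris data are used only as an example.

 library(rpart)

 # Hypothetical illustration: bagging B classification trees on bootstrap samples
 set.seed(1)
 dat <- iris[iris$Species != "setosa", ]            # a two-class subset
 dat$Species <- droplevels(dat$Species)
 B <- 50
 trees <- lapply(1:B, function(b) {
   boot <- dat[sample(nrow(dat), replace = TRUE), ] # bootstrap sample of the data
   rpart(Species ~ ., data = boot, method = "class")
 })

 # Plurality vote: average the hard class predictions over the B trees
 votes  <- sapply(trees, function(tr)
   predict(tr, dat, type = "class") == levels(dat$Species)[2])
 bagged <- ifelse(rowMeans(votes) >= 0.5, levels(dat$Species)[2], levels(dat$Species)[1])
 mean(bagged != dat$Species)   # training error of the bagged classifier

Averaging the estimated class probabilities (for an rpart tree, predict with type = "prob") instead of the hard votes gives the lower-variance alternative mentioned above.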

Random Forests

A random forest is a classifier consisting of a collection of tree-structured classifiers [math]\displaystyle{ \{h(x;\theta_k),\ k=1,2,\dots\} }[/math], where the [math]\displaystyle{ \{\theta_k\} }[/math] are independently and identically distributed random vectors. The nature and dimensionality of [math]\displaystyle{ \theta }[/math] depends on its use in the tree construction; bagging is one such example. Current random forests use randomly selected inputs or combinations of inputs at each node to grow the tree.

How can we compare the two methods without a test set? Randomly partition the data into 10% and 90% sets. Use the 90% set as training data to grow a tree model using cross-validation (the 1-SE rule) and also to grow a random forest. Then predict the 10% test data. Repeat the procedure 100 times and average the results. The R code is as follows.


 # Assumed setup: bc is a data frame of 683 observations whose first column, Y,
 # is the class label; the rpart and randomForest packages must be installed.
 library(rpart)
 library(randomForest)

 misv.tree   <- rep(0, 100)   # test misclassifications of the pruned tree
 sizev.tree  <- rep(0, 100)   # size of the pruned tree
 misv.forest <- rep(0, 100)   # test misclassifications of the random forest

 for (j in 1:100) {
   # hold out roughly 10% of the data as a test set
   idx   <- sample(1:683, 70, replace = FALSE)
   train <- data.frame(bc[-idx, ])
   test  <- data.frame(bc[idx, ])

   # grow a large tree with 10-fold cross-validation
   tr0 <- rpart(factor(Y) ~ ., data = train,
                control = rpart.control(minsplit = 10, minbucket = 5,
                                        cp = 0.0, xval = 10))
   x <- printcp(tr0)            # the cp table of cross-validation results

   # find the row of the cp table with the smallest cross-validated error
   bs   <- x[1, 2] + 1          # tree size (number of splits + 1)
   min  <- x[1, 4]              # cross-validated error
   cpv  <- x[1, 1]              # complexity parameter
   stde <- x[1, 5]              # standard error of the cross-validated error
   for (i in 1:nrow(x)) {
     if (x[i, 4] < min) {
       min  <- x[i, 4]
       bs   <- x[i, 2] + 1
       cpv  <- x[i, 1]
       stde <- x[i, 5]
     }
   }

   # 1-SE rule: take the smallest tree whose error is within one standard
   # error of the minimum
   limit <- min + stde
   index <- 0
   for (i in 1:nrow(x)) {
     if (index < 1) {
       if (x[i, 4] > limit) {
         bs  <- x[i + 1, 2] + 1
         cpv <- x[i + 1, 1]
       } else index <- 2
     }
   }

   # prune the tree at the chosen cp and evaluate it on the test set
   tr   <- prune(tr0, cp = cpv)
   fity <- predict(tr, newdata = test, type = "class")
   tab  <- table(test[, 1], fity)
   misv.tree[j]  <- tab[1, 2] + tab[2, 1]
   sizev.tree[j] <- bs

   # grow a random forest on the same training data and evaluate it
   forest <- randomForest(factor(Y) ~ ., data = train, mtry = 4, ntree = 100)
   fity   <- predict(forest, newdata = test, type = "class")
   tab    <- table(test[, 1], fity)
   misv.forest[j] <- tab[1, 2] + tab[2, 1]
 }