stat841f11
Proposal for Final Project
Presentation Sign Up
Editor Sign Up
STAT 441/841 / CM 463/763 - Tuesday, 2011/09/20
Wiki Course Notes
Students will need to contribute to the wiki for 20% of their grade. Access is via wikicoursenote.com: go to editor sign-up, use your UW userid for your account name, and use your UW email.
primary (10%) Post a draft of lecture notes within 48 hours. You will need to do this 1 or 2 times, depending on class size.
secondary (10%) Make improvements to the notes for at least 60% of the lectures. More than half of your contributions should be technical rather than editorial. There will be a spreadsheet where students can indicate what they've done and when. The instructor will conduct random spot checks to ensure that students have contributed what they claim.
Classification (Lecture: Sep. 20, 2011)
Introduction
Machine learning (ML) is an artificial intelligence approach that builds and trains a model to recognize the patterns or the underlying mapping of a system, based on a set of training examples consisting of input and output patterns. Unlike classical statistics, where inference is made from small datasets, machine learning involves drawing inference from an overwhelming amount of data that could not reasonably be parsed by hand.
In machine learning, pattern recognition is the assignment of some sort of output value (or label) to a given input value (or instance), according to some specific algorithm. The approach of using examples to produce the output labels is known as learning methodology. When the underlying function from inputs to outputs exists, it is referred to as the target function. The estimate of the target function which is learnt or output by the learning algorithm is known as the solution of the learning problem. In the case of classification, this function is referred to as the decision function.
In the broadest sense, any method that incorporates information from training samples in the design of a classifier employs learning. Learning tasks can be classified along different dimensions. One important dimension is the distinction between supervised and unsupervised learning. In supervised learning, a category label for each pattern in the training set is provided, and the trained system generalizes to new data samples. In unsupervised learning, on the other hand, the training data has not been labeled; the system forms clusters or natural groupings of input patterns based on some measure of similarity, and these can then be used to determine the correct output value for new data instances.
The first category is known as pattern classification and the second one as clustering. Pattern classification is the main focus in this course.
Classification problem formulation : Suppose that we are given n observations. Each observation consists of a pair: a vector
The classification task is now looking for a function
Definitions
The true error rate for classifier
The empirical error rate is the error of our classification function
So in this case,
For example, suppose we have 100 new data points with known (true) labels
To calculate the empirical error, we count how many times our function
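In Matlab, this count might look like the following minimal sketch (y_true and y_hat are our own illustrative names for the known labels and the classifier's predictions):

% y_true: the 100 known (true) labels; y_hat: the labels our classifier produced
y_true = [zeros(50,1); ones(50,1)];   % illustrative data
y_hat  = round(rand(100,1));          % stand-in for a classifier's output
% empirical error rate = fraction of misclassified points
emp_err = sum(y_hat ~= y_true) / length(y_true)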
Bayes Classifier
The principle of the Bayes Classifier is to calculate the posterior probability of a given object from its prior probability via Bayes' Rule, and then assign the object to the class with the largest posterior probability<ref> http://www.wikicoursenote.com/wiki/Stat841#Bayes_Classifier </ref>.
First recall Bayes' Rule, in the format
[math]\displaystyle{ P(Y|X) = \frac{P(X|Y)P(Y)}{P(X)} }[/math]
P(Y|X) : posterior, probability of class Y given the observation X
P(X|Y) : likelihood, probability of observing X given that the class is Y
P(Y) : prior, probability of class Y before any data is observed
P(X) : marginal, probability of obtaining the observation X
We will start with the simplest case:
Bayes' rule can be approached by computing either one of the following:
1) The posterior:
2) The likelihood:
The former reflects a Bayesian approach. The Bayesian approach uses previous beliefs together with the observed data (e.g., the random variable [math]\displaystyle{ Y }[/math] is itself treated as random, with a prior distribution) to determine the posterior probability.
The latter reflects a Frequentist approach. The Frequentist approach assumes that the probability distribution (including the mean, variance, etc.) is fixed for the parameter of interest (e.g., the variable [math]\displaystyle{ Y }[/math] is not treated as random; its value is fixed but unknown).
There was some class discussion on which approach should be used. Both the ease of computation and the validity of both approaches were discussed. A main point that was brought up in class is that Frequentists consider X to be a random variable, but they do not consider Y to be a random variable because it has to take on one of the values from a fixed set (in the above case it would be either 0 or 1 and there is only one correct label for a given value X=x). Thus, from a Frequentist's perspective it does not make sense to talk about the probability of Y. This is actually a grey area and sometimes Bayesians and Frequentists use each others' approaches. So using Bayes' rule doesn't necessarily mean you're a Bayesian. Overall, the question remains unresolved.
The Bayes Classifier uses the posterior probability
[math]\displaystyle{ P(Y=1|X=x)=\frac{P(X=x|Y=1)P(Y=1)}{P(X=x)} }[/math]
P(Y=1) : the prior, based on belief/evidence beforehand
denominator : marginalized by summation, i.e. [math]\displaystyle{ P(X=x)=\sum_{y}P(X=x|Y=y)P(Y=y) }[/math]
The set
[math]\displaystyle{ D(h)=\{x : P(Y=1|X=x)=P(Y=0|X=x)\} }[/math]
defines a decision boundary.
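As a minimal Matlab sketch, assuming two univariate Gaussian class-conditional densities with known parameters (all values below are our own illustrations):

% illustrative class-conditional densities and priors (assumed known)
mu0 = 0; mu1 = 2; sigma = 1;
prior1 = 0.3; prior0 = 1 - prior1;
gauss = @(x,mu) exp(-(x-mu).^2/(2*sigma^2)) / (sqrt(2*pi)*sigma);
x = 1.2;                        % new point to classify
% posterior is proportional to likelihood times prior
% (the marginal P(X=x) cancels when comparing the two classes)
p1 = gauss(x,mu1) * prior1;
p0 = gauss(x,mu0) * prior0;
h = double(p1 > p0)             % Bayes classifier output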
Theorem: The Bayes Classifier is optimal, i.e., if [math]\displaystyle{ h }[/math] is the Bayes classifier and [math]\displaystyle{ \bar{h} }[/math] is any other classifier, then the true error rate of [math]\displaystyle{ h }[/math] is no larger than that of [math]\displaystyle{ \bar{h} }[/math].
Proof: Consider any classifier
To minimize this last expression, it suffices to minimize the inner expectation. Expanding this expectation:
which, in the two-class case, simplifies to
where
Why then do we need other classification methods? Because the densities of X are often/typically unknown, i.e., [math]\displaystyle{ P(X=x|Y=y) }[/math] and [math]\displaystyle{ P(Y=y) }[/math] are unknown.
Therefore, we rely on some data to estimate quantities.
Three Main Approaches
1. Empirical Risk Minimization:
Choose a set of classifiers H (e.g., line, neural network) and find
2. Regression:
Find an estimate (
The
In general, regression refers to finding a continuous, real-valued y. The problem here is more difficult because of the restricted range (y takes values in a discrete set of labels).
3. Density Estimation:
Estimate
Define
It is possible that there may not be enough data to estimate the densities from. But the main problem lies with high-dimensional spaces, as the estimates may be poor (high error rate) and sometimes even infeasible to compute. The term curse of dimensionality was coined by Bellman <ref>R. E. Bellman, Dynamic Programming. Princeton University Press, 1957</ref> to describe this problem.
As the dimension of the space goes up, the learning requirements go up exponentially.
To learn more about methods for handling high-dimensional data, see <ref> https://docs.google.com/viewer?url=http%3A%2F%2Fwww.bios.unc.edu%2F~dzeng%2FBIOS740%2Flecture_notes.pdf</ref>.
The simplest way to build a classifier is via the third approach, density estimation.
Multi-Class Classification
Generalize to the case where Y takes on k > 2 values.
Theorem:
where
Examples of Classification
- Face detection in images.
- Medical diagnosis.
- Detecting credit card fraud (fraudulent or legitimate).
- Speech recognition.
- Handwriting recognition.
LDA and QDA
Discriminant function analysis finds features that best allow discrimination between two or more classes. The approach is similar to analysis of variance (ANOVA) in that discriminant function analysis looks at the mean values to determine if two or more classes are very different and should be separated. Once the discriminant functions (that separate two or more classes) have been determined, new data points can be classified (i.e. placed in one of the classes) based on the discriminant functions <ref> StatSoft, Inc. (2011). Electronic Statistics Textbook. [Online]. Available: http://www.statsoft.com/textbook/discriminant-function-analysis/. </ref>. Linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) are methods of discriminant analysis that are best applied to linearly and quadratically separable classes, respectively. Fisher discriminant analysis (FDA) is another method of discriminant analysis that is different from linear discriminant analysis, but oftentimes both terms are used interchangeably.
LDA
The simplest method is to use approach 3 (above) and assume a parametric model for the densities. Assume the class-conditional densities are Gaussian.
1) Assume Gaussian distributions
must compare
To find the decision boundary, set
2) Assume
3) Cancel
4) Take log of both sides.
5) Subtract one side from both sides, leaving zero on one side.
Cancelling out the terms quadratic in
We can see that the first pair of terms is constant, and the second pair is linear in x.
Therefore, we end up with something of the form
LDA and QDA Continued (Lecture: Sep. 22, 2011)
If we relax assumption 2 (i.e. we no longer assume that the classes share a common covariance matrix), then the quadratic terms no longer cancel, and we obtain a quadratic decision boundary; this is QDA.
Generalizing LDA and QDA
Theorem:
Suppose that
Where
When the Gaussian variances are equal
(To compute this, we need to calculate the value of
In practice
We estimate the prior to be the chance that a random item from the collection belongs to class k, e.g.
The mean to be the average item in set k, e.g.
and calculate the covariance of each class e.g.
If we wish to use LDA we must calculate a common covariance, so we average all the covariances e.g.
Where:
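As a concrete sketch of these estimates in Matlab for two classes (X0 and X1 are our own names for matrices holding the two classes' training points as rows):

% estimate priors, means and covariances from the training data
n0 = size(X0,1); n1 = size(X1,1); n = n0 + n1;
pi0 = n0/n;  pi1 = n1/n;            % estimated priors
mu0 = mean(X0)';  mu1 = mean(X1)';  % estimated class means
S0 = cov(X0);  S1 = cov(X1);        % estimated class covariances (for QDA)
% pooled (common) covariance for LDA: average the class covariances
Sp = ((n0-1)*S0 + (n1-1)*S1) / (n - 2);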
Computation
For QDA we need to calculate:
Let's first consider the case when
Case 1
When
We have:
but
and
Thus, in this case, a new point can be classified by its distance from the centre of each class, adjusted by the prior.
Further, for a two-class problem with equal priors, the decision boundary is the perpendicular bisector of the segment joining the two class means.
Case 2
When
Using the Singular Value Decomposition (SVD) of
For
If we think of
and,
For applying the above method with classes that have different covariance matrices (for example the covariance matrices
The difference between Case 1 and Case 2 (i.e. the difference between using the Euclidean and Mahalanobis distance) can be seen in the illustration below.
As can be seen from the illustration above, the Mahalanobis distance takes into account the distribution of the data points, whereas the Euclidean distance would treat the data as though it has a spherical distribution. Thus, the Mahalanobis distance applies for the more general classification in Case 2, whereas the Euclidean distance applies to the special case in Case 1 where the data distribution is assumed to be spherical.
Generally, we can conclude that QDA provides a better classifier for the data than LDA, because LDA assumes that the covariance matrix is identical for each class while QDA does not. QDA still uses a Gaussian distribution as the class-conditional distribution; in practice, real data need not be Gaussian, so other class-conditional distributions may be needed.
The Number of Parameters in LDA and QDA
Both LDA and QDA require us to estimate some parameters. Here is a comparison between the number of parameters needed to be estimated for LDA and QDA:
LDA: Since we just need to compare the differences between one given class and remaining
QDA: For each of the differences,

Trick: Using LDA to do QDA
There is a trick that allows us to use the linear discriminant analysis (LDA) algorithm to generate as its output a quadratic function that can be used to classify data. This trick is similar to, but more primitive than, the Kernel trick that will be discussed later in the course.
In this approach the feature vector is augmented with quadratic terms (i.e., new dimensions are introduced, onto which the original data are projected). We then apply LDA to the new higher-dimensional data.
The motivation behind this approach is to take advantage of the fact that fewer parameters have to be calculated in LDA, as explained in previous sections, and therefore to have a more robust system in situations where we have fewer data points.
If we look back at the equations for LDA and QDA, we see that in LDA we must estimate
Theoretically
Suppose we have a quadratic function to estimate:
Using this trick, we introduce two new vectors,
and
We can then apply LDA to estimate the new function:
Note that we can do this for any
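A minimal sketch of the augmentation for 1-D inputs (the data and names below are our own; classify is the Statistics Toolbox LDA/QDA routine):

% original 1-D data x with labels y (illustrative)
x = [randn(50,1); randn(50,1)*2 + 3];
y = [zeros(50,1); ones(50,1)];
% augment the feature vector with the quadratic term
Z = [x, x.^2];
% LDA on the augmented data: the boundary is linear in (x, x^2),
% hence quadratic in the original x
labels = classify(Z, Z, y, 'linear');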
Principal Component Analysis (PCA) (Lecture: Sep. 27, 2011)
Principal Component Analysis (PCA) is a method of dimensionality reduction/feature extraction that transforms the data from a D-dimensional space into a new coordinate system of dimension d, where d <= D (the worst case would be d = D). The goal is to preserve as much of the variance in the original data as possible when switching coordinate systems. Given data on D variables, the hope is that the data points will lie mainly in a linear subspace of dimension lower than D. In practice, the data will usually not lie precisely in some lower-dimensional subspace.
The new variables that form a new coordinate system are called principal components (PCs). PCs are denoted by
The first PC,
Let
We would like to find the
Note: we require the constraint
Lagrange Multiplier
Before we proceed, we should review Lagrange multipliers.

Lagrange multipliers are used to find the maximum or minimum of a function [math]\displaystyle{ f(x,y) }[/math] subject to an equality constraint [math]\displaystyle{ g(x,y)=c }[/math]:
we define a new constant [math]\displaystyle{ \lambda }[/math], called a Lagrange multiplier, and form the Lagrangian [math]\displaystyle{ \Lambda(x,y,\lambda)=f(x,y)-\lambda\,(g(x,y)-c) }[/math].
If
In addition
where,
Example :
Suppose we want to maximize the function
We want the partial derivatives equal to zero:
Solving the system we obtain two stationary points:
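As a concrete illustration of the method (this particular choice of f and g is ours): maximize [math]\displaystyle{ f(x,y)=x+y }[/math] subject to [math]\displaystyle{ x^2+y^2=1 }[/math].

[math]\displaystyle{ \Lambda(x,y,\lambda) = x + y - \lambda\,(x^2 + y^2 - 1) }[/math]
[math]\displaystyle{ \frac{\partial \Lambda}{\partial x} = 1 - 2\lambda x = 0, \qquad \frac{\partial \Lambda}{\partial y} = 1 - 2\lambda y = 0, \qquad \frac{\partial \Lambda}{\partial \lambda} = -(x^2 + y^2 - 1) = 0 }[/math]

Solving gives [math]\displaystyle{ x = y = \pm 1/\sqrt{2} }[/math], i.e. the two stationary points [math]\displaystyle{ (1/\sqrt{2},\,1/\sqrt{2}) }[/math] (the maximum) and [math]\displaystyle{ (-1/\sqrt{2},\,-1/\sqrt{2}) }[/math] (the minimum).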
Determining w :
Use the Lagrange multiplier conversion to obtain:
Take the derivative and set it to zero:
To obtain:
Rearrange to obtain:
where
Note that the PCs decompose the total variance in the data in the following way :
Principal Component Analysis (PCA) Continued (Lecture: Sep. 29, 2011)
As can be seen from the above expressions,
such that
Alternative Derivation
Another way of looking at PCA is to consider PCA as a projection from a higher D-dimension space to a lower d-dimensional subspace that minimizes the squared reconstruction error. The squared reconstruction error is the difference between the original data set
Reconstruction Error
Minimize Reconstruction Error
Suppose
Let
Fit the model to the data and minimize the reconstruction error:
Differentiate with respect to
we can rewrite reconstruction-error as :
since
which are independent (orthogonal to each other) we can conclude that:
Find the orthogonal matrix
PCA Implementation Using Singular Value Decomposition
A unique solution can be obtained by finding the Singular Value Decomposition (SVD) of
For each rank d,
Simply put, by subtracting the mean of each of the data point features and then applying SVD, one can find the principal components:
Where
As the
Examples
Note that in the Matlab code in the examples below, the mean was not subtracted from the datapoints before performing SVD. This is what was shown in class. However, to properly perform PCA, the mean should be subtracted from the datapoints.
Example 1
Consider a matrix of data points
>> % start with a 560 by 1965 matrix X that contains the data points
>> load('noisy.mat');
>>
>> % set the colors to grayscale
>> colormap gray
>>
>> % show image in column 10 by reshaping column 10 into a 20 by 28 matrix
>> imagesc(reshape(X(:,10),20,28)')
>>
>> % perform SVD; if the X matrix is full rank, we will obtain 560 PCs
>> [U S V] = svd(X);
>>
>> % project X onto the first ten principal components
>> Y_pca = U(:,1:10)'*X;
>>
>> % reconstruct X (project back onto the original space) using only the first ten PCs
>> X_hat = U(:,1:10)*Y_pca;
>>
>> % show image in column 10 of X_hat, which is still a 560 by 1965 matrix
>> imagesc(reshape(X_hat(:,10),20,28)')
The reason why the noise is removed in the reconstructed image is because the noise does not create a major variation in a single direction in the original data. Hence, the first ten PCs taken from
Example 2
Consider a matrix of data points
The corresponding Matlab commands for performing PCA on the data points are shown below:
>> % start with a 64 by 400 matrix X that contains the data points
>> load 2_3.mat;
>>
>> % set the colors to grayscale
>> colormap gray
>>
>> % show image in column 2 by reshaping column 2 into an 8 by 8 matrix
>> imagesc(reshape(X(:,2),8,8))
>>
>> % perform SVD; if the X matrix is full rank, we will obtain 64 PCs
>> [U S V] = svd(X);
>>
>> % project data down onto the first two PCs
>> Y = U(:,1:2)'*X;
>>
>> % show Y as an image (can see the change in the first PC at column 200,
>> % when the handwritten number changes from 2 to 3)
>> imagesc(Y)
>>
>> % perform PCA using Matlab built-in function (do not use for assignment)
>> % also note that due to the Matlab convention, the transpose of X is used
>> [COEFF, Y] = princomp(X');
>>
>> % again, use the first two PCs
>> Y = Y(:,1:2);
>>
>> % use plotdigits to show the distribution of images on the first two PCs
>> images = reshape(X, 8, 8, 400);
>> plotdigits(images, Y, .1, 1);
Using the plotdigits function in Matlab clearly illustrates that the first PC captured the differences between the numbers 2 and 3, as they are projected onto different regions of the axis for the first PC. Also, the second PC captured the tilt of the handwritten numbers, as numbers tilted to the left or right were projected onto different regions of the axis for the second PC.
Example 3
(Not discussed in class) In the news recently was a story that captures some of the ideas behind PCA. Over the past two years, Scott Golder and Michael Macy, researchers from Cornell University, collected 509 million Twitter messages from 2.4 million users in 84 different countries. The data they used were words collected at various times of day and they classified the data into two different categories: positive emotion words and negative emotion words. Then, they were able to study this new data to evaluate subjects' moods at different times of day, while the subjects were in different parts of the world. They found that the subjects generally exhibited positive emotions in the mornings and late evenings, and negative emotions mid-day. They were able to "project their data onto a smaller dimensional space" using PCA. Their paper, "Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures," is available in the journal Science.<ref>http://www.pcworld.com/article/240831/twitter_analysis_reveals_global_human_moodiness.html</ref>.
Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>
Example 4
(Not discussed in class) A somewhat well known learning rule in the field of neural networks called Oja's rule can be used to train networks of neurons to compute the principal component directions of data sets. <ref>A Simplified Neuron Model as a Principal Component Analyzer. Erkki Oja. 1982. Journal of Mathematical Biology. 15: 267-273</ref> This rule is formulated as follows
where
[math]\displaystyle{ \mathbf{p} = }[/math] a random vector
do c times:
      [math]\displaystyle{ \mathbf{t} = 0 }[/math] (a vector of length m)
      for each row [math]\displaystyle{ \mathbf{x} \in \mathbf{X^T} }[/math]
            [math]\displaystyle{ \mathbf{t} = \mathbf{t} + (\mathbf{x} \cdot \mathbf{p})\mathbf{x} }[/math]
      [math]\displaystyle{ \mathbf{p} = \frac{\mathbf{t}}{|\mathbf{t}|} }[/math]
return [math]\displaystyle{ \mathbf{p} }[/math]
Comparing this with the neuron learning rule we can see that the term
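A minimal Matlab translation of the pseudocode above (our own sketch; X is the d-by-n data matrix, assumed centered):

% estimate the first principal direction p of the data X (d by n)
d = size(X,1);
p = randn(d,1); p = p/norm(p);   % start from a random vector
for c = 1:100                    % do c times
    t = zeros(d,1);
    for j = 1:size(X,2)          % for each data point x
        x = X(:,j);
        t = t + (x'*p)*x;        % t = t + (x . p) x
    end
    p = t/norm(t);               % p = t/|t|
end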
Observations
Some observations about the PCA were brought up in class:
- PCA assumes that data is on a linear subspace or close to a linear subspace. For non-linear dimensionality reduction, other techniques are used. Amongst the first proposed techniques for non-linear dimensionality reduction are Locally Linear Embedding (LLE) and Isomap. More recent techniques include Maximum Variance Unfolding (MVU) and t-Distributed Stochastic Neighbor Embedding (t-SNE). Kernel PCAs may also be used, but they depend on the type of kernel used and generally do not work well in practice. (Kernels will be covered in more detail later in the course.)
- Finding the number of PCs to use is not straightforward. It requires knowledge about the intrinsic dimensionality of the data. In practice, a heuristic approach is often adopted by looking at the eigenvalues ordered from largest to smallest. If there is a "dip" in the magnitude of the eigenvalues, the "dip" is used as a cut-off point and only the large eigenvalues before the "dip" are used. Otherwise, it is possible to add up the eigenvalues from largest to smallest until a certain percentage value is reached. This percentage represents the percentage of variance that is preserved when projecting onto the PCs corresponding to the eigenvalues that have been added together to achieve the percentage.
- It is a good idea to normalize the variance of the data before applying PCA. This will avoid PCA finding PCs in certain directions due to the scaling of the data, rather than the real variance of the data.
- PCA can be considered as an unsupervised approach, since the main direction of variation is not known beforehand, i.e. it is not completely certain which dimension the first PC will capture. The PCs found may not correspond to the desired labels for the data set. There are, however, alternate methods for performing supervised dimensionality reduction.
- (Not in class) The traditional PCA method does not work well on data sets that lie on a non-linear manifold. A revised PCA method, called c-PCA, has been introduced to improve the stability and convergence of intrinsic dimension estimation. The approach first finds a minimal cover (a cover of a set X is a collection of sets whose union contains X as a subset<ref>http://en.wikipedia.org/wiki/Cover_(topology)</ref>) of the data set. Since set covering is an NP-hard problem, the approach only finds an approximation of the minimal cover to reduce the run-time complexity. In each subset of the minimal cover, it applies PCA and filters out the noise in the data. Finally, the global intrinsic dimension can be determined from the variance results from all the subsets. The algorithm produces robust results.<ref>Mingyu Fan, Nannan Gu, Hong Qiao, Bo Zhang, Intrinsic dimension estimation of data by principal component analysis, 2010. Available: http://arxiv.org/abs/1002.2050</ref>
- (Not in class) While PCA finds the mathematically optimal method (as in minimizing the squared error), it is sensitive to outliers in the data, which produce the large errors PCA tries to avoid. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand. A recently proposed generalization of PCA based on a Weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy.<ref>http://en.wikipedia.org/wiki/Principal_component_analysis</ref>
- (Not in class) Comparison between PCA and LDA: Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two commonly used techniques for data classification and dimensionality reduction. "Linear Discriminant Analysis easily handles the case where the within-class frequencies are unequal and their performances has been examined on randomly generated test data. This method maximizes the ratio of between-class variance to the within-class variance in any particular data set thereby guaranteeing maximal separability. ... The prime difference between LDA and PCA is that PCA does more of feature classification and LDA does data classification. In PCA, the shape and location of the original data sets changes when transformed to a different space whereas LDA doesn't change the location but only tries to provide more class separability and draw a decision region between the given classes. This method also helps to better understand the distribution of the feature data." <ref> Balakrishnama, S., Ganapathiraju, A. LINEAR DISCRIMINANT ANALYSIS - A BRIEF TUTORIAL. http://www.isip.piconepress.com/publications/reports/isip_internal/1998/linear_discrim_analysis/lda_theory.pdf </ref>
Summary
The PCA algorithm can be summarized into the following steps:
- Recover basis
- Encode training data
- Reconstruct training data
.
- Encode test example
.
- Reconstruct test example
.
Dual PCA
Singular value decomposition allows us to formulate the principal components algorithm entirely in terms of dot products between data points, limiting the direct dependence on the original dimensionality d. Now assume that the dimensionality d of the d × n matrix of data X is large (i.e., d >> n). In this case, the algorithm described in the previous sections becomes impractical. We would prefer a run time that depends only on the number of training examples n, or that at least has a reduced dependence on d.
Note that in the SVD factorization
Now Replacing
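A sketch of the dual computation (our own illustration): when d >> n, we can eigendecompose the small n-by-n matrix X'X instead of the d-by-d covariance, and then recover the principal directions from its eigenvectors.

% X is d by n with d >> n; columns are the (centered) data points
[V, L] = eig(X'*X);                    % n-by-n eigenproblem
[lam, idx] = sort(diag(L), 'descend'); % order by eigenvalue
V = V(:, idx);
keep = lam > 1e-10;                    % discard (numerically) zero eigenvalues
% recover principal directions: U = X V Sigma^{-1}, with Sigma = sqrt(eigenvalues)
U = X * V(:,keep) * diag(1./sqrt(lam(keep)));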
Fisher Discriminant Analysis (FDA) (Lecture: Sep. 29, 2011 - Oct. 04, 2011)
Fisher Discriminant Analysis (FDA) is sometimes called Fisher Linear Discriminant Analysis (FLDA) or just Linear Discriminant Analysis (LDA). This causes confusion with the Linear Discriminant Analysis (LDA) technique covered earlier in the course, which has a normality assumption and is a boundary-finding technique. The FDA technique outlined here is a supervised feature extraction technique. FDA also differs from PCA, because PCA does not use the class labels:
- PCA
- Find a rank d subspace which minimize the squared reconstruction error:
where
One main drawback of the PCA technique is that the direction of greatest variation may not produce the classification we desire. For example, imagine if the data set above had a lightening filter applied to a random subset of the images. Then the greatest variation would be the brightness and not the more important variations we wish to classify. As another example, if we imagine 2 cigar-like clusters in 2 dimensions, one cigar has
We first consider the case of two classes. Denote the mean and covariance matrix of class
The new means and variances are actually scalar values now, but we will use vector and matrix notation and arguments throughout the following derivation as the multi-class case is then just a simpler extension.
Goals of FDA
As will be shown in the objective function, the goal of FDA is to maximize the separation of the classes (between-class variance) and minimize the scatter within each class (within-class variance). That is, our ideal situation is that the individual classes are as far away from each other as possible and, at the same time, the data within each class are as close to each other as possible (collapsed to a single point in the most extreme case). An interesting note is that R. A. Fisher, after whom FDA is named, used the FDA technique for purposes of taxonomy, in particular for categorizing different species of iris flowers. <ref name="RAFisher">R. A. Fisher, "The Use of Multiple measurements in Taxonomic Problems," Annals of Eugenics, 1936</ref>. It is very easy to visualize what is meant by within-class variance (i.e. differences between the iris flowers of the same species) and between-class variance (i.e. the differences between the iris flowers of different species) in that case.
First, we need to reduce the dimensionality of the covariate to one dimension (in the two-class case) by projecting the data onto a line. That is, take the d-dimensional input values x and project them to one dimension by using
Goal: choose the vector
1) Our first goal is to minimize the individual classes' covariance. This will help to collapse the data together.
We have two minimization problems
and
.
But these can be combined:
Define
2) Our second goal is to move the minimized classes as far away from each other as possible. One way to accomplish this is to maximize the distances between the means of the transformed data i.e.
Simplifying:
Recall that
This matrix, called the between class variance matrix, is a rank 1 matrix, so an inverse does not exist. Altogether, we have two optimization problems we must solve simultaneously:
- 1)
- 2)
- 1)
There are other metrics one can use to both minimize the data's variance and maximizes the distance between classes, and other goals we can try to accomplish (see metric learning, below...one day), but Fisher used this elegant method, hence his recognition in the name, and we will follow his method.
We can combine the two optimization problems into one after noting that the negative of max is min:
The
This optimization problem can be shown<ref> http://www.socher.org/uploads/Main/optimizationTutorial01.pdf </ref> to be equivalent to the following optimization problem:
(optimized function)
subject to:
(constraint)
A heuristic understanding of this equivalence is that we have two degrees of freedom: direction and scalar. The scalar value is irrelevant to our discussion. Thus, we can set one of the values to be a constant. We can use Lagrange multipliers to solve this optimization problem:
Setting the partial derivative to 0 gives us a generalized eigenvalue problem:
This is a generalized eigenvalue problem and
It is very likely that
In fact we can simplify the above expression further in the case of two classes. Recall the definition of
This second term is a scalar value, let's denote it
(this equation indicates the direction of the separation).
All we are interested in is the direction of
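A sketch of the two-class computation in Matlab (X0 and X1 are our own names for matrices holding the two classes' points as rows):

% within-class scatter: sum of the two class covariances
mu0 = mean(X0)';  mu1 = mean(X1)';
SW = cov(X0) + cov(X1);
% the optimal direction is along SW^{-1}(mu1 - mu0); its length is irrelevant
w = SW \ (mu1 - mu0);
w = w / norm(w);
% project any point x onto the discriminant direction with z = w' * x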
Extensions to Multiclass Case
If we have
so our new mean and covariances for class k are:
What are our new optimization sub-problems? As before, we wish to minimize the within class variance. This can be formulated as:
Again, denoting
Similarly, the second optimization problem is:
What is
Next, if we express
where
The first (k-1) eigenvector of
subject to:
Again, in order to solve the above optimization problem, we can use the Lagrange multiplier <ref> http://en.wikipedia.org/wiki/Lagrange_multiplier </ref>:
.
where
Then, differentiating with respect to
.
Thus:
where,
The above equation is of the form of an eigenvalue problem. Thus, for the solution the k-1 eigenvectors corresponding to the k-1 largest eigenvalues should be chosen as the projection matrix,
Summary
FDA has two optimization problems:
- 1)
- 2)
- 1)
where
Every column of
The two optimization problems are combined as follows:
By adding a constraint as shown:
subject to:
Lagrange multipliers can be used and essentially the problem becomes an eigenvalue problem:
And
Variations
Some adaptations and extensions exist for the FDA technique (Source: <ref>R. Gutierrez-Osuna, "Linear Discriminant Analysis" class notes for Intro to Pattern Analysis, Texas A&M University. Available: [1]</ref>):
1) Non-Parametric LDA (NPLDA) by Fukunaga
This method does not assume that the Gaussian distribution is unimodal and it is actually possible to extract more than k-1 features (where k is the number of classes).
2) Orthonormal LDA (OLDA) by Okada and Tomita
This method finds projections that are orthonormal in addition to maximizing the FDA objective function. This method can also extract more than k-1 features (where k is the number of classes).
3) Generalized LDA (GLDA) by Lowe
This method incorporates additional cost functions into the FDA objective function. This causes classes with a higher cost to be placed further apart in the lower dimensional representation.
Optical Character Recognition (OCR) using FDA
Optical Character Recognition (OCR) is a method to translate scanned, human-readable text into machine-encoded text. In class, we have employed FDA to recognize digits. A paper <ref>Manjunath Aradhya, V.N., Kumar, G.H., Noushath, S., Shivakumara, P., "Fisher Linear Discriminant Analysis based Technique Useful for Efficient Character Recognition", Intelligent Sensing and Information Processing, 2006.</ref> describes the use of FDA to recognize printed documents written in English and Kannada, the fifth most popular language in India. The researchers conducted two types of experiments: one on printed Kannada and English documents and another on handwritten English characters. In the first type of experiments, they conducted four experiments: i) clear and degraded characters in specific fonts; ii) characters in various size; iii) characters in various fonts; iv) characters with noise. In experiment i, FDA achieved 98.2% recognition rate with 12 projection vectors in 21,560 samples. In experiment ii, it achieved 96.9% recognition rate with 10 projection vectors in 11,200 samples. In experiment iii, it achieved 93% recognition rate with 17 projection vectors in 19,850 samples. In experiment iv, it achieved 96.3% recognition rate with 14 projection vectors in 20,000 samples. Overall, the recognition by FDA was very satisfying. In the second type of experiment, a total of 12,400 handwriting samples from 200 different writers were collected. With 175 samples for training purpose, the recognition rate by FDA is 92% with 35 projection vectors.
Facial Recognition using FDA
The Fisherfaces method of facial recognition uses PCA and FDA in a similar way to using just PCA. However, it is more advantageous than using PCA alone because it minimizes variation within each class and maximizes class separation. The PCA-only method is, therefore, more sensitive to lighting and pose variations. In studies done by Belhumeur, Hespanha, and Kriegman (1997) and Turk and Pentland (1991), this method had a 96% recognition rate. <ref>Bagherian, Elham. Rahmat, Rahmita. Facial Feature Extraction for Face Recognition: a Review. International Symposium on Information Technology, 2008. ITSim2 article number 4631649.</ref>
Linear and Logistic Regression (Lecture: Oct. 06, 2011)
Linear Regression
Both regression and classification aim to find a function h which maps data X to a response Y. In regression,
More formally: a more direct approach to classification is to estimate the regression function
Here is a simple example. If
Basically, we can use a linear function
For convenience,
where
where
Here
Using regression to solve classification problems is not mathematically correct, if we want to be true to classification. However, this method works well in practice, if the problem is not complicated. When we have only two classes (for which the target values are encoded as
Matlab Example
The following is the code and the explanation for each step.
Again, we use the data in 2_3.m.
>> load 2_3;
>> [U, sample] = princomp(X');
>> sample = sample(:,1:2);
We carry out Principal Component Analysis (PCA) to reduce the dimensionality from 64 to 2.
>> y = zeros(400,1);
>> y(201:400) = 1;
We let y represent the set of labels coded as 0 and 1.
>>x=[sample;ones(1,400)];
Construct x by adding a row of ones to the data (to account for the intercept term).
>>b=inv(x*x')*x*y;
Calculate b, which represents
>> x1 = x';
>> for i = 1:400
       if x1(i,:)*b > 0.5
           plot(x1(i,1), x1(i,2), '.')
           hold on
       elseif x1(i,:)*b < 0.5
           plot(x1(i,1), x1(i,2), 'r.')
       end
   end
Plot the fitted y values.
Practical Usefulness
Linear regression in general is not very useful for classification purposes. One of the main problems is that new data may not always have a positive ("more successful") impact on the linear regression learning algorithm due to the non-linear "binary" form of the classes. Consider the following simple example:
The boundary decision at
This datum actually skews linear regression to the point that it misclassifies some of the data points that should be labelled '1'. This shows how linear regression cannot adapt well to binary classification problems.
Logistic Regression
Logistic regression is a more advanced method for classification, and is more commonly used. In statistics, logistic regression (sometimes called the logistic model or logit model) is used to predict the probability of occurrence of an event by fitting data to a logistic curve. It is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical. For example, the probability that a person has a heart attack within a specified time period might be predicted from knowledge of the person's age, sex and body mass index. Logistic regression is used extensively in the medical and social sciences fields, as well as marketing applications such as prediction of a customer's propensity to purchase a product or cease a subscription.<ref>http://en.wikipedia.org/wiki/Logistic_regression</ref>
We can define a function

This is a valid conditional density function since the two components (
It looks similar to a step function, but we have relaxed it so that we have a smooth curve, and can therefore take the derivative.
The range of this function is (0,1) since
As shown on this graph of
Then we compute the complement of f1(x), and get

Function
As shown on this graph of
Since
Thus y takes value 1 with success probability
In general, we can think of the problem as having a box with some knobs. Inside the box is our objective function which gives the form to classify our input (
Since we need to find the parameters that maximize the chance of having our observed data coming from the distribution of
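To visualize the "knob" [math]\displaystyle{ \beta }[/math], here is a quick sketch plotting the scalar logistic function for a few slope values (our own illustration):

% plot the scalar logistic function 1/(1+exp(-beta*x)) for several slopes
x = -10:0.1:10;
hold on
for beta = [0.5 1 2]
    plot(x, 1./(1 + exp(-beta*x)));   % larger beta gives a sharper step
end
xlabel('x'); ylabel('f_1(x)');
title('Logistic function for \beta = 0.5, 1, 2');
hold off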
Maximum Likelihood Estimation
Given iid data points
There was some discussion in class regarding the notation. In literature, Bayesians use
Our goal is to find theta to maximize
In many cases, it's more convenient to work with the natural logarithm of the likelihood. (Recall that the logarithm is a monotone increasing function, so it preserves minima and maxima.)
Applying Maximum Likelihood Estimation to
This is the likelihood function we want to maximize. Note that
Now set
Newton's Method
Newton's Method (or the Newton-Raphson method) is a numerical method for finding successively better approximations to the roots of a real-valued function, useful when the roots cannot be found analytically.
The goal is to find
It takes an initial guess
Matlab Example
Below is the Matlab code to find a root of the function
x = 1:100;
y = x.^2 - 2500;     % function to find root of
plot(x,y);

x_opt = 90;          % starting guess
x_traversed = [];
y_traversed = [];
error = [];

for i = 1:6
    y_opt = x_opt^2 - 2500;
    y_prime_opt = 2*x_opt;

    % save results of each iteration
    x_traversed = [x_traversed x_opt];
    y_traversed = [y_traversed y_opt];
    error = [error abs(y_opt)];

    % update the root estimate
    x_opt = x_opt - (y_opt/y_prime_opt);
end

hold on;
plot(x_traversed, y_traversed, 'r', 'LineWidth', 2);
title('Progressions Towards Root of y=x^2 - 2500');
legend('y=x^2 - 2500', 'Progression');
xlabel('x'); ylabel('y');
hold off;

figure();
semilogy(1:6, error);
title('Error vs Iteration');
xlabel('Iteration'); ylabel('Absolute Y Error');
In this example Newton's method converges to the root within machine precision in only 6 iterations, as can be seen from the plot of the absolute error below.
File:newton error.png File:newton progression.png
Advantages of Logistic Regression
Logistic regression has several advantages over discriminant analysis:
- It is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group.
- It does not assume a linear relationship between the IV and DV.
- It may handle nonlinear effects.
- You can add explicit interaction and power terms.
- The DV need not be normally distributed.
- There is no homogeneity of variance assumption.
- Normally distributed error terms are not assumed.
- It does not require that the independent variables be interval.
- It does not require that the independent variables be unbounded.
Comparison Between Logistic Regression And Linear Regression
Linear regression is a regression where the explanatory variable X and response variable Y are linearly related. Both X and Y can be continuous variables, and for every one unit increase in the explanatory variable, there is a set increase or decrease in the response variable Y. A closed form solution exists for the least squares estimate of
Logistic regression is a regression where the explanatory variable X and response variable Y are not linearly related. The response variable provides the probability of occurrence of an event. X can be continuous but Y must be a categorical variable (e.g., can only assume two values, i.e. 0 or 1). For every one unit increase in the explanatory variable, there is a set increase or decrease in the probability of occurrence of the event. No closed form solution exists for the least squares estimate of
In terms of making assumptions about the data set: in LDA, we assumed that the probability density function (PDF) of each class and the priors were Gaussian and Bernoulli, respectively. However, in Logistic Regression, we assumed only that the PDF of each class had a parametric form, and we ignored the priors. Therefore, we may conclude that Logistic Regression makes fewer assumptions than LDA.
Newton-Raphson Method (Lecture: Oct 11, 2011)
Previously we derived the log-likelihood function for the logistic function.
After taking log, we can have:
This implies that:
Our goal is to find the
Newton's Method
Here is how we usually implement Newton's Method:
In practice, the convergence speed depends on |F'(x*)|, where F(x) =
The first derivative is typically called the score vector.
where
The negative of the second derivative is typically called the information matrix.
again where
We then use the following update formula to calculate successively better estimates of the optimal
Matrix Notation
Let
Let
Let
Let
The score vector, information matrix and update equation can be rewritten in terms of this new matrix notation, so the first derivative is
And the second derivative is
Therefore, we can fit a regression problem as follows
Iteratively Re-weighted Least Squares
If we reorganize this updating formula we can see it is really iteratively solving a least squares problem each time with a new weighting.
where
Recall that linear regression by least squares finds the following minimum:
Similarly, we can say that
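A sketch of the resulting iteration for two-class logistic regression (a minimal implementation under our own naming; rows of X are data points with a leading column of ones, and y is the 0-1 label vector):

% iteratively re-weighted least squares for logistic regression
[n, d] = size(X);
beta = zeros(d,1);
for iter = 1:25
    p = 1 ./ (1 + exp(-X*beta));      % current fitted probabilities
    w = p .* (1 - p);                 % weights (may need guarding if p saturates)
    W = diag(w);
    z = X*beta + (y - p) ./ w;        % adjusted response
    beta = (X'*W*X) \ (X'*W*z);       % weighted least squares step
end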
Fisher Scoring Method
Fisher Scoring is a method very similar to Newton-Raphson. It uses the expected information matrix as opposed to the observed information matrix. This distinction simplifies the problem and in particular the computational complexity. To learn more about this method and logistic regression in general, you can take Stat431/831 at the University of Waterloo.
Multi-class Logistic Regression
In a multi-class logistic regression we have K classes. For 2 classes K and l
(this is resulting from
We call
For each class from 1 to K-1 we then have:
Note that choosing Y=K is arbitrary and any other choice is equally valid.
Based on the above the posterior probabilities are given by:
Logistic Regression Vs. Linear Discriminant Analysis (LDA)
Logistic Regression Model and Linear Discriminant Analysis (LDA) are widely used for classification. Both models build linear boundaries to classify different groups. Also, the categorical outcome variables (i.e. the dependent variables) must be mutually exclusive.
LDA uses more parameters than logistic regression.
However, these two models differ in their basic approach. While Logistic Regression is more relaxed and flexible in its assumptions, LDA assumes that its explanatory variables are normally distributed, linearly related and have equal covariance matrices for each class. Therefore, it can be expected that LDA is more appropriate if the normality assumptions and equal covariance assumption are fulfilled in its explanatory variables. But in all other situations Logistic Regression should be appropriate.
Also, the total number of parameters to compute is different for Logistic Regression and LDA. If the explanatory variables have d dimensions and there are two classes to categorize, we need to estimate
Note that the number of parameters also corresponds to the minimum number of observations needed to compute the coefficients of each function. Techniques do exist though for handling high dimensional problems where the number of parameters exceeds the number of observations. Logistic Regression can be modified using shrinkage methods to deal with the problem of having less observations than parameters. When maximizing the log likelihood, we can add a
Because the Logistic Regression model has the form
In terms of the performance speed, since LDA is non-iterative, unlike Logistic Regression which uses the iterative Newton-Raphson method, LDA can be expected to be faster than Logistic Regression.
Example
(Not discussed in class.) One application of logistic regression that has recently been used is predicting the winner of NFL games. Previous predictors, like Yards Per Carry (YPC), were used to build probability models for games. Now, the Success Rate (SR), defined as the percentage of runs in which a team's point expectancy has improved, is shown to be a better predictor of a team's performance. SR is based on down, distance and yard line and is less susceptible to rare breakaway plays that can be considered outliers. More information can be found at [2].
Perceptron


The perceptron is a simple, yet effective, linear classifier and the building block for neural networks. It was invented by Rosenblatt in 1957 at Cornell Labs, and first mentioned in the paper "The Perceptron - a perceiving and recognizing automaton". The perceptron is used on linearly separable data sets: it computes a linear combination of the input features and returns the sign of the result.
For a 2-class problem and a set of inputs with d features, a perceptron will use a weighted sum and classify the information using the sign of the result (i.e., it uses a step function as its activation function). The figures on the right give an example of a perceptron. In these examples,
Perceptrons are generally trained using gradient descent. This type of learning can have 2 side effects:
- If the data sets are well separated, the training of the perceptron can lead to multiple valid solutions.
- If the data sets are not linearly separable, the learning algorithm will never finish.
Perceptrons are the simplest kind of a feedforward neural network. A perceptron is the building block for other neural networks such as Multi-Layer Perceptron (MLP) which uses multiple layers of perceptrons with nonlinear activation functions so that it can classify data that is not linearly separable.
History of Perceptrons and Other Neural Models
One of the first perceptron-like models is the "McCulloch-Pitts Neuron" model developed by McCulloch and Pitts in the 1940's <ref> W. Pitts and W. S. McCulloch, "How we know universals: the perception of auditory and visual forms," Bulletin of Mathematical Biophysics, 1947.</ref>. It uses a weighted sum of the inputs that is fed through an activation function, much like the perceptron. However, unlike the perceptron, the weights in the "McCulloch-Pitts Neuron" model are not adjustable, so the "McCulloch-Pitts Neuron" is unable to perform any learning based on the input data.
As stated in the introduction of the perceptron section, the Perceptron was developed by Rosenblatt around 1960. Around the same time as the perceptron was introduced, the Adaptive Linear Neuron (ADALINE) was developed by Widrow <ref name="Widrow"> B. Widrow, "Generalization and information storage in networks of adaline 'neurons'," Self Organizing Systems, 1959.</ref>. The ADALINE differs from the standard perceptron by using the weighted sum (the net) to adjust the weights in the learning phase. The standard perceptron uses the output to adjust its weights (i.e. the net after it passed through the activation function).
Since both the perceptron and ADALINE are only able to handle data that is linearly separable Multiple ADALINE (MADALINE) was introduced <ref name="Widrow"/>. MADALINE is a two layer network to process multiple inputs. Each layer contains a number of ADALINE units. The lack of an appropriate learning algorithm prevented more layers of units to be cascaded at the time and interest in "neural networks" receded until the 1980's when the backpropagation algorithm was applied to neural networks and it became possible to implement the Multi-Layer Perceptron (MLP).
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, who had published a book (in 1969) summing up a general feeling of frustration (against neural networks) among researchers, which was thus accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.<ref> http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Historical background </ref>
Perceptron Learning Algorithm (Lecture: Oct. 13, 2011)
Perceptron learning is done by minimizing the cost function,
The logic is as follows:

1) Because a hyper-plane
For any two arbitrary points
such that
Therefore,
2) For any point
3) We set
Where
4) The distance from a misclassified data point
where
Note that the added negative sign in front of the
Perceptron Learning using Gradient Descent
The gradient descent is an optimization method that finds the minimum of an objective function by incrementally updating its parameters in the negative direction of the derivative of this function.
In our case, the objective function to be minimized is classification error and the parameters of this function are the weights associated with the inputs,
The Learning Rate
The classification error is defined as the distance of misclassified observations to the decision boundary:
To minimize the cost function
Therefore, the gradient is
Using the gradient descent algorithm to solve these two equations, we have
If the data is linearly-separable, the solution is theoretically guaranteed to converge to a separating hyperplane in a finite number of iterations. In this situation the number of iterations depends on the learning rate and the margin. However, if the data is not linearly separable there is no guarantee that the algorithm converges.
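A sketch of the resulting update loop (our own minimal implementation; rows of X are data points with a column of ones absorbing the offset, labels y are in {-1, +1}, and rho is the learning rate):

% online perceptron learning via gradient descent on misclassified points
rho = 0.1;                                   % learning rate
beta = zeros(size(X,2), 1);                  % initial weights
for epoch = 1:100
    for i = 1:size(X,1)
        if y(i) * (X(i,:)*beta) <= 0         % point i is misclassified
            beta = beta + rho * y(i) * X(i,:)';  % move the boundary toward it
        end
    end
end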
Note that we consider the offset term
A major concern about gradient descent is that it may get trapped in local optimal solutions. Many works such as this paper by Cetin et al. and this paper by Atakulreka et al. have been done to tackle this issue.
Features
- A Perceptron can only discriminate between two classes at a time.
- When data is (linearly) separable, there are many solutions depending on the starting point.
- Even though convergence to a solution is guaranteed if the solution exists, the finite number of steps until convergence can be very large.
- The smaller the gap between the two classes, the longer the time of convergence.
- When the data is not separable, the algorithm will not converge.
- A learning rate that is too high will make the perceptron periodically oscillate around the solution unless additional steps are taken.
- The perceptron computes a linear combination of the input features and returns the sign of the result.
- This device was called a perceptron in the engineering literature of the late 1950s.
- Learning rate affects the accuracy of the solution and the number of iterations directly.
Separability and convergence
The training set D is said to be linearly separable if there exists a positive constant
Novikoff (1962) proved that the perceptron algorithm converges after a finite number of iterations if the data set is linearly separable. The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction that it has a negative dot product with, and thus can be bounded above by
Choosing a Proper Learning Rate

The choice of the learning rate value affects the final result of the gradient descent algorithm. If the learning rate is too small, the algorithm takes too long to converge, which can be a problem in situations where time is an important factor. If the learning rate is chosen to be too large, the algorithm may repeatedly overshoot the optimal point and never converge. In fact, if the step size is too large (larger than twice the largest eigenvalue of the second derivative matrix, or Hessian, of the cost function), then gradient steps will go upward instead of downward. However, the step size is not the only factor that can cause such situations: even with the same learning rate, different initial values can lead the algorithm to different outcomes. In general, some prior knowledge helps in the choice of initial values and learning rate.
There are different methods of choosing the step size in a gradient descent optimization problem. The most common method is choosing a fixed learning rate and finding a proper value for it by trial and error. This is certainly not the most sophisticated method, but it is the easiest. The learning rate can also be adaptive; that is, its value can differ at each step of the algorithm. This is an especially helpful approach when one is dealing with online training and non-stationary environments (i.e., when data characteristics vary over time); in such a case the learning rate has to be adapted at each step of the learning algorithm. Different approaches and algorithms for learning rate adaptation can be found in <ref> V P Plagianakos, G D Magoulas, and M N Vrahatis, Advances in convex analysis and global optimization Pythagorion 2000 (2001), Volume: 54, Publisher: Kluwer Acad. Publ., Pages: 433-444. </ref>.
The learning rate leading to a local error minimum in the error function in one learning step is optimal. <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
Application of Perceptron: Branch Predictor
The perceptron can be used for both online and batch learning. Online learning tasks take place in a sequence of trials. In each round of a trial, the learner is given an instance and is asked to use its current knowledge to predict a label for the point. In online learning, the true label of the point is revealed to the learner at each round after it makes a prediction. At the last stage of each round, the learner can use the feedback received on the true label of the instance to improve its belief about the data for future trials.
Instruction pipelining is a technique to increase the throughput in modern microprocessor architecture. A microprocessor instruction can be broken into several independent steps. In a single CPU clock cycle, several instructions at different stage can be executed at the same time. However, a problem arises with a branch, e.g. if-and-else- statement. It is not known whether the instructions inside the if- or else- statements will be executed until the condition is executed. This stalls the pipeline.
A branch predictor is used to address this problem. Using a predictor, the pipelined processor predicts the execution path and speculatively executes instructions in the branch. Neural networks are a good technique for prediction; however, they are expensive for microprocessor architecture. One study examined the use of the perceptron, which is less expensive and simpler to implement, as the branch predictor. The inputs are the history of binary outcomes of the executed branches. The output of the predictor is whether a particular branch will be taken. Every time a branch is executed and its true outcome is known, it can be used to train the predictor. The experiments showed that with a 4 Kb hardware budget, a global perceptron predictor had a misprediction rate of 1.94%, a superior accuracy. <ref>Daniel A. Jimenez, Calvin Lin, "Neural Methods for Dynamic Branch Prediction", ACM Transactions on Computer Systems, 2002</ref>
Feed-Forward Neural Networks
- The term 'neural networks' is used because historically, it was used to describe the processes of the brain (e.g. synapses).
- A neural network is a two-stage regression or classification model which is typically represented by a network diagram (see right).

- The feedforward neural network was the first and arguably simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.<ref>http://en.wikipedia.org/wiki/Feedforward_neural_network</ref>
- For regression, typically k = 1 (the number of nodes in the last layer): there is only one output unit at the end.
- For c-class classification, there are typically c units at the end, with the cth unit modelling the probability of class c; each output is coded as a 0-1 variable for the cth class.
- Neural networks are known as universal approximators, where a two-layer feed-forward neural network can approximate any continuous function to an arbitrary accuracy (assuming sufficient hidden nodes exist and that the necessary parameters for the neural network can be found) <ref name="CMBishop">C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006</ref>. It should be noted that fitting training data to a very high accuracy may lead to overfitting, which is discussed later in this course.
- Perceptrons can be viewed as the building blocks of feed-forward neural networks: a feed-forward neural network is a layered system of interconnected perceptron-like units, with each hidden layer consisting of many such units. Problems involving many different classes can then be handled by combining these units.
Backpropagation (Finding Optimal Weights)
There are many algorithms for calculating the weights in a feed-forward neural network. One of the most widely used approaches is the backpropagation algorithm. The application of the backpropagation algorithm to neural networks was popularized in the 1980's by researchers like Rumelhart, Hinton and McClelland (even though the backpropagation algorithm had existed before then). <ref>S. Seung, "Multilayer perceptrons and backpropagation learning" class notes for 9.641J, Department of Brain & Cognitive Sciences, MIT, 2002. Available: [3] </ref>
As the learning part of the network (the first part being feed-forward), backpropagation consists of "presenting an input pattern and changing the network parameters to bring the actual outputs closer to the desired teaching or target values." It is one of the "simplest, most general methods for the supervised training of multilayer neural networks." (pp. 288-289) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
For the backpropagation algorithm, we consider three hidden layers of nodes
Refer to figure from October 18th lecture where
We want the output of the feed-forward neural network to be close to the target values.
Instead of the sign function, which has no derivative, we use the so-called logistic function (a smoothed form of the sign function): <math>\sigma(a) = \frac{1}{1+e^{-a}}</math>.
"Notice that if σ is the identity function, then the entire model collapses to a linear model in the inputs. Hence a neural network can be thought of as a nonlinear generalization of the linear model, both for regression and classification." <ref>Friedman, J., Hastie, T. and Tibshirani, R. (2008) “The Elements of Statistical Learning”, 2nd ed, Springer.</ref>
The logistic function is a common sigmoid curve. It can model the S-shaped growth curve of a population.
To solve the optimization problem, we take the derivative with respect to weight
where
Backpropagation Continued (Lecture: Oct. 18, 2011)

From the figure to the right it can be seen that the input (
The goal is to optimize the weights to reduce the L2-norm between the target output values
Since the L2-norm is differentiable, the optimization problem can be tackled by differentiating
where
The above equation essentially shows the effect of changes in the input
where
The above equation essentially shows the effect of changes in the input
The recursive definition of
Now considering
where
Since
Overview of Full Backpropagation Algorithm
The network weights are updated using the backpropagation algorithm when each training data point <math>x_i</math> is presented to the network:
- First arbitrarily choose some random weights (preferably close to zero) for your network.
- Apply <math>x_i</math> to the FFNN's input layer, and calculate the outputs of all input neurons.
- Propagate the outputs of each hidden layer forward, one hidden layer at a time, and calculate the outputs of all hidden neurons.
- Once <math>x_i</math> reaches the output layer, calculate the output(s) of all output neuron(s) given the outputs of the previous hidden layer.
- At the output layer, compute the error term <math>\delta</math> for each output neuron.
- Then compute <math>\frac{\partial \mathrm{err}}{\partial w}</math> for all weights <math>w</math> by propagating <math>\delta</math> backwards, one layer at a time.
- Then update <math>w \leftarrow w - \rho \frac{\partial \mathrm{err}}{\partial w}</math> for all weights <math>w</math>, where <math>\rho</math> is the learning rate.
- Continue with the next data point until the weights converge.
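As a concrete illustration, here is a minimal NumPy sketch of these steps for a network with a single hidden layer and logistic activations (the lecture considers more layers; the squared-error loss, function names and parameter values are illustrative assumptions):

    import numpy as np

    def sigma(a):
        # logistic activation; note sigma'(a) = sigma(a) * (1 - sigma(a))
        return 1.0 / (1.0 + np.exp(-a))

    def train_ffnn(X, y, n_hidden=5, rho=0.1, n_epochs=100, seed=0):
        """Online backpropagation for a one-hidden-layer FFNN, squared-error loss."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # random initial weights close to zero, as suggested above
        W1 = rng.normal(0.0, 0.1, (d, n_hidden))
        W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        for _ in range(n_epochs):
            for i in range(n):
                x = X[i:i+1]                      # 1 x d input row
                z = sigma(x @ W1)                 # hidden-layer outputs
                yhat = sigma(z @ W2)              # network output
                # delta at the output layer (squared-error loss, logistic output)
                d2 = (yhat - y[i]) * yhat * (1.0 - yhat)
                # propagate the delta back to the hidden layer
                d1 = (d2 @ W2.T) * z * (1.0 - z)
                # gradient-descent updates with learning rate rho
                W2 -= rho * z.T @ d2
                W1 -= rho * x.T @ d1
        return W1, W2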
Limitations
- The convergence obtained from backpropagation learning is very slow.
- The convergence in backpropagation learning is not guaranteed.
- The result may converge to a local minimum on the error surface, since gradient descent is performed on an error surface that is generally not convex.
- Backpropagation learning requires input scaling or normalization. Inputs are usually scaled into the range of 0.1 to 0.9 for best performance.<ref>http://en.wikipedia.org/wiki/Backpropagation</ref>
- Numerical problems may be encountered when there are a large number of hidden layers, as the errors at each layer may become very small and vanish.
Deep Neural Network
Increasing the number of units within a hidden layer can increase the "flexibility" of the neural network, i.e. the network is able to fit more complex functions. Increasing the number of hidden layers, on the other hand, can increase the "generalizability" of the neural network, i.e. the network is able to generalize well to new data points that it was not trained on. A deep neural network is a neural network with many hidden layers. Deep neural networks were introduced in recent years by the same researchers (Hinton et al. <ref name="HintonDeepNN"> G. E. Hinton, S. Osindero and Y. W. Teh, "A Fast Learning Algorithm for Deep Belief Nets", Neural Computation, 2006. </ref>) that popularized the backpropagation algorithm for neural networks. Deep neural networks cannot be trained directly using backpropagation, because the errors at each layer become very small and vanish, as stated in the limitations section. To get around this problem, deep neural networks are trained a few layers at a time (e.g. two layers at a time). This process is still not straightforward, as the target values for the hidden layers are not well defined (i.e. it is unknown what the correct target values are for the hidden layers given a data point and a label). Restricted Boltzmann Machines (RBM) and greedy learning algorithms have been used to address this issue. For more information about how deep neural networks are trained, please refer to <ref name="HintonDeepNN"/>. A comparison of various neural network layouts, including deep neural networks, on a database of handwritten digits can be found at THE MNIST DATABASE.
An interesting structure of the deep neural network is where the number of nodes in each hidden layer decreases towards the "center" of the network and then increases again. See figure below for an illustration.

The central part, with the smallest number of nodes in the hidden layer, can be seen as a reduced-dimensional representation of the input data features. It would be interesting to compare the dimensionality-reduction effect of this kind of deep neural network to a cascade of PCA.
It is known that training DNNs is hard <ref>http://ecs.victoria.ac.nz/twiki/pub/Courses/COMP421_2010T1/Readings/TrainingDeepNNs.pdf</ref>, since randomly initializing weights for the network and applying gradient descent can find poor local minima. In order to better train DNNs, Exploring Strategies for Training Deep Neural Networks looks at three principles:
- Pre-training one layer at a time in a greedy way,
- Using unsupervised learning at each layer,
- Fine-tuning the whole network with respect to the ultimate criterion.
Their experiments show that by providing hints at each layer for the representation, the weights can be initialized such that a more optimal minimum can be reached.
Applications of Neural Networks
- Sales forecasting
- Industrial process control
- Customer research
- Data validation
- Risk management
- Target marketing
<ref> Reference:http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Applications of neural networks </ref>
Model Selection (Complexity Control)
Selecting a proper model complexity is a well-known problem in pattern recognition and machine learning. Systems with the optimal complexity generalize well to yet unobserved data. In the complexity control problem, we look for the model order that gives the best generalization capability for unseen data points. Model complexity here can be understood in terms of the over-fitting and under-fitting situations defined in the following section.
Over-fitting and Under-fitting
There are two situations which should be avoided in classification and pattern recognition systems:
- Overfitting
- Underfitting
Suppose there is no noise in the training data, then we would face no problem with over-fitting, because in this case every training data point lies on the underlying function, and the only goal is to build a model that is as complex as needed to pass through every training data point.
However, in the real-world, the training data are noisy, i.e. they tend to not lie exactly on the underlying function, instead they may be shifted to unpredictable locations by random noise. If the model is more complex than what it needs to be in order to accurately fit the underlying function, then it would end up fitting most or all of the training data. Consequently, it would be a poor approximation of the underlying function and have poor prediction ability on new, unseen data.
The danger of overfitting is that the model becomes susceptible to predicting values outside the range of the training data; it can cause wild predictions in multilayer perceptrons, even with noise-free data. The best way to avoid overfitting is to use lots of training data. Unfortunately, that is not always possible, and increasing the amount of training data alone does not guarantee that over-fitting will be avoided. The best strategy is to use a large-enough training set and to control the complexity of the model. The training set should have a sufficient number of data points, sampled appropriately, so that it is representative of the whole data space.
In a Neural Network, if the number of hidden layers or nodes is too high, the network will have many degrees of freedom and will learn every characteristic of the training data set. That means it will fit the training set very precisely, but will not be able to generalize the commonality of the training set to predict the outcome of new cases.
Underfitting occurs when the model we picked to describe the data is not complex enough, and has a high error rate on the training set. There is always a trade-off. If our model is too simple, underfitting could occur and if it is too complex, overfitting can occur.
Different Approaches for Complexity Control
We would like to have a classifier that minimizes the true error rate

Because the true error rate cannot be determined directly in practice, we can try using the empirical true error rate (i.e. training error rate):
However, the empirical true error rate (i.e. training error rate) is biased downward. Minimizing this error rate does not find the best classifier model, but rather ends up overfitting to the training data. Thus, this error rate cannot be used.
The complexity of a fitted model depends on the degree of the fitting function. In the graph, the region to the left of the critical point corresponds to under-fitting; the inaccuracy there results from the low complexity of the fit. The region to the right of the critical point corresponds to over-fitting, because the model does not generalize.
As illustrated in the figure to the right, the training error rate is always less than the true error rate, i.e. "biased downward". Also, the training error will always decrease with an increase in the complexity of the model used to fit the data. This does not reflect the behavior of the true error rate. The true error rate will have a unique minimum as the model complexity changes.
So, if the training error rate is the only criteria used for picking a model, overfitting will occur. Overfitting is when a model that is too complex for the data is picked. The training data error rate may become very low or even zero, but the model will not be able to generalize well to new test data points. On the other hand, underfitting can occur when a model that is not complex enough is picked (e.g. using a first order model for data that follows a second order trend). Both training and test error rates will be high in that case. The best choice for the model complexity is where the true error rate reaches its minimum point. Thus, model selection involves controlling the complexity of the model. The true error rate can be approximated using the test error rate, i.e. the test error follows the same trend that the true error rate does when the model complexity is changed.
In this case, we assume there is a test data set
To estimate
This estimator is unbiased.
which means that we just need to minimize
Thus, given a fixed Mean Squared Error (MSE), if we have low bias then we will have high variance, and vice versa (the bias-variance trade-off).
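This trade-off follows from the standard decomposition of the mean squared error of an estimator (writing the estimator as <math>\hat{\theta}</math> is our notation here, not necessarily the lecture's):

<math>\mathrm{MSE}(\hat{\theta}) = \mathbb{E}\left[(\hat{\theta} - \theta)^2\right] = \left(\mathrm{Bias}(\hat{\theta})\right)^2 + \mathrm{Var}(\hat{\theta}),</math>

so for a fixed MSE, decreasing the bias must increase the variance and vice versa.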
In order to avoid overfitting, there are two main strategies:
- Estimate the error rate
- Cross-validation
- Computing an error bound (probability inequality)
- Regularization
- We basically make the function (model) smooth by limiting the complexity or by limiting the size of the weights.
Cross Validation
Cross-validation is an approach for avoiding overfitting while modelling data that bases the choice of model parameters on a portion of the training set, while using the rest of the set for validation, i.e., some of the data is left out when fitting the model. One round of the process involves partitioning the data set into two complementary subsets, fitting the model to one subset (called the training set), and testing the model against the other subset (called the validation or testing subset). This is usually repeated several times using different partitions in order to reduce variability, and the validation results are then averaged over the rounds.
K-Fold Cross Validation
Instead of minimizing the training error, here we minimize the validation error.
A common type of cross-validation that is used for relatively small data sets is K-fold cross-validation, the algorithm for which can be stated as follows:
Let h denote a classification model to be fitted to a given data set.
- Randomly partition the original data set into K subsets of approximately the same size. A common choice for K is K = 10.
- For k = 1 to K do the following
- Remove subset k from the data set
- Estimate the parameters of each different classification model based only on the remaining data points. Denote the resulting function by h(k)
- Use h(k) to predict the data points in subset k. Denote by
the observed error rate.
- Compute the average error
The best classifier is the model that results in the lowest average error rate.
A common variation of k-fold cross-validation uses a single observation from the original sample as the validation data, and the remaining observations as the training data. This is then repeated such that each sample is used once for validation. It is the same as a K-fold cross-validation with K being equal to the number of points in the data set, and is referred to as leave-one-out cross-validation. <ref> stat.psu.edu/~jiali/course/stat597e/notes2/percept.pdf</ref>
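The procedure above translates almost directly into code. The following is a minimal Python sketch; the h_factory interface (a function that fits a classifier and returns a prediction function) is an illustrative assumption, not a standard API. Leave-one-out cross-validation corresponds to calling this with K equal to the number of data points.

    import numpy as np

    def k_fold_cv_error(h_factory, X, y, K=10, seed=0):
        """Estimate the validation error of a model by K-fold cross-validation."""
        rng = np.random.default_rng(seed)
        # randomly partition the data into K subsets of roughly equal size
        folds = np.array_split(rng.permutation(len(y)), K)
        errors = []
        for k in range(K):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(K) if j != k])
            predict = h_factory(X[train], y[train])   # fit h(k) without subset k
            errors.append(np.mean(predict(X[val]) != y[val]))
        return np.mean(errors)                        # average error over the folds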
Alternatives to Cross Validation for model selection:
- Akaike Information Criterion (AIC): This approach ranks models by their AIC values, and the model with the minimum AIC is chosen. The AIC value is <math>AIC = 2k - 2\ln(\hat{L})</math>, where <math>k</math> is the number of parameters and <math>\hat{L}</math> is the maximum value of the likelihood function of the model. This selection method penalizes the number of parameters.<ref>http://en.wikipedia.org/wiki/Akaike_information_criterion</ref>
- Bayesian Information Criterion (BIC): It is similar to AIC but penalizes the number of parameters even more. The BIC value is <math>BIC = k\ln(n) - 2\ln(\hat{L})</math>, where <math>n</math> is the sample size.<ref>http://en.wikipedia.org/wiki/Bayesian_information_criterion</ref>
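Given the formulas above, computing the two criteria is a one-liner each; this small Python sketch assumes the maximized log-likelihood has already been computed:

    import numpy as np

    def aic(log_likelihood, k):
        # AIC = 2k - 2 ln(L-hat), where k is the number of parameters
        return 2.0 * k - 2.0 * log_likelihood

    def bic(log_likelihood, k, n):
        # BIC = k ln(n) - 2 ln(L-hat), where n is the sample size
        return k * np.log(n) - 2.0 * log_likelihood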
Model Selection Continued (Lecture: Oct. 20, 2011)
Error Bound Computation
Apart from cross validation, another approach for estimating the error rates of different models is to find a bound to the error. This works well theoretically to compare different models, however, in practice the error bounds are not a good indication of which model to pick because the error bounds are not tight. This means that the actual error observed in practice may be a lot better than what was indicated by the error bounds. This is because the error bounds indicate the worst case errors and by only comparing the error bounds of different models, the worst case performance of each model is compared, but not the overall performance under normal conditions.
Penalty Function
Another approach for model selection to avoid overfitting is to use regularization which minimizes the squared error plus a penalty function.
This means minimizing the following new objective function:
where
Thus, we want
Some Matrix Differentiation Properties
Example: Penalty Function in Neural Network Model Selection
In MLP neural networks, the activation function is of the form of a logistic function, where the function behaves almost linearly when the input is close to zero (i.e., the weights of the neural network are close to zero), while the function behaves non-linearly as the magnitude of the input increases (i.e., the weights of the neural network become larger). In order to penalize additional model complexity (i.e., unnecessary non-linearities in the model), large weights will be penalized by the penalty function.
The objective function to minimize with respect to the weights
The derivative of the objective function with respect to the weights
This objective function is used during gradient descent. In practice, cross-validation is used to determine the value of the regularization parameter.
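A minimal sketch of the resulting update, assuming the penalty has the quadratic form <math>\lambda \sum w^2</math> implied above (so its gradient contribution is <math>2\lambda w</math>); the function name and signature are illustrative:

    import numpy as np

    def penalized_gradient_step(w, grad_err, lam, rho):
        """One gradient-descent step on err(w) + lam * sum(w**2)."""
        # the quadratic penalty contributes 2 * lam * w to the gradient;
        # grad_err is the gradient of the unpenalized error (e.g. from backprop)
        return w - rho * (grad_err + 2.0 * lam * w)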
Radial Basis Function Neural Network (RBF NN)
A Radial Basis Function Neural Network (RBF NN) is a type of neural network with only one hidden layer, in addition to an input and an output layer. Each node within the hidden layer uses a radial basis activation function, hence the name. The weights from the input layer to the hidden layer are always "1" in an RBF NN, while the weights from the hidden layer to the output layer are adjusted during training. The output unit implements a weighted sum of the hidden unit outputs. The mapping from the input to the hidden layer is nonlinear while the mapping from the hidden layer to the output is linear. Due to their nonlinear approximation properties, RBF NNs are able to model complex mappings, which perceptron-based neural networks can only model by means of multiple hidden layers. An RBF NN can be trained without backpropagation since it has a closed-form solution. RBF NNs have been successfully applied to a large diversity of applications including interpolation, chaotic time-series modeling, system identification, control engineering, electronic device parameter modeling, channel equalization, speech recognition, image restoration, shape-from-shading, 3-D object modeling, motion estimation and moving-object segmentation, data fusion, etc. <ref>www-users.cs.york.ac.uk/adrian/Papers/Others/OSEE01.pdf</ref>
The Network System
1. Input: <math>n</math> data points
2. Basis function (the single hidden layer):
There are many choices for the basis function. The most commonly used is the radial (Gaussian) basis: <math>\phi_j(x) = \exp\left(-\frac{\|x - \mu_j\|^2}{2\sigma_j^2}\right)</math>, where <math>\mu_j</math> is the centre of the <math>j</math>-th basis function.
3. Weights associated with the last layer:
4. Output:
Alternatively, the output can be written in matrix form as <math>\hat{Y} = \Phi W</math>, where
<math>\hat{Y}</math> is the <math>n \times k</math> matrix of output variables,
<math>\Phi</math> is the <math>n \times m</math> matrix of radial basis function outputs, and
<math>W</math> is the <math>m \times k</math> matrix of weights.
Here, <math>n</math> is the number of data points, <math>m</math> the number of basis functions, and <math>k</math> the number of outputs.
Network Training
To construct m basis functions, first cluster data points into m groups. Then find the centre of each cluster
Clustering: the K-means algorithm <ref>This section is taken from Wikicourse notes stat441/841 fall 2010.</ref>
K-means is a commonly applied technique in clustering observations into groups by computing the distance from each of individual observations to the cluster centers. A typical K-means algorithm can be described as follows:
Step 1: Select the number of clusters <math>m</math>.
Step 2: Randomly select <math>m</math> observations out of the <math>n</math> observations, to be used as the <math>m</math> initial centres.
Step 3: For each remaining data point, compute the distance to each of the current centres and assign the point to the cluster with the minimum distance.
Step 4: Obtain updated cluster centres by computing the mean of all the observations in the corresponding clusters.
Step 5: Repeat Steps 3 and 4 until the differences between the old and new cluster centres are acceptably small.
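The steps above can be written as a short Python sketch (the random initialization and the convergence test are illustrative choices):

    import numpy as np

    def k_means(X, m, n_iters=100, seed=0):
        """Cluster the rows of X into m groups; return the centres and labels."""
        rng = np.random.default_rng(seed)
        # Step 2: pick m observations at random as the initial centres
        centres = X[rng.choice(len(X), m, replace=False)]
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iters):
            # Step 3: assign every point to its nearest centre
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Step 4: recompute each centre as the mean of its cluster
            # (keeping the old centre if a cluster happens to be empty)
            new_centres = np.array([X[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centres[j]
                                    for j in range(m)])
            # Step 5: stop once the centres no longer move appreciably
            if np.allclose(new_centres, centres):
                break
            centres = new_centres
        return centres, labels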
Having constructed the basis functions, next minimize the objective function (the sum of squared errors) with respect to the weights <math>W</math>.
The solution to this least-squares problem is <math>W = (\Phi^T \Phi)^{-1} \Phi^T Y</math>.
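A minimal Python sketch of this training procedure, assuming Gaussian basis functions with a common, user-chosen width; the centres can be taken from the k-means sketch above:

    import numpy as np

    def fit_rbf_weights(X, Y, centres, width=1.0):
        """Fit the output-layer weights of an RBF network by least squares."""
        # Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 width^2)), the n x m design matrix
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        Phi = np.exp(-d**2 / (2.0 * width**2))
        # W = (Phi^T Phi)^{-1} Phi^T Y, computed stably via lstsq
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        return W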
Single Basis Function vs. Multiple Basis Functions
Suppose the data points belong to a mixture of Gaussian distributions.
Under single basis function approach, every class in
Compare
with
Here, the basis function
Under multiple basis function approach, a layer of j basis functions are placed between
Here, instead of using single Gaussian to represent each class, we use a "mixture of Gaussian" to represent.
The probability function of
Multiplying both the numerator and the denominator by
where
since
If the weights in the radial basis neural network have proper properties of probability function, then the basis function
In conclusion, given a mixture of Gaussian distributions, the multiple basis function approach is better than the single basis function approach, since the former produces a non-linear boundary.
RBF Network Complexity Control (Lecture: Oct. 25, 2011)
When performing model selection, overfitting is a common issue. As model complexity increases, there comes a point where the model becomes worse and worse at fitting real data even though it fits the training data better; it becomes too sensitive to small perturbations in the training data that should be treated as noise to allow flexibility in the general case. In this section we will show that training error (the empirical error on the training data) is a poor estimator of the true error, and that minimizing training error will increase complexity and result in overfitting. We will show that test error (the empirical error on the test data) is a better estimator of the true error.
First, some notation is defined.
The assumption for the training data set is that it consists of the true model values plus additive noise.
Important Notation
Let:
- <math>f</math> denote the true model;
- <math>\hat{f}</math> denote the prediction/estimated model, which is generated from a training data set (the observations are noisy);
- <math>\hat{\mathrm{err}}</math> denote the empirical error based on actual data points; this can be either test error or training error depending on the data points used, and it measures the difference between the predictions and the observed values;
- <math>\mathrm{err}</math> denote the true error or generalization error, which is what we are trying to minimize; it measures the difference between the predictions and the true model;
- MSE denote the mean squared error.
We use the training data to estimate our model parameters.
For a given point
This formula partitions the training error into the true error and other error terms. Our goal is to select the model that minimizes the true error, so we must try to understand the effects of these other error terms if we are to use the training error as an estimate of the true error.
Moreover,
The first term is essentially true error. The second term is a constant. The third term is problematic, since in general this expectation is not 0. We will break this into 2 cases to simplify the third term.
Case 1: Estimating Error using Data Points from Test Set
In Case 1, the empirical error is test error and the data points used to calculate test error are from the test set, not the training set. That is,
We can rewrite the third term in the following way, since both
(The covariance term appears here because
Consider
Since
The equation for the expectation of empirical error simplifies to the following:
This result applies to every output value in the test data set, so we can generalize this equation by summing over all m data points that have NOT been seen by the model:
Rearranging to solve for true error, we get
We see that test error is a good estimator for true error, since they only differ by a constant. Minimizing test error will minimize true error. There is no term adding unnecessary complexity. This is the justification for Cross Validation.
Case 2: Estimating Error using Data Points from Training Set
In Case 2, the data points used to calculate error are from the training set, so
Now
Stein's Lemma states that if
Substitute
Remark:
The reason why
where
We take
Generalizing the equation for all n data points in the training set:
This equation for the true error is called Stein's unbiased risk estimator (SURE). Note that
RBF Network Complexity Control
Problem: assuming we want to fit our data using a radial basis function network, how many radial basis functions should be used? With too few RBFs the model will have limited classification capability; with too many RBFs the model could overfit the training data. For further information about RBF network complexity control check [4]
We can use Stein's unbiased risk estimator (SURE) to give us an approximation for how many RBFs to use.
The SURE equation is
where
Optimal Number of Basis in RBF
The optimal number of basis functions should be chosen in order to minimize the generalization error
The formula for an RBF network is:
where
Given the training labels
For simplification let
Our optimal output then becomes
We calculate
Consider only one node of the network. In this case we can write:
Note here that
By taking the derivative of
Here we recall that
Sketch of Trace Cyclical Property Proof:
For
With that in mind, for
Note that since
The SURE equation then becomes
As the number of RBFs
One way to estimate the noise variance is
This application of SURE is straightforward because minimizing Radial Basis Function error reduces to a simple least squares estimator problem with a linear solution. This makes computing
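A sketch of this selection procedure in Python. It assumes the Cp-style form of the risk estimate for a linear smoother, <math>\mathrm{risk}(m) = \mathrm{RSS} - n\sigma^2 + 2\sigma^2 m</math>, where the hat matrix of an m-basis least-squares fit has trace m (consistent with the trace argument sketched above); the random choice of centres and the width are illustrative:

    import numpy as np

    def sure_risk(X, Y, m, sigma2, width=1.0, seed=0):
        """SURE/Cp-style risk estimate for an RBF fit with m basis functions."""
        rng = np.random.default_rng(seed)
        n = len(X)
        # illustrative choice of centres: a random subsample of the data
        # (the k-means sketch above could be used instead)
        centres = X[rng.choice(n, m, replace=False)]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        Phi = np.exp(-d**2 / (2.0 * width**2))
        W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        rss = np.sum((Y - Phi @ W) ** 2)
        # risk = RSS - n*sigma2 + 2*sigma2*m for a linear smoother with trace m
        return rss - n * sigma2 + 2.0 * sigma2 * m

    # choose the number of basis functions with the smallest estimated risk:
    # m_best = min(range(1, 20), key=lambda m: sure_risk(X, Y, m, sigma2))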
RBF Network Complexity Control (Alternate Approach)
An alternate approach (not covered in class) to tackling RBF Network complexity control is controlling the complexity by similarity <ref name="Eickhoff">R. Eickhoff and U. Rueckert, "Controlling complexity of RBF networks by similarity," Proceedings of European Symposium on Artificial Neural Networks, 2007</ref>. In <ref name="Eickhoff" />, the authors suggest looking at the similarity between the basis functions multiplied by their weight by determining the cross-correlations between the functions. The cross-correlation is calculated as follows:
where
If the cross-correlation between two functions is high, <ref name="Eickhoff" /> suggests that the two basis functions be replaced with one basis function that covers the same region of both basis functions and that the corresponding weight of this new basis function be the average of the weights of the two basis functions. For the case of Gaussian radial basis functions, the equations for finding the new weight (
where
This process is repeated until the cross-correlation between the basis functions falls below a certain threshold, which is a tunable parameter.
Note 1) Though not extensively discussed in <ref name="Eickhoff" />, this approach to RBF network complexity control presumably requires a starting RBF network with a large number of basis functions.
Note 2) This approach does not require the repeated implementation of differently sized RBF Networks to determine the empirical error, unlike the approach using SURE. However, the SURE approach is backed up by theory to find the number of radial basis functions that optimizes the true error and does not rely on some tunable threshold. It would be interesting to compare the results of both approaches (in terms of the resulting RBF Network obtained and the test error).
Support Vector Machines (Lecture: Oct. 27, 2011)
Support vector machines (SVMs) are learning systems that use a hypothesis space of linear functions in a high dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. SVMs are kernel machines based on the principle of structural risk minimization, which are used in applications of regression and classification; however, they are mostly used as binary classifiers. Although the subject can be said to have started in the late seventies (Vapnik, 1979), it has been receiving increasing attention from researchers. It is such a powerful method that in the few years since its introduction it has outperformed most other systems in a wide variety of applications, especially in pattern recognition.
Support Vector Machines are motivated by the idea of training linear machines with margins. It involves preprocessing the data to represent patterns in a high dimension (generally much higher than the original feature space). Note that using a suitable non-linear mapping to a sufficiently high dimensional space, the data will always be separable. (p. 263) <ref>[Duda, Richard O., Hart, Peter E., Stork, David G. "Pattern Classification". Second Edition. John Wiley & Sons, 2001.]</ref>
A suitable way to describe the interest in SVMs can be seen in the following quote: "The problem which drove the initial development of SVMs occurs in several guises - the bias variance tradeoff (Geman, Bienenstock and Doursat, 1992), capacity control (Guyon et al., 1992), overfitting (Montgomery and Peck, 1992) - but the basic idea is the same. Roughly speaking, for a given learning task, with a given finite amount of training data, the best generalization performance will be achieved if the right balance is struck between the accuracy attained on that particular training set, and the “capacity” of the machine, that is, the ability of the machine to learn any training set without error. A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist’s lazy brother, who declares that if it’s green, it’s a tree. Neither can generalize well. The exploration and formalization of these concepts has resulted in one of the shining peaks of the theory of statistical learning (Vapnik, 1979)." (from A Tutorial on Support Vector Machines for Pattern Recognition)
Structural Risk Minimization and VC Dimension
Linear learning machines are the fundamental formulations of SVMs. The objective of the linear learning machine is to find the linear function that minimizes the generalization error from a set of functions which can approximate the underlying mapping between the input and output data. Consider a learning machine that implements linear functions in the plane as decision rules
With n given training data with input values
where
The generalization error can be expressed as
which measures the error for all input/output patterns that are generated from the underlying generator of the data characterized by the probability distribution
The term on the left side represents the generalization error. The first term on the right-hand side is the empirical error calculated from the training data, and the second term is called the VC confidence, which is associated with the VC dimension h of the learning machine. The VC dimension describes the complexity of the learning system. The relationship between these three quantities is illustrated in the figure below:
Thus, even though we don’t know the underlying distribution based on which the data points are generated, it is possible to minimize the upper bound of the generalization error in place of minimizing the generalization error. That means one can minimize the expression in the right hand side of the inequality above.
Unlike the principle of Empirical Risk Minimization (ERM) applied in neural networks, which aims to minimize the training error, SVMs implement Structural Risk Minimization (SRM) in their formulations. The SRM principle takes both the training error and the complexity of the model into account, and finds the minimum of the sum of these two terms as a trade-off solution (as shown in the figure above) by searching a nested set of functions of increasing complexity.
Introduction
The Support Vector Machine is a popular linear classifier. Suppose that we have a data set with two classes that can be separated using a hyper-plane. The Support Vector Machine (SVM) is a method which gives us the "best" such hyper-plane. There are other classifiers that find a separating hyper-plane, notably the Perceptron. However, the output of the Perceptron and of many other algorithms depends on the input parameters, so every run of the Perceptron can give a different output. The SVM, on the other hand, finds the hyper-plane that separates the data and is farthest from the closest points. This is also known as the max-margin hyper-plane.
No matter whether the training data are linearly-separable or not, the linear boundary produced by any of the versions of SVM is calculated using only a small fraction of the training data rather than using all of the training data points. This is much like the difference between the median and the mean. SVM can also be considered a special case of Tikhonov regularization. A special property is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum margin classifiers. The key features of SVM are the use of kernels, the absence of local minima, the sparseness of the solution (i.e. few training data points are needed to construct the linear decision boundary) and the capacity control obtained by optimizing the margin.(Shawe-Taylor and Cristianini (2004)). Another key feature of SVM, as discussed below, is the use of slack variables to control the amount of tolerable misclassification on the training data, which form the soft margin SVM. This key feature can serve to improve the generalization of SVM to new data.
- Infinitely many Perceptron solutions
- Out of many, how do we choose?
With the Perceptron there can be infinitely many solutions for which the training error is zero. The question is: among all these possible solutions, which one is the best? Intuitively, we choose the solution with the largest margin between the two classes. Choosing the hyper-plane with the maximum margin between the two classes makes sense because at test time more points will be observed, and they may lie closer to the other class; the safest choice of hyper-plane is therefore the one farthest from both classes.
One of the great things about SVM is that not only does it have solid theoretical guarantees, it also works very well in practice.
To summarize:

If the data is linearly separable, then there exist infinitely many solution hyperplanes. Among all of these, the best choice is the hyperplane that is furthest from both classes. So our goal is to find, among all possible hyperplanes, the one that is furthest from both classes, i.e. the hyperplane with the maximum margin.
What we mean by margin is the distance between the hyperplane and the closest point in a class.

Setting

- We assume that the data is linearly separable
- Our classifier will be of the form
- We will assume that our labels are <math>y \in \{-1, +1\}</math>
The goal is to classify the point
Now we check how far this point is from the hyperplane: points on one side of the hyperplane give a negative value, and points on the other side give a positive value. Points are classified by the sign of this value. So
Side Note: A memory from the past of Dr. Ali Ghodsi
When the aforementioned professor was a small child, in grade 2, he was often careless with the accuracy of certain curly brackets when writing what one can only assume were math proofs. One day his teacher grew impatient and demanded that a page of perfect curly brackets be produced by the young Dr. (he may or may not have been a doctor at the time). And now, whenever Dr. Ghodsi writes a tidy curly bracket, he is reminded of this, and it always brings a smile to his face.
From memories of the past.
(the number 20 was involved in the story, either the number of pages or the number of lines)
Case 1: Linearly Separable (Hard Margin)
In this case, the classifier will be
Objective Function

Observation 1:
So
Observation 2: If
Observation 3: Let
Observation 4: Let margin be the distance between the hyper-plane and the closest point. Since
Our goal is to maximize the margin. This is also known as the Max/Min problem in Optimization. When defining the hyperplane, what is important is the direction of
So with a bit of abuse of notation we can assume that
Therefore, the problem translates to:
So, it is possible to re-interpret the problem as:
s.t.
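For reference, in one common notation (weight vector <math>w</math> and bias <math>b</math>, which may differ from the symbols used in lecture), the resulting problem is the standard hard-margin primal:

<math>\min_{w,\,b} \; \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i\,(w^T x_i + b) \ge 1, \quad i = 1, \dots, n.</math>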
Support Vectors
Support vectors are the training points that determine the optimal separating hyperplane that we seek. Also, they are the most difficult points to classify and at the same time the most informative for classification.
Writing Lagrangian Form of Support Vector Machine
The Lagrangian form using Lagrange multipliers and constraints that are discussed below is introduced to ensure that the optimization conditions are satisfied, as well as finding an optimal solution (the optimal saddle point of the Lagrangian for the classic quadratic optimization). The problem will be solved in dual space by introducing
Dual form of the optimization problem:
To find the optimal value, we set the derivatives equal to zero:
To get the dual form of the optimization problem, we substitute the above two equations into the definition of
The above function is a dual objective function, so we should minimize it:
The dual function is a quadratic function of several variables subject to linear constraints. This optimization problem is called quadratic programming and is much easier to solve than the primal problem. It is possible to write the dual form using matrices:
Since
Discussion on the Dual of the Lagrangian
As mentioned in the previous section, solving the dual form of the Lagrangian requires quadratic programming. Quadratic programming can be used to minimize a quadratic function subject to a set of constraints. In general, for a problem with N variables, the quadratic programming solution has a computational complexity of
Support Vector Machine Continued (Lecture: Nov.1, 2011)
In the previous lecture we considered the case where the data is linearly separable. The goal of the support vector machine classifier is to find the hyper-plane that maximizes the margin to each of the two classes. We derived the following optimization problem based on the SVM methodology:
Subject to the constraint:
This is the primal form of the optimization problem. Then we derived the dual of this problem:
Subject to constraints:
This is a quadratic programming (QP) problem. QP problems have been thoroughly studied and can be solved efficiently. This particular problem has a convex objective function as well as convex constraints, which makes solving the problem easier and ensures that the computed solution is the global optimum. These properties are of significant importance for classifiers, and they constitute one of the most important strengths of the SVM classifier.
We are able to find
But in order to define the hyper-plane uniquely we also need to find
When finding the dual objective function, there are set of conditions called KKT that should be satisfied.
KKT Conditions
KKT stands for Karush-Kuhn-Tucker (initially named after Kuhn and Tucker's work in the 1950's, however, it was later discovered that Karush had stated the conditions back in the late 1930's) <ref name="CMBishop" />
Suppose we want to find
If
1)
2)
3)
4)
If any of the above four conditions are not satisfied, then the primal function is not feasible.
Looking more closely at condition 3:
We see that this is at the optimal point.
It is obvious that we can have either
If
If
Note here that
To compute
We can compute
where
Everything derived so far is based on the assumption that the data is linearly separable; this setting is called the hard-margin hyper-plane. In practice that might not always be the case, and it may be impossible to find a hyper-plane that completely separates the two classes of data.
Kernel Trick
In most of the real world cases the data points are not linearly separable. How can the above methods be generalized to the case where the decision function is not a linear function of the data? Boser, Guyon and Vapnik, 1992, showed that a rather old trick (Aizerman, 1964) can be used to accomplish this in an astonishingly straightforward way. First notice that the only way in which the data appears in the dual-form optimization problem is in the form of dot products:
In other words, a linear learning machine can be employed in the higher dimensional feature space to solve the original non-linear problem. Then of course the training algorithm would only depend on the data through dot products in
So the hypothesis in this case would be
which is linear in terms of the new space that
where
subject to
The cost function
where
Several typical choices of kernels are linear, polynomial, Multi-Layer Perceptron (MLP) and Gaussian or Radial Basis Function (RBF) kernel. Their expressions are as following:
Linear kernel: <math>k(x_i, x_j) = x_i^T x_j</math>
Polynomial kernel: <math>k(x_i, x_j) = (x_i^T x_j + 1)^d</math>
MLP (sigmoid) kernel: <math>k(x_i, x_j) = \tanh(a \, x_i^T x_j + b)</math>
Gaussian (RBF) kernel: <math>k(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)</math>
where the degree <math>d</math>, the constants <math>a, b</math> and the width <math>\sigma</math> are kernel parameters chosen by the user.
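These kernels are straightforward to compute; the following Python sketch uses illustrative parameter defaults. A quick numerical check of Mercer's condition is to verify that the Gram matrix has no negative eigenvalues (note that the MLP/sigmoid kernel is not positive semi-definite for all parameter choices):

    import numpy as np

    def linear_kernel(X, Z):
        return X @ Z.T

    def polynomial_kernel(X, Z, d=2):
        return (X @ Z.T + 1.0) ** d

    def mlp_kernel(X, Z, a=1.0, b=0.0):
        # not positive semi-definite for all (a, b); use with care
        return np.tanh(a * (X @ Z.T) + b)

    def rbf_kernel(X, Z, sigma=1.0):
        d2 = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2) ** 2
        return np.exp(-d2 / (2.0 * sigma**2))

    # quick Mercer check: the Gram matrix should have no negative eigenvalues
    # (up to numerical error), e.g. np.linalg.eigvalsh(rbf_kernel(X, X)).min() >= -1e-10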
Kernel functions satisfying Mercer's conditions not only enable implicit mapping of the data from input space to feature space, but also ensure the convexity of the cost function, which leads to a unique optimum. Mercer's condition states that a continuous symmetric function
In the case of Gaussian or RBF kernel for example,
The choice of which kernel is best for a particular application has to be determined through trial and error. Normally, the Gaussian (RBF) kernel is best suited for classification tasks with SVM.
The video below shows a graphical illustration of how a polynomial kernel works to a get better sense of kernel concept:
Mapping data points to a higher dimensional space using a polynomial kernel
Kernel Properties
Kernel functions must be continuous and symmetric, and most preferably should have a positive (semi-)definite Gram matrix. Kernels which satisfy Mercer's theorem are positive semi-definite, meaning their kernel matrices have no negative eigenvalues. The use of a positive definite kernel ensures that the optimization problem will be convex and the solution will be unique. <ref> Reference:http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html#kernel_properties </ref>
Case 2: Linearly Non-Separable Data (Soft Margin)
The original SVM was formulated for separable data. Since this is a very strong requirement, Vladimir Vapnik and Corinna Cortes later suggested removing it, leading to the soft-margin support vector machine. One of the advantages of SVM is that it is relatively easy to generalize to the case where the data is not linearly separable.
When the two classes are not linearly separable, it is impossible to have a hyperplane that completely separates them. In this case the idea is to minimize the number of points that cross the margin and are misclassified, so we penalize the points that violate the constraint:
Hence we allow some of the points to cross the margin (or equivalently violate our constraint) but on the other hand we penalize our objective function (so that the violations of the original constraint remains low):
And now our constraint is as follows:
We have to check that all KKT conditions are satisfied:
Now we have to write this into a Lagrangian form.
Support Vector Machine Continued (Lecture: Nov. 3, 2011)
Case 2: Linearly Non-Separable Data (Soft Margin [6]) Continued
Recall from last time that soft margins are used instead of hard margins when we are using SVM to classify data points that are not linearly separable.
Soft Margin SVM derivation of dual
The soft-margin SVM optimization problem is defined as:
In other words, we have relaxed the constraint for each
Now we can define our Lagrangian:
Our K.K.T. conditions are as follows:
So, simplifying the Lagrangian the same way we did with the hard margin case, we get the following:
subject to
Thus, we can derive that the only difference with the soft margin case is the constraint
Soft Margin SVM formulation summary
In summary, the primal form of the soft-margin SVM is given by:
The corresponding dual form which we derived above is:
Note, the soft-margin dual objective is identical to hard margin dual objective! The only difference is now
Also note,
Finally note that as
Soft Margin SVM problem interpretation
As in the hard-margin case, the dual formulation for the soft margin given above allows us to interpret the role of certain points as support vectors.
We consider three cases:
Case 1:
From KKT condition 1 (third part),
From KKT condition 3 (second part)
Thus this is a point that violates the margin, and we say
Case 2:
From KKT condition 1 (third part),
From KKT condition 3 (second part)
Finally, from KKT condition 3 (first part),
Case 3:
From KKT condition 1 (third part),
From KKT condition 3 (second part)
Finally, from KKT condition 3 (first part),
These three scenarios are depicted in the figure.
Soft Margin SVM with kernel
Like hard-margin SVM, we can use the kernel trick to find a non-linear classifier using the dual formulation.
In particular, we define a non-linear mapping for
The dual problem we solve is:
where
Soft Margin SVM implementation
The SVM optimization problem is a quadratic program and we can use any quadratic solver to accomplish this. For example, MATLAB's optimization toolbox provides quadprog. Alternatively, CVX (by Stephen Boyd) is an excellent optimization toolbox that integrates with MATLAB and allows one to enter convex optimization problems as though they were written on paper (and it is free).
We prefer to solve the dual since it is an easier problem (and also allows to use a Kernel). Using CVX this would be coded as
    K = X*X';                % Linear kernel
    H = (y*y') .* K;
    cvx_begin
        variable alpha(M,1);
        maximize( sum(alpha) - 0.5*alpha'*H*alpha );
        subject to
            y'*alpha == 0;
            alpha >= 0;
            alpha <= gamma;
    cvx_end
which provides us with optimal
Now we can obtain
Note,
Finally, we can classify a new data point
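For the linear kernel, these final steps (recovering the hyper-plane and classifying a new point) might look as follows in Python; this is a sketch assuming alpha comes from a QP solver such as the CVX program above, with illustrative names:

    import numpy as np

    def recover_classifier(X, y, alpha, gamma, tol=1e-6):
        """Recover (w, b) from the dual solution of a linear soft-margin SVM."""
        # w is a weighted sum of the support vectors (alpha_i > 0)
        w = (alpha * y) @ X
        # b is computed from margin support vectors, i.e. 0 < alpha_i < gamma
        on_margin = (alpha > tol) & (alpha < gamma - tol)
        b = np.mean(y[on_margin] - X[on_margin] @ w)
        return w, b

    # classify a new data point x:  label = np.sign(x @ w + b)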
Multiclass Support Vector Machines
A number of approaches exist for applying SVMs to classification tasks with more than two classes. The most straightforward approach, given <math>K</math> classes, is to train <math>K</math> separate one-versus-the-rest SVMs, each separating one class from all the others.
Variations of the one-versus-the-rest approach exist to address its problems:
- The class labels of each SVM can be changed from +1 and -1 to +1 (for the given class) and <math>-1/(K-1)</math> (for the remaining classes) in an attempt to counterbalance the unbalanced number of data points for each class <ref>Y. Lee, Y. Lin, and G. Wahba, "Multicategory Support Vector Machines," Technical Report, 2001</ref>.
- The <math>K</math> SVMs can be trained at the same time using one objective function <ref>J. Weston and C. Watkins, "Multi-class Support Vector Machines," Proceedings ESANN, 1999</ref>.
Advantages of Support Vector Machines
- SVMs provide good out-of-sample generalization. This means that, by choosing an appropriate generalization grade, SVMs can be robust even when the training sample has some bias. This is mainly due to the selection of the optimal hyperplane.
- SVMs deliver a unique solution, since the optimality problem is convex. This is an advantage compared to neural networks, which have multiple solutions associated with local minima and for this reason may not be robust over different samples.
- State-of-the-art accuracy on many problems.
- SVMs can handle various data types through an appropriate choice of kernel.
Disadvantages of Support Vector Machines
- Difficulty in the choice of the kernel (which we will study later).
- Limitations in speed and size, both in training and testing.
- Discrete data presents another problem, although with suitable rescaling excellent results have nevertheless been obtained.
- The optimal design for multiclass SVM classifiers is a further area for research.
- A problem with SVMs is the high algorithmic complexity and extensive memory requirements of the required quadratic programming in large-scale tasks.
The Naive Bayes Classifier
The naive Bayes classifier is a very simple (and often effective) classifier based on Bayes rule. For further reading check [7]
The naive Bayes assumption is that all the features are conditionally independent given the class label. Even though this is usually false (features are often dependent), the resulting model is easy to fit and works surprisingly well.
Each feature or variable
Thus the Bayes classifier is
where
We can see this as a direct application of Bayes' rule
with
Note: earlier we assumed class-conditional densities which were multivariate normal with a dense covariance matrix. Here we are forcing the covariance matrix to be diagonal. This simplification, while often unrealistic, can provide a more robust model.
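A minimal Gaussian naive Bayes sketch along these lines (diagonal covariances, class priors estimated by class frequencies; the class interface and the variance-smoothing constant are illustrative):

    import numpy as np

    class GaussianNaiveBayes:
        """Minimal Gaussian naive Bayes: a diagonal covariance for every class."""

        def fit(self, X, y):
            self.classes = np.unique(y)
            self.priors, self.means, self.vars = {}, {}, {}
            for c in self.classes:
                Xc = X[y == c]
                self.priors[c] = len(Xc) / len(X)      # P(Y = c)
                self.means[c] = Xc.mean(axis=0)        # per-feature means
                self.vars[c] = Xc.var(axis=0) + 1e-9   # per-feature variances
            return self

        def predict(self, X):
            # score = log P(Y=c) + sum_j log N(x_j; mean_cj, var_cj),
            # using the conditional-independence assumption
            scores = []
            for c in self.classes:
                log_lik = -0.5 * np.sum(
                    np.log(2.0 * np.pi * self.vars[c])
                    + (X - self.means[c]) ** 2 / self.vars[c], axis=1)
                scores.append(np.log(self.priors[c]) + log_lik)
            return self.classes[np.argmax(scores, axis=0)]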
K-Nearest-Neighbors(k-NN)

Given a data point x, find the k nearest data points to x and classify x using the majority vote of these k neighbors (k is a positive integer, typically small); see the sketch after the list below. If k = 1, then the object is simply assigned to the class of its nearest neighbor.
- Ties can be broken randomly.
- k can be chosen by cross-validation
- k-nearest neighbor algorithm is sensitive to the local structure of the data<ref>http://www.saylor.org/site/wp-content/uploads/2011/02/Wikipedia-k-Nearest-Neighbor-Algorithm.pdf</ref>.
- Nearest neighbor rules in effect compute the decision boundary in an implicit manner.
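The rule is simple enough to state in a few lines of Python; Euclidean distance is an illustrative choice of the "closeness" metric mentioned in the requirements below:

    import numpy as np
    from collections import Counter

    def knn_classify(x, X_train, y_train, k=3):
        """Classify x by a majority vote among its k nearest training points."""
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = np.argsort(dists)[:k]               # indices of the k nearest
        # majority vote; ties are broken by first occurrence
        return Counter(y_train[nearest].tolist()).most_common(1)[0][0]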
Requirements of k-NN:
<ref>http://courses.cs.tamu.edu/rgutier/cs790_w02/l8.pdf</ref>
- An integer k
- A set of labeled examples (training data)
- A metric to measure “closeness”
Advantages:
- Approaches an optimal solution as the sample size grows large.
- Simple implementation
- There are some noise reduction techniques that work only for k-NN to improve the efficiency and accuracy of the classifier.
Disadvantages:
- If the training set is too large, it may have poor run-time performance.
- k-NN is very sensitive to irrelevant features, since all features contribute to the similarity and thus to the classification.
- small training data can lead to high misclassification rate.
Extensions and Applications
In order to improve the obtained results, we can do the following:
- Preprocessing: smoothing the training data (remove any outliers and isolated points)
- Adapt metric to data
Besides classification, k-nearest-neighbours is useful for other tasks as well. For example, k-NN has been used in regression and in product recommendation systems<ref> http://www.cs.ucc.ie/~dgb/courses/tai/notes/handout4.pdf</ref>.
SVM is commonly used in bioinformatics. Common uses include classification of DNA sequences, promoter recognition, and identifying disease-related microRNAs. Promoters are short sequences of DNA that act as a signal for gene expression. In one paper, Robertas Damaševičius uses a power series kernel function and 11 classification rules for data projection to classify these sequences, to aid active gene location.<ref>Damaševičius, Robertas. "Analysis of Binary Feature Mapping Rules for Promoter Recognition in Imbalanced DNA Sequence Datasets using Support Vector Machine". Proceedings from 4th International IEEE Conference "Intelligent Systems". 2008.</ref> MicroRNAs are non-coding RNAs that target mRNAs for cleavage during protein synthesis. There is growing evidence suggesting that microRNAs "play important roles in human disease development, progression, prognosis, diagnosis and evaluation of treatment response". Therefore, there is increasing research on the role of microRNAs in human diseases. SVM has been proposed as a method of classifying positive microRNA disease associations from negative ones.<ref>Jiang, Qinghua; Wang, Guohua; Zhang, Tianjiao; Wang, Yadong. "Predicting Human microRNA-disease Associations Based on Support Vector Machine". Proceedings from IEEE International Conference on Bioinformatics and Biomedicine. 2010.</ref>
References
<references />