f10 Stat841 digest

Classification - September 21, 2010

  • Classification is an area of supervised learning that systematically assigns previously unseen, unlabeled data to a label based on the characteristics and attributes obtained from observation.
  • Classification is the prediction of a discrete random variable [math]\displaystyle{ \mathcal{Y} }[/math] from another random variable [math]\displaystyle{ \mathcal{X} }[/math], where [math]\displaystyle{ \mathcal{Y} }[/math] represents the label assigned to a new data input and [math]\displaystyle{ \mathcal{X} }[/math] represents the known feature values of the input. The classification rule used by a classifier has the form [math]\displaystyle{ \,h: \mathcal{X} \mapsto \mathcal{Y} }[/math].
  • True error rate is the probability that the classification rule [math]\displaystyle{ \,h }[/math] misclassifies a data input. Empirical error rate is the proportion of data inputs in the training set that the classification rule [math]\displaystyle{ \,h }[/math] misclassifies. In practice the true error rate cannot be measured, so the empirical error rate is used as its estimate.
  • Bayes Classifier is a probabilistic classifier that applies Bayes' theorem with strong (naive) independence assumptions. It has the advantage of requiring little training data to estimate the parameters needed for classification. Under this classifier an input [math]\displaystyle{ \,x }[/math] is classified to the class [math]\displaystyle{ \,y }[/math] whose posterior probability given [math]\displaystyle{ \,x }[/math] is largest.
  • Bayes Classification Rule Optimality Theorem states that the Bayes classifier is the optimal classifier; in other words, the true error rate of the Bayes classification rule is always smaller than or equal to that of any other classification rule.
  • Bayes Decision Boundary is the boundary that separates the two classes [math]\displaystyle{ \,m, n }[/math], obtained by setting the posterior probabilities of the two classes equal: [math]\displaystyle{ \,D(h)=\{x: P(Y=m|X=x)=P(Y=n|X=x)\} }[/math].
  • Linear Discriminant Analysis (LDA) obtains the Bayes classifier's decision boundary between two classes under the assumption that both classes are generated from Gaussian distributions with the same covariance matrix.
  • PCA is an appropriate method when you have obtained measures on a number of observed variables and wish to derive a smaller number of artificial variables (called principal components) that account for most of the variance in the observed variables. This is a powerful technique for dimensionality reduction, with applications in data visualization, data mining, and building predictive models.

Linear and Quadratic Discriminant Analysis cont'd - September 23, 2010

In the second lecture, Professor Ali Ghodsi recapitulates that by calculating the class posteriors [math]\displaystyle{ \Pr(Y=k|X=x) }[/math] we obtain optimal classification. He also shows that by assuming the classes have a common covariance matrix [math]\displaystyle{ \Sigma_{k}=\Sigma \forall k }[/math], the decision boundary between classes [math]\displaystyle{ k }[/math] and [math]\displaystyle{ l }[/math] is linear (LDA). However, if we do not assume the same covariance matrix for the two classes, the decision boundary is a quadratic function (QDA).

MATLAB examples can be used to demonstrate LDA and QDA.
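
The MATLAB demos themselves are not reproduced in this digest; as a rough stand-in, the following Python/NumPy sketch implements the LDA and QDA discriminant rules on synthetic two-class Gaussian data (the data, priors, and covariances are illustrative assumptions, not the course's example).

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-class Gaussian data (illustrative assumption, not the course data).
    n = 200
    X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], n)
    X1 = rng.multivariate_normal([2, 2], [[1.5, -0.4], [-0.4, 0.8]], n)
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)

    classes = np.unique(y)
    means = {k: X[y == k].mean(axis=0) for k in classes}
    covs = {k: np.cov(X[y == k].T) for k in classes}
    priors = {k: np.mean(y == k) for k in classes}
    # Pooled covariance (the common-covariance assumption behind LDA).
    pooled = sum((np.sum(y == k) - 1) * covs[k] for k in classes) / (len(y) - len(classes))

    def discriminant_predict(X, covariances):
        """Gaussian class scores; per-class covariances give QDA, a shared covariance gives LDA."""
        scores = []
        for k in classes:
            diff = X - means[k]
            inv = np.linalg.inv(covariances[k])
            maha = np.einsum('ij,jk,ik->i', diff, inv, diff)   # Mahalanobis distances
            scores.append(-0.5 * np.log(np.linalg.det(covariances[k])) - 0.5 * maha + np.log(priors[k]))
        return classes[np.argmax(np.array(scores), axis=0)]

    lda_pred = discriminant_predict(X, {k: pooled for k in classes})   # linear boundary
    qda_pred = discriminant_predict(X, covs)                           # quadratic boundary
    print("LDA training accuracy:", np.mean(lda_pred == y))
    print("QDA training accuracy:", np.mean(qda_pred == y))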


Principal Component Analysis - September 30, 2010

Principal component analysis (PCA) is a dimensionality-reduction method invented by Karl Pearson in 1901 [1]. Depending on where this methodology is applied, other common names of PCA include the Karhunen–Loève transform (KLT), the Hotelling transform, and the proper orthogonal decomposition (POD). PCA is the simplest of the eigenvector-based multivariate analyses. It reduces the dimensionality of the data by revealing the internal structure of the data in a way that best explains the variance in the data. To this end, PCA projects the data onto a user-defined number of the most important directions of variation (dimensions, or principal components) so as to produce a lower-dimensional representation of the original data. The resulting lower-dimensional representation of the data is usually much easier to visualize, and it exhibits the most informative aspects (dimensions) of the data while capturing as much of its variation as possible.
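
A minimal NumPy sketch of the procedure described above (center the data, take the leading eigenvectors of the sample covariance matrix, and project onto them); the toy data set is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # toy correlated data

    def pca(X, n_components):
        Xc = X - X.mean(axis=0)                  # center the data
        cov = np.cov(Xc, rowvar=False)           # sample covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        order = np.argsort(eigvals)[::-1]        # sort by variance explained, descending
        components = eigvecs[:, order[:n_components]]
        Z = Xc @ components                      # lower-dimensional representation
        explained = eigvals[order[:n_components]] / eigvals.sum()
        return Z, components, explained

    Z, components, explained = pca(X, n_components=2)
    print("projected shape:", Z.shape)
    print("fraction of variance explained:", explained)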

Fisher's (Linear) Discriminant Analysis (FDA) - Two Class Problem - October 5, 2010

This lecture introduces Fisher's linear discriminant analysis (FDA), which is a supervised dimensionality reduction method. FDA does not assume any distribution of the data; it works by reducing the dimensionality of the data by projecting the data onto a line. That is, given d-dimensional data, FDA projects it to a one-dimensional representation by [math]\displaystyle{ z = \underline{w}^T \underline{x} }[/math] where [math]\displaystyle{ x \in \mathbb{R}^{d} }[/math] and [math]\displaystyle{ \underline{w} = \begin{bmatrix}w_1 \\ \vdots \\w_d \end{bmatrix} _{d \times 1} }[/math]
FDA derives a set of feature vectors by which high-dimensional data can be projected onto a low-dimensional feature space in the sense of maximizing class separability. Furthermore, the lecture clarifies a set of basic FDA concepts, such as Fisher's ratio, the ratio of the between-class scatter matrix to the within-class scatter matrix. It also discusses the goals Fisher specified for his analysis and then proceeds to a mathematical formulation of these goals.
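
A minimal NumPy sketch of the two-class case: the projection direction that maximizes Fisher's ratio is proportional to the inverse within-class scatter matrix times the difference of the class means; the synthetic data below is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(2)
    X0 = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 150)   # class 0 (toy data)
    X1 = rng.multivariate_normal([2, 1], [[1, 0.8], [0.8, 1]], 150)   # class 1 (toy data)

    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix S_W (sum of the per-class scatter matrices).
    S_W = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    # Fisher direction: w proportional to S_W^{-1} (mu1 - mu0).
    w = np.linalg.solve(S_W, mu1 - mu0)
    w /= np.linalg.norm(w)

    # One-dimensional representation z = w^T x for every point.
    z0, z1 = X0 @ w, X1 @ w
    print("projected class means:", z0.mean(), z1.mean())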

Fisher's (Linear) Discriminant Analysis (FDA) - Multi-Class Problem - October 7, 2010

This lecture describes a generalization of Fisher's discriminant analysis to more than 2 classes. For the multi-class, or [math]\displaystyle{ k }[/math]-class problem, we are trying to find a projection from a [math]\displaystyle{ d }[/math]-dimensional space to a [math]\displaystyle{ (k-1) }[/math]-dimensional space. Recall that for the [math]\displaystyle{ 2 }[/math]-class problem, the objective function was [math]\displaystyle{ \displaystyle \max \frac{(w^Ts_Bw)}{(w^Ts_ww)} }[/math] . In the [math]\displaystyle{ k }[/math]-class problem, [math]\displaystyle{ \mathbf{W} }[/math] is a [math]\displaystyle{ d\times (k-1) }[/math] transformation matrix, [math]\displaystyle{ \mathbf{W} =[\mathbf{w}_{1}, \mathbf{w}_{2},..., \mathbf{w}_{k-1}] }[/math], and the objective function becomes [math]\displaystyle{ \displaystyle \max \frac{Tr[\mathbf{W}^{T}\mathbf{S}_{B}\mathbf{W}]}{Tr[\mathbf{W}^{T}\mathbf{S}_{W}\mathbf{W}]} }[/math]

As in the [math]\displaystyle{ 2 }[/math]-class case, this is also a generalized eigenvalue problem, and the solution can be computed as the first [math]\displaystyle{ (k-1) }[/math] eigenvectors of [math]\displaystyle{ \mathbf{S}_{W}^{-1}\mathbf{S}_{B}, }[/math] i.e. [math]\displaystyle{ \mathbf{S}_{W}^{-1}\mathbf{S}_{B}\mathbf{w}_{i} =\lambda_{i}\mathbf{w}_{i} }[/math].
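
A NumPy sketch of the multi-class case: build the within-class and between-class scatter matrices and keep the leading (k-1) eigenvectors of S_W^{-1} S_B, as in the eigenvalue problem above. The three-class toy data is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(3)
    k, d, n = 3, 4, 100
    class_means = [rng.normal(scale=3, size=d) for _ in range(k)]      # toy class means
    X = np.vstack([rng.normal(loc=m, size=(n, d)) for m in class_means])
    y = np.repeat(np.arange(k), n)

    mu = X.mean(axis=0)
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in range(k):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)                 # within-class scatter
        S_B += len(Xc) * np.outer(mu_c - mu, mu_c - mu)    # between-class scatter

    # Solve S_W^{-1} S_B w_i = lambda_i w_i and keep the top (k-1) eigenvectors.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:k - 1]].real                     # d x (k-1) projection matrix

    Z = X @ W                                              # (k-1)-dimensional representation
    print("projection matrix shape:", W.shape, "projected data shape:", Z.shape)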

Linear and Logistic Regression - October 12, 2010

In this lecture, Prof. Ali Ghodsi reviews LDA as a dimensionality reduction method and introduces two models for regression: linear and logistic regression.

Regression analysis is a general statistical technique for modeling and analyzing how a dependent variable changes according to changes in independent variables. In classification, we are interested in how a label, [math]\displaystyle{ \,y }[/math], changes according to changes in [math]\displaystyle{ \,X }[/math].

General information on linear regression can be found at the University of South Florida and this MIT lecture.
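
A small sketch of linear regression by least squares; the design matrix, coefficients, and noise level are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    X = np.column_stack([np.ones(n), rng.uniform(-2, 2, size=(n, 2))])  # design matrix with intercept
    beta_true = np.array([1.0, 2.0, -0.5])                              # assumed true coefficients
    y = X @ beta_true + rng.normal(scale=0.3, size=n)                   # noisy observations

    # Least-squares estimate: beta_hat minimizes ||y - X beta||^2
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated coefficients:", beta_hat)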

Logistic Regression Cont. - October 14, 2010

Traditionally, logistic regression parameters are estimated using maximum likelihood. However, other optimization techniques may be used as well.
Since there is no closed-form solution for the zero of the first derivative of the log-likelihood, the Newton-Raphson algorithm is used. Since the problem is convex, Newton's method is guaranteed to converge to the global optimum.
Logistic regression requires fewer parameters than LDA or QDA and is therefore more favorable for high-dimensional data.
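
A minimal sketch of the Newton-Raphson iteration (equivalently, iteratively reweighted least squares) for binary logistic regression; each step solves (X^T W X) step = X^T (y - p). The synthetic data and the fixed iteration cap are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 300
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # design matrix with intercept
    beta_true = np.array([-0.5, 1.5, -2.0])                      # assumed true coefficients
    p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
    y = rng.binomial(1, p_true)                                  # binary labels

    beta = np.zeros(X.shape[1])
    for _ in range(25):                                          # Newton-Raphson iterations
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                                          # diagonal of the weight matrix
        grad = X.T @ (y - p)                                     # gradient of the log-likelihood
        hess = X.T @ (X * W[:, None])                            # X^T W X
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:                          # converged
            break

    print("estimated coefficients:", beta)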

Multi-Class Logistic Regression & Perceptron - October 19, 2010

In this lecture, the topic of logistic regression was concluded by covering multi-class logistic regression, and the new topic of the perceptron was started. The perceptron is a linear classifier for two-class problems. The main goal of the perceptron is to minimize the distance between the misclassified points and the decision boundary. This will be continued in the following lectures.
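
As a small illustration of the multi-class extension, class probabilities are obtained by applying the softmax function to per-class linear scores and predicting the most probable class; the weights and inputs below are illustrative assumptions.

    import numpy as np

    def softmax(scores):
        """Row-wise softmax of the linear scores X @ B (one column of B per class)."""
        s = scores - scores.max(axis=1, keepdims=True)   # shift for numerical stability
        e = np.exp(s)
        return e / e.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(5, 3))          # 5 inputs with 3 features (toy data)
    B = rng.normal(size=(3, 4))          # weights for 4 classes (toy weights)

    probs = softmax(X @ B)               # multi-class posterior probabilities
    print(probs.round(3))
    print("predicted classes:", probs.argmax(axis=1))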

Perceptron Learning Algorithm and Feed Forward Neural Networks - October 21, 2010

In this lecture, we finalize our discussion of the Perceptron by reviewing its learning algorithm, which is based on gradient descent. We then begin the next topic, Neural Networks (NN), and we focus on a NN that is useful for classification: the Feed Forward Neural Network (FFNN). The mathematical model for the FFNN is shown, and we review one of its most popular learning algorithms: Back-Propagation.
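
A minimal sketch of the perceptron part of this lecture: gradient descent on the perceptron criterion updates the boundary using each misclassified point (the back-propagation part for the FFNN is not sketched here). The toy data and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    X0 = rng.normal(loc=[-2, -2], size=(100, 2))
    X1 = rng.normal(loc=[2, 2], size=(100, 2))
    X = np.vstack([X0, X1])
    y = np.array([-1] * 100 + [1] * 100)          # labels in {-1, +1}

    w = np.zeros(2)
    b = 0.0
    lr = 0.1
    for epoch in range(100):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:            # point is misclassified
                w += lr * yi * xi                 # gradient step on the perceptron criterion
                b += lr * yi
                mistakes += 1
        if mistakes == 0:                         # converged: every point classified correctly
            break

    print("epochs used:", epoch + 1, "weights:", w, "bias:", b)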

To open the Neural Network discussion, we present a formulation of the universal function approximator. The mathematical model for Neural Networks is then built upon this formulation. We also discuss the trade-off between training error and testing error -- known as the generalization problem -- under the universal function approximator section.

Complexity Control - October 26, 2010

Selecting the model structure with an appropriate complexity is a standard problem in pattern recognition and machine learning. Systems with the optimal complexity have a good generalization to unseen data.

A wide range of techniques may be used which alter the system complexity. In this lecture, we introduce the topic of Model Selection, also known as Complexity Control. This involves choosing the model that will best classify any given data by minimizing test error rates or by regularization.

The two problems that we wish to avoid when choosing a model are over-fitting and under-fitting. We present over & under-fitting concepts with an example to illustrate how we choose a good classifier and how to avoid over-fitting.

Moreover, cross-validation was introduced during the lecture; it is a method for estimating generalization error based on "resampling" (Weiss and Kulikowski 1991; Efron and Tibshirani 1993; Hjorth 1994; Plutowski, Sakata, and White 1994; Shao and Tu 1995). The resulting estimates of generalization error are often used for choosing among various models; that is, cross-validation can be used for model selection by choosing, among several models, the one with the smallest estimated generalization error. Finally, we discuss two methods to estimate the true error rate: simple cross-validation and K-fold cross-validation. Both of these methods involve splitting the data into a training set to train the model and a validation set to compute the test error rate.
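
A short NumPy sketch of K-fold cross-validation for a least-squares model; the data, the model, and the choice K = 5 are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
    y = X @ np.array([1.0, 0.5, -1.0, 2.0]) + rng.normal(scale=0.5, size=n)

    K = 5
    folds = np.array_split(rng.permutation(n), K)     # random partition into K folds
    errors = []
    for k in range(K):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        beta, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)  # train
        resid = y[val_idx] - X[val_idx] @ beta                              # validate
        errors.append(np.mean(resid ** 2))

    print("per-fold validation MSE:", np.round(errors, 3))
    print("cross-validation estimate of generalization error:", np.mean(errors))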

Leave-One-Out Cross-Validation and Radial Basis Function Networks - October 28, 2010

In this lecture, we finalize the discussion of cross-validation with leave-one-out cross-validation and begin regression by discussing Radial Basis Function (RBF) Networks. Leave-one-out is similar to k-fold cross-validation, but each validation set consists of a single observation. Under linear model conditions, leave-one-out cross-validation performs well asymptotically.

RBF networks are similar to neural networks. There is one hidden layer with each node being a basis function. Between the hidden layer and output layer there are weights. Noise in the actual model creates issues in model selection, so we consider the expected squared difference between actual and estimated outputs.


  • A Radial Basis Function (RBF) network is an artificial neural network consisting of an output layer, a hidden layer, and weights from the hidden layer to the output; it has a closed-form solution and can be trained without back-propagation (see the sketch after this list).
  • During the model selection process, the complexity (the number of neurons in the hidden layer), the basis function, and the basis function parameters are estimated.
    • A common basis function is the RBF (or Gaussian) function.
    • Function parameters can be estimated by clustering the data into as many clusters as there are nodes and using the sample mean and variance of each cluster.
    • The complexity of the model is determined during the training process (methods like K-Fold Cross-Validation or Leave-One-Out Cross-Validation can be used).
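
A minimal sketch of an RBF network regression fit as described above: choose centers (here a random subset of the inputs, a simplifying assumption in place of clustering), build the Gaussian design matrix, and solve for the hidden-to-output weights in closed form.

    import numpy as np

    rng = np.random.default_rng(9)
    x = np.sort(rng.uniform(0, 10, size=120))
    y = np.sin(x) + rng.normal(scale=0.2, size=x.size)   # noisy target (toy data)

    m = 10                                               # number of hidden basis functions
    centers = rng.choice(x, size=m, replace=False)       # simple stand-in for clustering
    width = 1.0                                          # basis-function width (assumed)

    def design_matrix(x, centers, width):
        """Gaussian RBF activation of every input at every center."""
        return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

    Phi = design_matrix(x, centers, width)
    # Closed-form least-squares solution for the hidden-to-output weights.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    y_hat = Phi @ w
    print("training mean squared error:", np.mean((y - y_hat) ** 2))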

Model Selection (SURE) for RBF Network - November 2, 2010

Model selection is the task of selecting a model of optimal complexity for a given set of data. Learning a radial basis function network from data is a parameter estimation problem. A model is selected that has parameters associated with the best observed performance on the training data. Squared error is used as the performance index.

Some basic assumptions are made initially, such as:

  • [math]\displaystyle{ \hat f(X) }[/math] denotes the prediction/estimated model, which is generated from a training data set [math]\displaystyle{ \displaystyle D = \{(x_i, y_i)\}^n_{i=1} }[/math].
  • [math]\displaystyle{ \displaystyle MSE=E[(\hat f-f)^2] }[/math] denotes the mean squared error, where [math]\displaystyle{ \hat f(X) }[/math] is the estimated model and [math]\displaystyle{ \displaystyle f(X) }[/math] is the true model.

We have made two assumptions about the observations:

  • [math]\displaystyle{ \displaystyle \epsilon }[/math] is additive Gaussian noise, and
  • [math]\displaystyle{ \displaystyle \epsilon_i }[/math] ~ [math]\displaystyle{ \displaystyle N(0,\sigma^2) }[/math].


Then we estimated

  • [math]\displaystyle{ \hat f }[/math] from the training data set [math]\displaystyle{ D=\{(x_i,y_i)\}^n_{i=1} }[/math] and
  • [math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2E[\epsilon_i(\hat f_i-f_i)] }[/math]

We then examined the last term [math]\displaystyle{ \displaystyle 2E[\epsilon_i(\hat f_i-f_i)] }[/math] of the mean squared error in two cases, as follows:

Case 1

A new data point is introduced to the estimated model, i.e. [math]\displaystyle{ (x_i,y_i)\not\in D }[/math]; this new point belongs to the testing/validation data set [math]\displaystyle{ V=\{(x_i,y_i)\}^m_{i=1} }[/math].

We found that in this case [math]\displaystyle{ \displaystyle err=Err+m\sigma^2 }[/math].

Case 2

In this case we do not use new data points to assess the performance of the estimated model; we suppose [math]\displaystyle{ (x_i,y_i)\in D }[/math]. Then, by applying Stein's Lemma, we obtained the following equation for one data point: [math]\displaystyle{ \displaystyle E[(\hat y_i-y_i)^2 ]=E[(\hat f_i-f_i)^2]+\sigma^2-2\sigma^2E\left[\frac {\partial \hat f}{\partial y_i}\right] }[/math] or

[math]\displaystyle{ \displaystyle err=Err+n\sigma^2-2\sigma^2\sum_{i=1}^n \frac {\partial \hat f}{\partial y_i} }[/math] known as Stein's unbiased risk estimate (SURE).

SURE for RBF Network

Finally based on SURE, we derived

[math]\displaystyle{ \displaystyle Err=err-n\sigma^2+2\sigma^2m }[/math]

for RBF Network with no intercept.
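
Putting the formula above to work, the sketch below fits RBF networks with different numbers of basis functions m and picks the m that minimizes the SURE estimate Err = err - n*sigma^2 + 2*sigma^2*m; the noise variance is assumed known here, and the data are synthetic.

    import numpy as np

    rng = np.random.default_rng(10)
    n = 150
    sigma2 = 0.04                                        # assumed known noise variance
    x = np.sort(rng.uniform(0, 10, size=n))
    y = np.sin(x) + rng.normal(scale=np.sqrt(sigma2), size=n)

    def rbf_fit_err(x, y, m, width=1.0):
        """Fit an RBF network with m centers and return the training SSE (err)."""
        centers = np.linspace(x.min(), x.max(), m)
        Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return np.sum((y - Phi @ w) ** 2)

    candidates = range(2, 31)
    sure = []
    for m in candidates:
        err = rbf_fit_err(x, y, m)
        sure.append(err - n * sigma2 + 2 * sigma2 * m)   # SURE estimate of Err

    best = list(candidates)[int(np.argmin(sure))]
    print("number of basis functions chosen by SURE:", best)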


Regularization and Weight Decay - November 4, 2010

Regularization

Regularization is used to prevent overfitting. We add a penalty function to training error such that when the complexity increases, the penalty function increases as well.

Regularization methods are also used for model selection; well-known model selection techniques include AIC, BIC, and MDL.


Weight Decay

Weight decay adds a penalty term to the error function. The usual penalty is the sum of squared weights times a decay constant.

[math]\displaystyle{ \,REG = err + \rho \sum_{ij}u_{ij}^2 }[/math]
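
For a single linear layer, minimizing the weight-decay objective above has a closed form (it coincides with ridge regression); a sketch follows, with the decay constant rho treated as an assumed tuning parameter.

    import numpy as np

    rng = np.random.default_rng(11)
    n, d = 100, 20
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + rng.normal(scale=0.5, size=n)

    rho = 1.0                          # decay constant (assumed tuning parameter)
    # Minimize err + rho * sum(u^2): the solution satisfies (X^T X + rho I) u = X^T y
    u = np.linalg.solve(X.T @ X + rho * np.eye(d), X.T @ y)

    u_ls = np.linalg.lstsq(X, y, rcond=None)[0]
    print("squared weight norm without decay:", float(u_ls @ u_ls))
    print("squared weight norm with decay:   ", float(u @ u))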

Support Vector Machine - November 9, 2010

Support Vector Machines (SVM) are a set of related supervised learning methods used for classification and regression. A support vector machine constructs a maximum-margin hyperplane, or set of hyperplanes, in a high- or infinite-dimensional space. The points nearest the class boundary, the support vectors, define the model, which can be used for classification, regression, or other tasks.


We supposed that

  • the hyperplane is defined as [math]\displaystyle{ \displaystyle \underline{\beta}^{T}\underline{x}+\beta_0=0 }[/math]
  • [math]\displaystyle{ \displaystyle y_i\in\{-1,+1\} }[/math]
  • the data is linearly separable

We discussed the following facts about the optimal hyperplane:


1. [math]\displaystyle{ \displaystyle \underline{\beta} }[/math] is orthogonal to the hyperplane:
i.e. [math]\displaystyle{ \displaystyle \underline{\beta} \perp (\underline{x}_1-\underline{x}_2) }[/math], where [math]\displaystyle{ \displaystyle x_1 }[/math] and [math]\displaystyle{ \displaystyle x_2 }[/math] are on the plane.


2. For any point [math]\displaystyle{ \displaystyle x_0 }[/math] on the plane [math]\displaystyle{ \displaystyle \underline{\beta}^{T}\underline{x}_0+\beta_0=0 }[/math]
[math]\displaystyle{ \displaystyle \Rightarrow \underline{\beta}^{T}\underline{x}_0=-\beta_0 }[/math]


3. For any point [math]\displaystyle{ \displaystyle x_i }[/math], the distance of the point to the hyperplane denoted by [math]\displaystyle{ \displaystyle d_i }[/math] is the projection of [math]\displaystyle{ \displaystyle (\underline{x}_i-\underline{x}_0) }[/math] on [math]\displaystyle{ \displaystyle \underline{\beta} }[/math]:
[math]\displaystyle{ \displaystyle d_i = \frac {\underline{\beta}^{T}(\underline{x}_i-\underline{x}_0)}{| \beta |} = \frac {\underline{\beta}^{T}\underline{x}_i-\underline{\beta}^{T}\underline{x}_0}{| \beta |} = \frac {\underline{\beta}^{T}\underline{x}_i+{\beta}_0}{| \beta |} }[/math]


4. Margin is the distance of the closest point to the hyperplane:
margin = [math]\displaystyle{ \displaystyle \min_i(y_i d_i) }[/math]
margin = [math]\displaystyle{ \displaystyle \min_i \left( \frac {y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0)}{| \beta |} \right) }[/math]

We want to maximize this margin. After rescaling [math]\displaystyle{ \displaystyle \underline{\beta} }[/math] and [math]\displaystyle{ \displaystyle {\beta}_0 }[/math] so that the closest points satisfy [math]\displaystyle{ \displaystyle y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) = 1 }[/math], maximizing the margin is equivalent to minimizing [math]\displaystyle{ \displaystyle\frac{1}{2}\|\beta\|^2 }[/math]


s.t. [math]\displaystyle{ \displaystyle y_i(\beta^T x_i + \beta_0) \geq 1 \quad \forall i }[/math]


5. We applied the Lagrange multiplier method to the margin cost function, which resulted in a well-known optimization problem: Quadratic Programming.
The Lagrangian form is introduced to ensure that the optimization conditions are satisfied, as well as to find an optimal solution.
Therefore, we have a new optimization problem:
[math]\displaystyle{ \underset{\alpha}{\max} \sum_{i=1}^n{\alpha_i}- \,\frac{1}{2}\sum_{i=1}^n{\sum_{j=1}^n{\alpha_i\alpha_jy_iy_jx_i^Tx_j}} }[/math]
1) [math]\displaystyle{ \,\alpha_i \geq 0 }[/math]
2) [math]\displaystyle{ \sum{\alpha_i y_i} }[/math] = [math]\displaystyle{ 0 }[/math]

This is a much simpler optimization problem, and we can solve it by quadratic programming (QP), which is a special type of mathematical optimization problem: the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables.
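
As an illustration (not the course's own code), the QP can be handed to an off-the-shelf solver; the sketch below uses scikit-learn's SVC with a linear kernel and a very large C to approximate the hard-margin problem on separable toy data, then reads off the support vectors and the hyperplane.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(12)
    X0 = rng.normal(loc=[-2, -2], scale=0.6, size=(50, 2))   # linearly separable toy data
    X1 = rng.normal(loc=[2, 2], scale=0.6, size=(50, 2))
    X = np.vstack([X0, X1])
    y = np.array([-1] * 50 + [1] * 50)

    # A very large C approximates the hard-margin SVM; SVC solves the dual QP internally.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)

    print("number of support vectors per class:", clf.n_support_)
    print("beta (hyperplane normal):", clf.coef_[0])
    print("beta_0 (intercept):", clf.intercept_[0])
    print("margin width 2/||beta||:", 2 / np.linalg.norm(clf.coef_[0]))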

Support Vector Machine Cont., Kernel Trick - November 11, 2010

Continuing from the last lecture, we have the optimization problem:

[math]\displaystyle{ \displaystyle \overset{min} {_{\underline \beta}} }[/math] [math]\displaystyle{ \displaystyle \tfrac{1}{2} {| \underline \beta |}^2 }[/math]

s.t. [math]\displaystyle{ \displaystyle y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) \ge 1 \qquad \forall i }[/math]

and its dual is

[math]\displaystyle{ \displaystyle \overset{max} {_{ \alpha}} }[/math] [math]\displaystyle{ \displaystyle \sum_{i} {\alpha_i} - \tfrac{1}{2} \sum_{i} \sum_{j} {\alpha_i} {\alpha_j} y_i y_j \underline x_i^T \underline x_j }[/math]

s.t. [math]\displaystyle{ \displaystyle \alpha_i \ge 0 }[/math]

[math]\displaystyle{ \displaystyle \sum_{i} \alpha_i y_i = 0 }[/math]


We can write this in matrix form as follows:

[math]\displaystyle{ \displaystyle \overset{max} {_{ \underline \alpha}} }[/math] [math]\displaystyle{ \displaystyle \alpha^T \mathbf{1} - \tfrac{1}{2} \alpha^T S \alpha }[/math], where [math]\displaystyle{ \displaystyle S_{ij} = y_i y_j \underline x_i^T \underline x_j }[/math]

[math]\displaystyle{ \displaystyle \underline \alpha \ge \underline 0 }[/math]

[math]\displaystyle{ \displaystyle \alpha^T \underline y = 0 }[/math]


By using the K.K.T. condition [math]\displaystyle{ \displaystyle \alpha_i [ y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) - 1 ] = 0 }[/math], which is true for all points:

  • If a point is not on the margin i.e. [math]\displaystyle{ \displaystyle y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) \gt 1 }[/math] then [math]\displaystyle{ \displaystyle \alpha_i = 0 }[/math].
  • If [math]\displaystyle{ \displaystyle \alpha_i \gt 0 }[/math] then [math]\displaystyle{ \displaystyle y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) = 1 }[/math] which means that point is on the margin and these points are called Support vectors.

We can easily find [math]\displaystyle{ \displaystyle \underline \beta = \sum_{i} {\alpha_i} y_i \underline x_i }[/math], and to find [math]\displaystyle{ \displaystyle {\beta}_0 }[/math] we can choose a point [math]\displaystyle{ \displaystyle i }[/math] with [math]\displaystyle{ \displaystyle \alpha_i \gt 0 }[/math] and then solve [math]\displaystyle{ \displaystyle y_i(\underline{\beta}^{T}\underline{x}_i+{\beta}_0) = 1 }[/math]

We discussed kernel methods and how we can obtain a non-linear classification boundary using linear methods.
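
A small sketch of the kernel trick in action: data that no linear boundary can separate in the input space (two concentric circles) becomes separable once an RBF kernel replaces the inner products x_i^T x_j; the data set and kernel parameters are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=300, factor=0.4, noise=0.05, random_state=0)

    # A linear kernel cannot separate the two rings...
    linear_acc = SVC(kernel="linear", C=1.0).fit(X, y).score(X, y)
    # ...but replacing x_i^T x_j with an RBF kernel k(x_i, x_j) gives a non-linear boundary.
    rbf_acc = SVC(kernel="rbf", C=1.0, gamma=2.0).fit(X, y).score(X, y)

    print("training accuracy with linear kernel:", round(linear_acc, 3))
    print("training accuracy with RBF kernel:   ", round(rbf_acc, 3))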

Support Vector Machine, Kernel Trick - Cont. Case II - November 16, 2010

In this case, we suppose that the data is non-separable, and the optimization problem (primal form) for this case becomes:

[math]\displaystyle{ \min \frac{1}{2}|\beta|^2+\gamma\sum_{i} {\xi_i} }[/math]
[math]\displaystyle{ \,s.t. }[/math] [math]\displaystyle{ y_i(\beta^Tx_i+\beta_0) \geq 1-\xi_i }[/math]
[math]\displaystyle{ \xi_i \geq 0 }[/math]

Its Lagrangian is

[math]\displaystyle{ \frac{1}{2} |\beta|^2 + \gamma \sum_{i} \xi_i - \sum_{i} \alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]-\sum_{i} \lambda_i \xi_i }[/math]
[math]\displaystyle{ \alpha_i \geq 0, \lambda_i \geq 0 }[/math]


We applied KKT Conditions and found that:

1.[math]\displaystyle{ \displaystyle \beta = \sum_{i} \alpha_i y_i x_i }[/math]
[math]\displaystyle{ \displaystyle \sum_{i} \alpha_i y_i =0 }[/math]
[math]\displaystyle{ \displaystyle \gamma - \alpha_i - \lambda_i=0 \Rightarrow \gamma = \alpha_i+\lambda_i }[/math], which is the only new condition added for soft margin
2.[math]\displaystyle{ \,\alpha_i \geq 0, \lambda_i \geq 0 }[/math], dual feasibility
3.[math]\displaystyle{ \,\alpha_i[y_i(\beta^T x_i+\beta_0)-1+\xi_i]=0 }[/math] and [math]\displaystyle{ \,\lambda_i \xi_i=0 }[/math]
4.[math]\displaystyle{ \,y_i( \beta^T x_i+ \beta_0)-1+ \xi_i \geq 0 }[/math]


By solving the Lagrangian, we found a new optimization problem:

[math]\displaystyle{ \displaystyle \max_{\alpha_i} \sum_{i}{\alpha_i} - \frac{1}{2}\sum_{i}{\sum_{j}{\alpha_i \alpha_j y_i y_j x_i^T x_j}} }[/math] such that [math]\displaystyle{ \displaystyle 0 \le \alpha_i \le \gamma }[/math] and [math]\displaystyle{ \displaystyle \sum_{i}{\alpha_i y_i} = 0 }[/math]


Then we discussed that we can easily recover the hyperplane by finding [math]\displaystyle{ \displaystyle \underline \beta }[/math] and [math]\displaystyle{ \displaystyle \beta_0 }[/math] (a sketch follows the steps below):

  • [math]\displaystyle{ \displaystyle \beta }[/math] can easily be found from the first KKT condition, i.e. [math]\displaystyle{ \displaystyle \beta = \sum_{i} \alpha_i y_i x_i }[/math]
  • For [math]\displaystyle{ \displaystyle \beta_0 }[/math], we have to choose a point that satisfies [math]\displaystyle{ \displaystyle 0 \lt \alpha_i \lt \gamma }[/math] and
solve the following equation (obtained from the third KKT condition) for [math]\displaystyle{ \displaystyle \beta_0 }[/math]:
[math]\displaystyle{ \displaystyle y_i( \beta^T x_i+ \beta_0)= 1 }[/math]
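
A sketch of this recovery step, assuming the dual has already been solved (here by scikit-learn's linear soft-margin SVC, whose dual_coef_ attribute stores alpha_i * y_i for the support vectors; the C parameter plays the role of gamma in the notes).

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(13)
    X0 = rng.normal(loc=[-1, -1], size=(60, 2))          # overlapping (non-separable) classes
    X1 = rng.normal(loc=[1, 1], size=(60, 2))
    X = np.vstack([X0, X1])
    y = np.array([-1] * 60 + [1] * 60)

    gamma_C = 1.0                                        # soft-margin constant (gamma in the notes)
    clf = SVC(kernel="linear", C=gamma_C).fit(X, y)

    alpha_y = clf.dual_coef_[0]                          # alpha_i * y_i for the support vectors
    sv = clf.support_vectors_

    # First KKT condition: beta = sum_i alpha_i y_i x_i
    beta = alpha_y @ sv

    # For beta_0, pick a support vector with 0 < alpha_i < gamma (strictly inside the box).
    on_margin = np.where(np.abs(alpha_y) < gamma_C - 1e-6)[0]
    i = on_margin[0] if on_margin.size else 0            # fall back to any support vector if none qualifies
    y_i = np.sign(alpha_y[i])                            # recover the label of that support vector
    beta_0 = y_i - beta @ sv[i]                          # solve y_i (beta^T x_i + beta_0) = 1

    print("beta:", beta, "beta_0:", beta_0)
    print("for comparison, sklearn reports:", clf.coef_[0], clf.intercept_[0])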


At the end of the lecture, we discussed another classification model, the Naive Bayes Classifier.


Classification Models - November 18, 2010

We have discussed the following classification models:

  • Naive Bayes Classifier
  • K-nearest Neighbor
  • Boosting
  • Bagging
  • Decision Trees