stat841f10
== '''Linear and Quadratic Discriminant Analysis''' ==
== '''Linear and Quadratic Discriminant Analysis cont'd - 2010.09.23''' ==
In the second lecture, Professor Ali Ghodsi recapitulates that classifying according to the class posteriors <math>\Pr(Y=k|X=x)</math> gives the optimal classifier. He also shows that if the classes are assumed to share a common covariance matrix, <math>\Sigma_{k}=\Sigma \,\,\forall k</math>, the decision boundary between classes <math>k</math> and <math>l</math> is linear (LDA). If the covariance matrices are not assumed equal, the decision boundary is a quadratic function of <math>x</math> (QDA).
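For reference, these boundaries come from comparing class discriminant functions (standard notation, not introduced in the notes above: <math>\pi_k</math> is the prior of class <math>k</math> and <math>\mu_k</math> its mean). Under the common-covariance assumption,

<math>\delta_k(x) = x^{\top}\Sigma^{-1}\mu_k - \tfrac{1}{2}\mu_k^{\top}\Sigma^{-1}\mu_k + \log\pi_k,</math>

which is linear in <math>x</math>, while with class-specific covariances

<math>\delta_k(x) = -\tfrac{1}{2}\log|\Sigma_k| - \tfrac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k) + \log\pi_k,</math>

which is quadratic in <math>x</math>. The decision boundary between classes <math>k</math> and <math>l</math> is the set <math>\{x : \delta_k(x) = \delta_l(x)\}</math>.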
Some MATLAB samples are used to demonstrate LDA and QDA.
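The lecture's MATLAB samples are not reproduced here; the following is a minimal sketch of the same demonstration, written in base MATLAB with made-up means and covariances and equal class priors (so the <math>\log\pi_k</math> terms cancel). It draws two Gaussian classes and evaluates the LDA and QDA discriminant functions above at a test point.

<pre>
% Sketch only (not the lecture's code): two Gaussian classes,
% classified with LDA and QDA discriminant functions.
n   = 200;                            % points per class
mu1 = [0 0];  mu2 = [2 2];            % class means (made up)
S1  = [1 0; 0 1];                     % class-1 covariance
S2  = [1 0.8; 0.8 2];                 % class-2 covariance (different -> QDA matters)

% draw samples using randn and a Cholesky factor (base MATLAB only)
X1 = repmat(mu1, n, 1) + randn(n, 2) * chol(S1);
X2 = repmat(mu2, n, 1) + randn(n, 2) * chol(S2);

m1 = mean(X1);  m2 = mean(X2);
% pooled covariance estimate for LDA (common-covariance assumption)
Sp = ((n-1)*cov(X1) + (n-1)*cov(X2)) / (2*n - 2);

x = [1 1];                            % a test point

% LDA discriminants: linear in x (equal priors, so log-prior terms dropped)
dL1 = x*(Sp\m1') - 0.5*m1*(Sp\m1');
dL2 = x*(Sp\m2') - 0.5*m2*(Sp\m2');

% QDA discriminants: quadratic in x (class-specific covariances)
dQ1 = -0.5*log(det(cov(X1))) - 0.5*(x-m1)*(cov(X1)\(x-m1)');
dQ2 = -0.5*log(det(cov(X2))) - 0.5*(x-m2)*(cov(X2)\(x-m2)');

fprintf('LDA assigns class %d, QDA assigns class %d\n', ...
        (dL2 > dL1) + 1, (dQ2 > dQ1) + 1);
</pre>

Plotting the sign of <math>\delta_1(x)-\delta_2(x)</math> over a grid of test points would show the linear boundary for LDA and the curved (quadratic) boundary for QDA.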