# stat841f10


## Revision as of 12:20, 25 September 2010


## Editor sign up

**Classification - 2010.09.21**

### Classification

**Statistical classification**, usually known simply as classification, is the supervised learning of the classes (also known as labels or groups) of novel data, using models built by classifiers from labeled training data. Classification has been an important task for people and society since the beginning of history. The earliest application of classification was probably carried out by prehistoric peoples in recognizing which wild animals were beneficial and which were harmful; the earliest systematic use of classification was by the Greek philosopher Aristotle when he, for example, divided all living things into the two groups of plants and animals.[1] Classification is generally regarded as one of the four major areas of statistics, the other three being regression, clustering, and dimensionality reduction (also known as feature extraction or manifold learning).

In **classical statistics**, classification techniques were developed to extract useful information from small data sets, where the problem is usually having too little data. When **machine learning** developed after the application of computers to statistics, classification techniques were developed to work with very large data sets, where the problem is usually having too much data. A major challenge facing data mining with machine learning is how to efficiently find useful patterns in very large amounts of data. An interesting quote that describes this problem well comes from the retired Yale University librarian Rutherford D. Rogers:

> "We are drowning in information and starving for knowledge." - Rutherford D. Rogers

In the Information Age, machine learning combined with efficient classification techniques can be very useful for data mining on very large data sets. This is most useful when the structure of the data is not well understood but the data nevertheless exhibit strong statistical regularity. Areas in which machine learning and classification have been used together include search and recommendation (e.g. Google, Amazon), automatic speech recognition and speaker verification, and medical diagnosis.

### Error rate

### Bayes Classifier

### Bayesian vs. Frequentist

**Linear and Quadratic Discriminant Analysis**

**Linear and Quadratic Discriminant Analysis cont'd - 2010.09.23**

In the second lecture, Professor Ali Ghodsi recapitulates that calculating the class posteriors [math]\,\Pr(Y=k|X=x)[/math] gives optimal classification. He also shows that if the classes are assumed to share a common covariance matrix, [math]\,\Sigma_{k}=\Sigma \;\forall k [/math], the decision boundary between classes [math]\,k[/math] and [math]\,l[/math] is linear (LDA). However, if we do not assume the same covariance between the two classes, the decision boundary is a quadratic function (QDA).

Some MATLAB samples are used to demonstrate LDA and QDA.
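The lecture demos were in MATLAB; as a rough Python stand-in (not the course code), the sketch below samples two Gaussian classes with unequal covariances and classifies each point by the larger estimated class density, i.e. the QDA rule with equal priors. The means and covariances are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Two Gaussian classes with different covariance matrices (toy values).
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], n)
X1 = rng.multivariate_normal([2, 2], [[2.0, 0.6], [0.6, 0.5]], n)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

def gaussian_logpdf(X, mu, Sigma):
    """Log density of N(mu, Sigma) evaluated at each row of X."""
    d = X - mu
    Sigma_inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ij,jk,ik->i', d, Sigma_inv, d)  # per-row quadratic form
    return -0.5 * (logdet + quad + X.shape[1] * np.log(2 * np.pi))

# QDA: each class keeps its own estimated mean and covariance.
params = [(X0.mean(0), np.cov(X0.T)), (X1.mean(0), np.cov(X1.T))]
scores = np.column_stack([gaussian_logpdf(X, mu, S) for mu, S in params])
pred = scores.argmax(axis=1)
print(f"QDA training accuracy: {np.mean(pred == y):.2f}")
```

With a pooled covariance estimate in place of the per-class ones, the same code becomes the LDA rule.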

### LDA vs. QDA

Linear discriminant analysis[2] is a statistical method used to find the *linear combination* of features which best separates two or more classes of objects or events. It is widely applied in classifying diseases, positioning, product management, and marketing research.

Quadratic discriminant analysis[3], on the other hand, aims to find the *quadratic combination* of features. It is more general than linear discriminant analysis: unlike LDA, QDA does not assume that the covariance of each of the classes is identical.

### Summarizing LDA and QDA

We can summarize what we have learned so far into the following theorem.

**Theorem**:

Suppose that [math]\,Y \in \{1,\dots,K\}[/math]. If the class-conditional density [math]\,f_k(x) = \Pr(X=x|Y=k)[/math] is Gaussian for each class [math]\,k[/math], the Bayes classifier rule is

- [math]\,h(X) = \arg\max_{k} \delta_k(x)[/math]

where

- [math] \,\delta_k(x) = - \frac{1}{2}\log(|\Sigma_k|) - \frac{1}{2}(x-\mu_k)^\top\Sigma_k^{-1}(x-\mu_k) + \log(\pi_k) [/math] (quadratic)

**Note**: The decision boundary between classes [math]\,k[/math] and [math]\,l[/math] is quadratic in [math]\,x[/math].

If the covariances of the Gaussians are the same, this becomes

- [math] \,\delta_k(x) = x^\top\Sigma^{-1}\mu_k - \frac{1}{2}\mu_k^\top\Sigma^{-1}\mu_k + \log(\pi_k) [/math] (linear)

**Note**: [math]\,\arg\max_{k} \delta_k(x)[/math] returns the set of [math]\,k[/math] for which [math]\,\delta_k(x)[/math] attains its largest value.
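The two discriminant functions above translate directly into code. A minimal NumPy sketch, where the class means, covariance, and priors are made-up toy values for illustration:

```python
import numpy as np

def delta_qda(x, mu_k, Sigma_k, pi_k):
    """Quadratic discriminant delta_k(x): per-class covariance Sigma_k."""
    diff = x - mu_k
    _, logdet = np.linalg.slogdet(Sigma_k)
    return (-0.5 * logdet
            - 0.5 * diff @ np.linalg.inv(Sigma_k) @ diff
            + np.log(pi_k))

def delta_lda(x, mu_k, Sigma, pi_k):
    """Linear discriminant delta_k(x): common covariance Sigma."""
    Sigma_inv = np.linalg.inv(Sigma)
    return x @ Sigma_inv @ mu_k - 0.5 * mu_k @ Sigma_inv @ mu_k + np.log(pi_k)

# Classify a point under two hypothetical Gaussian classes with a
# shared identity covariance and equal priors.
mus = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
Sigma = np.eye(2)
priors = [0.5, 0.5]
x = np.array([1.8, 1.9])
scores = [delta_lda(x, mu, Sigma, p) for mu, p in zip(mus, priors)]
print(int(np.argmax(scores)))  # prints 1: x is closest to the second mean
```

Note that with a shared covariance, `delta_qda` and `delta_lda` differ only by the term [math]\,-\frac{1}{2}x^\top\Sigma^{-1}x[/math], which is the same for every class and therefore does not change the argmax.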

## Reference

### The Elements of Statistical Learning: Data Mining, Inference, and Prediction

Trevor Hastie, Robert Tibshirani, Jerome Friedman. *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*, Second Edition, February 2009.

http://www-stat.stanford.edu/~tibs/ElemStatLearn/ (3rd Edition is available)