stat841

Course Note for Sept. 30th (Classification, by Liang Jiaxi)

1.


2. Classification
Classification is a function between two random variables: a classifier <math>\,h</math> maps the observed random variable <math>\,X</math> to a prediction of the label <math>\,Y</math>.
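As a concrete illustration, a classifier can be written as an ordinary function from an observation to a predicted label. The sketch below is a minimal Python example; the threshold rule and the sample values are assumptions made only for illustration and are not part of the notes.
<pre>
# Minimal sketch of a classifier h: observation x -> predicted label (illustrative only).
# The threshold rule below is an arbitrary assumption, not taken from the notes.

def h(x: float) -> int:
    """Toy classifier: predict label 1 if the observed feature exceeds 0, else 0."""
    return 1 if x > 0 else 0

# Example usage: predict labels for a few hypothetical observations of X.
observations = [-1.5, 0.2, 3.0]
predictions = [h(x) for x in observations]
print(predictions)  # [0, 1, 1]
</pre>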

3. Error rate
Definition:
The true error rate of a classifier <math>\,h</math> is defined as the probability that the prediction <math>\overline{Y}</math> obtained from <math>\,X</math> by the classifier <math>\,h</math> does not actually equal <math>\,Y</math>; namely, <math>\, L(h)=P(h(X) \neq Y)</math>.
The empirical error rate (training error rate) of a classifier <math>\,h</math> is defined as the frequency, over a total of <math>\,n</math> predictions, of the event that <math>\overline{Y}</math> predicted from <math>\,X</math> by <math>\,h</math> does not equal <math>\,Y</math>. The mathematical expression is as below:
<math>\, L_{h}= \frac{1}{n} \sum_{i=1}^{n} I(h(X_{i}) \neq Y_{i})</math>, where <math>\,I</math> is the indicator function: <math>\, I = \begin{cases} 1, & \text{if } h(X_{i}) \neq Y_{i} \\ 0, & \text{if } h(X_{i}) = Y_{i} \end{cases}</math>.
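To make the formula concrete, the following minimal Python sketch computes the empirical error rate as the fraction of the <math>\,n</math> training points on which <math>\, h(X_{i}) \neq Y_{i}</math>. The classifier and the training sample used here are hypothetical, chosen only to illustrate the computation.
<pre>
# Minimal sketch: empirical (training) error rate of a classifier h.
# The data and the threshold classifier below are illustrative assumptions.

def h(x: float) -> int:
    """Toy classifier: predict 1 if x > 0, else 0 (same rule as the earlier sketch)."""
    return 1 if x > 0 else 0

# Hypothetical training sample (X_i, Y_i), i = 1..n.
X = [-2.0, -0.5, 0.3, 1.2, 2.5]
Y = [0, 1, 1, 1, 0]

n = len(X)
# L_h = (1/n) * sum of indicators I(h(X_i) != Y_i).
L_h = sum(1 for x_i, y_i in zip(X, Y) if h(x_i) != y_i) / n
print(L_h)  # 0.4 here: h misclassifies the points at -0.5 and 2.5
</pre>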

4. Bayes Classifier