==Introduction==
In dimensionality reduction (or manifold learning), the foundation of all methods is the belief that the observed data <math>\left\{ \mathbf{x}_{j} \right\}</math> do not truly fill the high-dimensional space <math>\mathbb{R}^{D}</math>. Rather, there exists a smooth mapping <math>\varphi</math> such that the data can be efficiently represented in a lower-dimensional space <math>\mathbb{R}^{d}</math> (<math>0 < d \leq D</math>, where <math>d</math> is called the intrinsic dimension) via <math>\mathbf{y}=\varphi(\mathbf{x}), \; \mathbf{y} \in \mathbb{R}^{d}</math>. Most methods (such as PCA, MDS, LLE, ISOMAP, etc.) focus on recovering the embedding of the high-dimensional data, i.e. <math>\left\{ \widehat{\mathbf{y}}_{j} \right\}</math>. However, there is no consensus on how the intrinsic dimension <math>d</math> should be determined.
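
This picture can be made concrete with a small numerical sketch. The snippet below (illustrative only; the variable names, the random linear map, and the use of PCA are our own choices, not part of the paper) embeds points from a <math>d = 2</math> plane into <math>\mathbb{R}^{10}</math> and shows that the eigenvalue spectrum of the sample covariance drops sharply after the first <math>d</math> components, which is the classical PCA-based way of choosing <math>d</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 500, 2, 10          # sample size, intrinsic dim, ambient dim

# Latent coordinates y_j in R^d, mapped into R^D by a random linear phi
# (a stand-in for the smooth mapping in the text, plus small noise)
Y = rng.normal(size=(n, d))
A = rng.normal(size=(d, D))
X = Y @ A + 0.01 * rng.normal(size=(n, D))   # observed data {x_j}

# PCA: eigenvalues of the sample covariance of the centered data
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]   # descending order

# Only the first d eigenvalues are large; the sharp drop after them
# reveals the intrinsic dimension d = 2
print(np.round(eigvals, 3))
</syntaxhighlight>

Spectrum-based rules of this kind work well when the manifold is (nearly) linear, but tend to break down on curved manifolds, which is part of the motivation for likelihood-based estimators such as the one discussed below.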
This paper reviews several previous works on this topic and proposes a new maximum-likelihood estimator of the intrinsic dimension. The properties of the estimator are discussed, and it is compared against other estimators in numerical experiments.
==Previous works==
==MLE of intrinsic dimension==
==Experiments and comparison==
==Discussion==