# Proposal for STAT946 projects

Use the following format for your proposal (maximum one page)

## Project 2: Conformal Maps in the Classification of 3D Objects

### By: Jiheng Wang, Zhiyue Huang and Saad Zaman

(tentative)

The classification of three-dimensional surfaces is a fundamental problem in artificial intelligence and computer vision, with many applications. However, 3D surface matching in the presence of noise, occlusion, and clutter remains challenging.

A number of algorithms exist in the area of dimensionality reduction. However, many of them share a particular drawback: they do not produce a conformal map.

A conformal map is a low-dimensional embedding in which the angles formed by three neighboring points in the original high-dimensional dataset are preserved between those same three points in the embedding.
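As a rough illustration of this definition, the sketch below (in Python/NumPy; the arrays `X` and `Y` are our own placeholder stand-ins for a sampled 3D surface and a candidate embedding, not data or code from the cited papers) compares the angle formed by a point and two of its neighbours before and after embedding. For a conformal map the two angles should agree.

```python
import numpy as np

def angle_at(p, q, r):
    """Angle at vertex p formed by the segments p->q and p->r (in radians)."""
    u, v = q - p, r - p
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical data: X holds points on a 3D surface, Y their 2D embedding,
# with row i of Y corresponding to row i of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X[:, :2]  # placeholder embedding; a real conformal method would compute this

# For an (approximately) conformal embedding, the angle at a point formed by
# two of its neighbours should be (nearly) the same before and after embedding.
i, j, k = 0, 1, 2  # indices of three neighbouring points
print("angle in original space:", angle_at(X[i], X[j], X[k]))
print("angle in embedding:     ", angle_at(Y[i], Y[j], Y[k]))
```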

We will follow the approach of the paper "Conformal Mapping by Computationally Efficient Methods" and mainly discuss its application to the classification of 3D objects.

References

1. Xianfeng Gu, Shing-Tung Yau, "Surface Classification Using Conformal Structures," Ninth IEEE International Conference on Computer Vision (ICCV'03), Vol. 1, p. 701, 2003.

2. Sen Wang, Yang Wang, Miao Jin, X. D. Gu, D. Samaras, "Conformal Geometry and Its Applications on 3D Shape Matching, Recognition, and Stitching," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 7, pp. 1209–1220, July 2007.

## Project 3: LLE and noisy data

### By: Ruchi Jiwrajka and Dennis Zhuang

Nonlinear dimensionality reduction (NLDR) aims to find low-dimensional representations of data lying on nonlinear manifolds in high-dimensional space. It has many applications in classification, clustering, data visualization, etc. In recent years, ISOMAP and LLE have emerged as two leading approaches to nonlinear dimensionality reduction. ISOMAP attempts to preserve the global geometric properties of the manifold, while LLE tries to approximate its local geometric properties [1].

Here, we focus only on LLE. Although LLE has been quite successful in many applications, thanks to its ability to handle large amounts of data and its global, non-iterative optimization with only one parameter to adjust (the number of neighbors), it relies on the assumption that the data are well sampled from a single smooth nonlinear manifold. It can perform poorly when this assumption is violated [1] [2] [3] [4]: (a) the data lie on several disjoint manifolds; (b) the data are sparsely and unevenly sampled; (c) the data are contaminated by noise (outliers).

Motivated by these weaknesses, many variants of LLE have been developed to improve its performance and make it robust to outliers. These can roughly be divided into three categories: (a) selecting the neighbors appropriately [2] [5] [6] [7] [10]; (b) identifying the outliers [4] [8]; (c) dealing with the instability of the Gram matrix [3] [9].

In this project, we will first give a brief survey of methods that aim to improve LLE on poorly sampled and noise-contaminated data. The second step of LLE, a least-squares optimization to find the weights that reconstruct a point from its neighbours, can be ill-defined because it may require inverting a singular or near-singular Gram matrix $G$. Therefore, as suggested by Professor Ali Ghodsi, we would like to see whether an eigenvector of the Gram matrix could also serve as a solution to this optimization problem. An eigenvector satisfies the constraint $\mathbf{w^Tw}=1$, whereas the original problem requires the components of the weight vector to sum to one, that is $\mathbf{w^Te}=1$, so the eigenvectors of $G$ need to be normalized, and we must check whether the normalized vector is still a good solution. Under the changed constraint, the optimal weights that best linearly reconstruct a point from its neighbours become the eigenvector of the Gram matrix corresponding to the smallest eigenvalue. Furthermore, we can take the weight vector to be the eigenvector associated with the $k$-th smallest eigenvalue, or some linear combination of the eigenvectors. We would like to investigate whether using eigenvectors of the Gram matrix in this way helps LLE cope with noise-contaminated data; a sketch of both weight computations is given below.
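The following minimal NumPy sketch, written under our own assumptions about variable names (`lle_weights`, `lle_weights_eig`) and a toy neighbourhood, contrasts the standard step-2 weight computation (solve $Gw = e$, then rescale so the weights sum to one, with the usual regularization when $G$ is near singular) with the eigenvector-based alternative described above. It is an illustration of the idea, not a finished algorithm.

```python
import numpy as np

def lle_weights(x, neighbours, reg=1e-3):
    """Standard LLE step 2: weights that best reconstruct x from its neighbours."""
    Z = neighbours - x                        # shift neighbours so x is at the origin
    G = Z @ Z.T                               # local Gram matrix (k x k)
    G += reg * np.trace(G) * np.eye(len(G))   # regularise when G is (near) singular
    w = np.linalg.solve(G, np.ones(len(G)))   # solve G w = e
    return w / w.sum()                        # enforce w^T e = 1

def lle_weights_eig(x, neighbours, which=0):
    """Alternative under study: an eigenvector of the (unregularised) Gram matrix,
    rescaled so its components sum to one."""
    Z = neighbours - x
    G = Z @ Z.T
    eigvals, eigvecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    v = eigvecs[:, which]                     # which=0 -> smallest eigenvalue
    # Rescaling from v^T v = 1 to v^T e = 1 assumes the components do not sum to ~0.
    return v / v.sum()

# Toy example: one point and its k = 4 nearest neighbours in 3D.
rng = np.random.default_rng(1)
x = rng.normal(size=3)
N = x + 0.1 * rng.normal(size=(4, 3))
print("standard weights:   ", lle_weights(x, N))
print("eigenvector weights:", lle_weights_eig(x, N))
```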

References

1. Abdenour Hadid and Matti Pietikäinen, Efficient Locally Linear Embeddings of Imperfect Manifolds, MLDM 2003, LNAI 2734, pp. 188–201, 2003.
2. Yulin Zhang, Jian Zhuang, Sun'an Wang, Xiaohu Li, Local Linear Embedding in Dimensionality Reduction Based on Small World Principle, 2008 International Conference on Computer Science and Software Engineering.
3. Chenping Hou, Jing Wang, Yi Wu, Dongyun Yi, Local Linear Transformation Embedding, Neurocomputing 72, pp. 2368–2378, 2009.
4. Hong Chang, Dit-Yan Yeung, Robust Locally Linear Embedding, Pattern Recognition 39, pp. 1053–1065, 2006.
5. Guihua Wen and Lijun Jiang, Clustering-Based Locally Linear Embedding, 2006 IEEE International Conference on Systems, Man, and Cybernetics.
6. Kanghua Hui, Chunheng Wang, 2008 International Conference on Pattern Recognition.
7. Jian Xiao, Zongtan Zhou, Dewen Hu, Junsong Yin, and Shuang Chen, Self-Organized Locally Linear Embedding for Nonlinear Dimensionality Reduction, ICNC 2005, LNCS 3610, pp. 101–109, 2005.
8. Xianhua Zeng, Siwei Luo, Generalized Locally Linear Embedding Based on Local Reconstruction Similarity, 2008 International Conference on Fuzzy Systems and Knowledge Discovery.
9. Zhenyue Zhang, Jing Wang, MLLE: Modified Locally Linear Embedding Using Multiple Weights, Advances in Neural Information Processing Systems (NIPS), 2006.
10. Yaozhang Pan, Shuzhi Sam Ge, Abdullah Al Mamun, Weighted Locally Linear Embedding for Dimension Reduction, Pattern Recognition 42, pp. 798–811, 2009.

## Project 4: Reducing the dimension of financial time series

### By: Yousef A. Sohrabi, Amir Memartoluie and Shu-tong Tse

#### Introduction

In our project, we apply recently developed dimensionality reduction techniques to analyze several types of financial time series, including but not limited to interest rates, exchange rates, and stock indices. Because of its applied flavor, our project involves substantial implementation work: pre-processing real data, coding a number of algorithms, and comparing the strengths and weaknesses of the algorithms in analyzing financial time series of various kinds. As a remark, we are interested in this project because our research is in computational finance.

#### Related Work

One of the earliest (1991) and best-known applications of dimensionality reduction techniques in a financial setting is the use of PCA to show that 97% of the movements in long-term interest rates can be explained by three factors.<ref name="PCA">R. Litterman, J. Scheinkman; Common factors affecting bond returns; Journal of Fixed Income, 1991</ref> Since this ground-breaking work, the analysis of financial time series has received a great deal of interest in academia, in the financial industry, and in various regulatory bodies.

Despite the development of many advanced dimensionality reduction techniques over the past two decades, we find that almost no technique other than PCA has been used to analyze financial time series in the literature. This is all the more surprising given that PCA was shown to be largely ineffective for analyzing short-term interest rates, exchange rates, and stock indices in a technical report published by the U.S. Federal Reserve in 1997.<ref name="Fed">Mico Loretan; Generating market risk scenarios using PCA; Federal Reserve Board, 1997</ref> In fact, few highly cited papers in this research area can be found after 1997. Nevertheless, a survey paper published in 2004<ref name="survey04">Dongsong Zhang, Lina Zhou; Discovering Golden Nuggets: Data Mining in Financial Application; IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 34, No. 4, November 2004</ref> and a book chapter published in 2008<ref name="chapter08">Stephan K. Chalup and Andreas Mitschele; Kernel Methods in Finance; pp. 655–688, Handbook of Information Technology in Finance; Springer-Verlag, 2008</ref> show that such applications (if not novel research work) have remained popular in recent years.

#### Our Proposal

Essentially, we hope to improve on the results of the 1997 Fed report by applying the more advanced dimensionality reduction techniques covered in this course. We will focus on kernel PCA, MDS, and ISOMAP, as suggested in the 2008 book chapter mentioned above.
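As a first sanity check of this plan, the sketch below (Python with scikit-learn; the simulated `yield_changes` matrix is our own stand-in for the real data we would pre-process) reproduces the classical-PCA variance decomposition underlying the Litterman-Scheinkman result and shows where a nonlinear method such as kernel PCA would slot in; MDS and ISOMAP embeddings would be computed and compared in the same way.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Hypothetical input: rows are trading days, columns are yield changes at
# different maturities (e.g. 3m, 6m, 1y, ..., 30y). Real data would be loaded
# from a rates database; here we simulate a placeholder matrix.
rng = np.random.default_rng(42)
yield_changes = rng.normal(size=(1000, 10))

# Baseline: classical PCA, as in Litterman and Scheinkman. The share of
# variance captured by the first three factors ("level", "slope", "curvature")
# is the benchmark we want the nonlinear methods to improve upon.
pca = PCA(n_components=3)
pca.fit(yield_changes)
print("variance explained by 3 linear factors:", pca.explained_variance_ratio_.sum())

# One nonlinear alternative: kernel PCA with an RBF kernel.
kpca = KernelPCA(n_components=3, kernel="rbf")
embedding = kpca.fit_transform(yield_changes)
print("kernel PCA embedding shape:", embedding.shape)
```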

The key steps in our project will be