An Overview of Distance Metric Learning
| Content Provider | Semantic Scholar |
|---|---|
| Author | Yang, Liu |
| Copyright Year | 2007 |
| Abstract | In our previous comprehensive survey [41], we have categorized the disparate issues in distance metric learning. Within each of the four categories, we have summarized existing work and disclosed their essential connections, strengths, and weaknesses. The first category is supervised distance metric learning, which contains supervised global distance metric learning, local adaptive supervised distance metric learning, Neighborhood Component Analysis (NCA) [13], and Relevant Components Analysis (RCA) [1]. The second category is unsupervised distance metric learning, covering linear methods (Principal Component Analysis (PCA) [14], Multidimensional Scaling (MDS) [5]) and nonlinear embedding methods (ISOMAP [35], Locally Linear Embedding (LLE) [30], and Laplacian Eigenmap (LE) [2]). We further unify these algorithms into a common framework based on the embedding computation. The third category, maximum-margin-based distance metric learning approaches, includes the large margin nearest neighbor based distance metric learning methods and semi-definite programming (SDP) methods that solve the kernelized margin maximization problem. The fourth category, kernel methods for learning distance metrics, covers kernel alignment [28] and its SDP approaches [26], as well as the extension work of learning the idealized kernel [25]. In addition to this survey [41], here we provide a complete and updated summarization of the related work on both unsupervised and supervised distance metric learning, including the most recent work in the area. Many unsupervised distance metric learning algorithms essentially serve the purpose of unsupervised dimensionality reduction, i.e. learning a low-dimensional embedding of the original feature space. This group of methods can be divided into nonlinear and linear methods.
The well-known algorithms for nonlinear unsupervised dimensionality reduction are ISOMAP [35], Locally Linear Embedding (LLE) [30], and Laplacian Eigenmap (LE) [2]. ISOMAP seeks the subspace that best preserves the geodesic distances between any two data points, while LLE and LE focus on the preservation of local neighbor structure. An improved and stable version of LLE is achieved in [45] by introducing multiple linearly independent local weight vectors for each neighborhood. Among the linear methods, Principal Component Analysis (PCA) [14] finds the subspace that best preserves the variance of the data; Multidimensional Scaling (MDS) [5] finds the low-rank projection that best preserves the interpoint distances given by the pairwise distance matrix; Independent Components Analysis (ICA) [4] seeks a linear transformation to coordinates in which the data are maximally statistically independent. |
| File Format | PDF / HTML |
| Alternate Webpage(s) | http://www.cs.cmu.edu/~liuy/dist_overview.pdf |
| Alternate Webpage(s) | http://www.cse.msu.edu/~yangliu1/dist_overview.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
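As an illustration of the linear methods the abstract describes, the following is a minimal sketch of PCA in Python with NumPy: it projects data onto the top-k eigenvectors of the covariance matrix, i.e. the subspace that best preserves the variance of the data. The function name and variable names are our own for illustration, not taken from the survey.

```python
import numpy as np

def pca(X, k):
    """Project an (n x d) data matrix X onto its top-k principal components.

    PCA finds the k-dimensional linear subspace that best preserves the
    variance of the data, as described in the survey's abstract.
    """
    Xc = X - X.mean(axis=0)               # center the data
    cov = np.cov(Xc, rowvar=False)        # d x d sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]     # sort eigenvalues in descending order
    W = eigvecs[:, order[:k]]             # top-k eigenvectors as columns
    return Xc @ W                         # low-dimensional embedding (n x k)
```

By construction, the first embedding coordinate captures at least as much variance as the second, and so on; nonlinear methods such as ISOMAP or LLE replace this global variance criterion with the preservation of geodesic or local neighborhood structure.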