Learning Image Manifolds by Semantic Subspace Projection




Yu, Jie
Tian, Qi



UTSA Department of Computer Science


In many image retrieval applications, the mapping between high-level semantic concepts and low-level features is obtained through a learning process. Traditional approaches often assume that images with the same semantic label share strong visual similarities and should be clustered together to facilitate modeling and classification. Our research indicates that this assumption is inappropriate in many cases. Instead, we model images as lying on non-linear subspaces embedded in the high-dimensional feature space and find that multiple subspaces may correspond to a single semantic concept. By exploiting the similarity and dissimilarity information in both the semantic and geometric (image) domains, we find an optimal Semantic Subspace Projection (SSP) that captures the properties of the subspaces most important for classification. Theoretical analysis shows that the well-known Linear Discriminant Analysis (LDA) can be formulated as a special case of the proposed method. To track semantic concepts dynamically, SSP integrates relevance feedback efficiently through incremental learning. Extensive experiments compare the proposed method with state-of-the-art techniques such as LDA, Locality Preserving Projection (LPP), Locally Linear Embedding (LLE), Local Discriminant Embedding (LDE), and their semi-supervised variants. The results show the superior performance of SSP.
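As a point of reference for the LDA baseline the abstract cites as a special case of SSP, the following is a minimal sketch of the classical LDA projection (within-class and between-class scatter, then a generalized eigenproblem). The function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """Return the top LDA discriminant directions for data X with labels y."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))  # within-class scatter
    S_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_w += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem S_b w = lambda S_w w
    # via the (pseudo-)inverse formulation pinv(S_w) @ S_b.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Toy example: two Gaussian clusters in 2-D, projected to 1-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y)
Z = X @ W  # 1-D embedding that separates the two classes
```

SSP generalizes beyond this: where LDA assumes one cluster per class, the abstract's point is that a single semantic concept may span multiple non-linear subspaces, which a single global scatter pair cannot capture.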



algorithms, theory, performance, experimentation, measurement, semantic subspace projection, image retrieval, relevance feedback, subspace learning, principal component analysis, linear discriminant analysis



Computer Science