Efficient Learning of Optimal Linear Representations for Object Classification

Name(s): Wu, Yiming, author
Liu, Xiuwen, professor directing dissertation
Zhang, Jinfeng, university representative
Kumar, Piyush, committee member
Srinivasan, Ashok, committee member
Li, FeiFei, committee member
Mio, Washington, committee member
Department of Computer Science, degree granting department
Florida State University, degree granting institution
Type of Resource: text
Genre: Text
Issuance: monographic
Date Issued: 2010
Publisher: Florida State University
Place of Publication: Tallahassee, Florida
Physical Form: computer; online resource
Extent: 1 online resource
Language(s): English
Abstract/Description: In many pattern classification problems, efficiently learning a suitable low-dimensional representation of high-dimensional data is essential. The advantages of linear dimension reduction methods are their simplicity and efficiency. Optimal component analysis (OCA) is a recently proposed linear dimension reduction method that seeks to optimize the discriminative ability of the nearest-neighbor classifier for data classification and labeling. Mathematically, OCA defines an objective function that aims to discriminatively separate data in different classes, and an optimal basis is obtained through a stochastic gradient search on the underlying Grassmann manifold. OCA shows good performance in various applications, including face recognition, object recognition, and image retrieval. However, a limitation of OCA is its high computational complexity, which prevents its wide use in real applications. In this dissertation, several efficient methods, including two-stage OCA, multi-stage OCA, scalable OCA, and two-stage sphere factor analysis (SFA), are proposed to address this problem and achieve both efficiency and accuracy. Two-stage and multi-stage OCA speed up the OCA search by reducing the dimension of the search space; scalable OCA uses a more efficient gradient updating method to reduce the computational complexity of OCA; two-stage SFA first reduces the search space and then searches for the optimal basis on a geometrically simpler manifold than that of OCA. Furthermore, a sparse OCA method is proposed by adding sparseness constraints to OCA. Additionally, an application of the efficient OCA methods to rapid classification trees is presented. Experimental results on face and object classification show that these methods achieve efficiency and discrimination simultaneously.
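
As a reading aid, the following is a minimal Python sketch of the kind of search the abstract describes: a linear basis U (an orthonormal frame, i.e., a point on the Grassmann manifold) is optimized for nearest-neighbor separability of the projected data. The ratio-based criterion and the random-perturbation hill climbing below are illustrative assumptions standing in for the dissertation's exact OCA objective and stochastic gradient updates.

# Illustrative sketch only (not the dissertation's code or exact objective):
# search for a linear basis U on the Grassmann manifold, in the spirit of
# optimal component analysis (OCA). The objective form, step size, and
# perturbation scheme are assumptions chosen for brevity.
import numpy as np

def separability(U, X, y):
    """Nearest-neighbor separability of data X (n x d) with labels y under
    the projection U (d x k): mean ratio of each point's nearest
    other-class distance to its nearest same-class distance."""
    Z = X @ U
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)  # exclude each point from its own neighbors
    score = 0.0
    for i in range(len(y)):
        same = D[i, y == y[i]]
        diff = D[i, y != y[i]]
        score += diff.min() / (same.min() + 1e-12)
    return score / len(y)

def oca_style_search(X, y, k, iters=200, sigma=0.1, seed=0):
    """Random-perturbation ascent over orthonormal bases: propose a nearby
    point on the manifold and keep it if the objective improves."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random initial basis
    best = separability(U, X, y)
    for _ in range(iters):
        # Perturb in the ambient space, then retract onto the manifold
        # of orthonormal frames with a QR factorization.
        V, _ = np.linalg.qr(U + sigma * rng.standard_normal((d, k)))
        f = separability(V, X, y)
        if f > best:
            U, best = V, f
    return U, best

The QR factorization serves here as one simple retraction back onto the manifold after each perturbation; the dissertation's methods instead follow gradient flows on the Grassmann (or, for SFA, sphere) geometry, which this sketch does not reproduce.
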
Identifier: FSU_migr_etd-0700 (IID)
Submitted Note: A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Degree Awarded: Spring Semester, 2010.
Date of Defense: March 24, 2010.
Keywords: Optimal Basis, Linear Dimension Reduction, Grassmann Manifold, Object Classification, Stochastic Search
Bibliography Note: Includes bibliographical references.
Advisory Committee: Xiuwen Liu, Professor Directing Dissertation; Jinfeng Zhang, University Representative; Piyush Kumar, Committee Member; Ashok Srinivasan, Committee Member; FeiFei Li, Committee Member; Washington Mio, Committee Member.
Subject(s): Computer science
Persistent Link to This Record: http://purl.flvc.org/fsu/fd/FSU_migr_etd-0700
Owner Institution: FSU

Citation:
Wu, Y. (2010). Efficient Learning of Optimal Linear Representations for Object Classification. Retrieved from http://purl.flvc.org/fsu/fd/FSU_migr_etd-0700