Objectives: To analyze how the sparse distribution characteristics of gradient-based descriptor data influence the reduction of high-dimensional data, this paper presents an experimental analysis of learned samples of gradient descriptor data. Method: In order to draw valid inferences, a single gradient descriptor, the Edge-based Gabor Magnitude (EGM) facial descriptor, is used. The descriptor data are learned using several linear subspace dimensionality reduction methods. The subspace models are Principal Component Analysis plus Linear Discriminant Analysis (PCA plus LDA), supervised Locality Preserving Projection (sLPP), and Locality Sensitive Discriminant Analysis (LSDA) under the general LGE and OLGE framework, which is used here to help characterize the geometric properties of the data. Findings: Using the plastic surgery data set, the following observations were made. The global linear subspace model (PCA plus LDA), which does not require complex neighborhood assignment, performs favorably in relation to the graph embedding models; this may be because it operates on class information alone. LSDA is observed to be more affected by the nature of the descriptor data, as influenced by the complexity of plastic surgery, since all of its identification rates fall below 60%. In contrast, sLPP proves to be the best-fit model for the sparse nature of the descriptor data. This can be attributed to its locality-preserving property, by which it retains the local structure of sparse (gradient-based) data, and it therefore outperformed PCA plus LDA and, most importantly, LSDA. Applications/Improvements: Understanding the best-fit model for a given type of descriptor data is as important as optimizing recognition rates, an important observation for the face recognition research community.
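
As a point of reference for the PCA plus LDA baseline discussed above, the following is a minimal, illustrative sketch in Python using scikit-learn. The random arrays standing in for EGM descriptor vectors and identity labels, the component counts, and the nearest-neighbour identification step are assumptions made for illustration only; they do not reproduce the authors' implementation or data.

```python
# Minimal sketch: PCA followed by LDA on pre-computed descriptor vectors,
# with a 1-NN classifier in the projected subspace for identification.
# `X` stands in for (n_samples, n_features) EGM descriptor vectors and
# `y` for identity labels; both are random placeholders here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1024))   # placeholder descriptor data
y = rng.integers(0, 20, size=200)      # placeholder identity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# PCA first reduces dimensionality (and removes the null space) before LDA,
# which is the usual "PCA plus LDA" arrangement for face descriptors.
model = make_pipeline(
    PCA(n_components=100),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1),  # nearest-neighbour identification
)
model.fit(X_train, y_train)
print("identification rate on held-out samples:", model.score(X_test, y_test))
```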

Keywords

Data Distribution, Descriptor Data, Dimensionality Reduction, Face Recognition, Graph Embedding, Linear Subspace Learning.