Some New Methods for Image Feature Extraction
|School||Nanjing University of Technology and Engineering|
|Course||Pattern Recognition and Intelligent Systems|
|Keywords||Pattern Recognition; Feature Extraction; Classifier; Fuzzy Set; Fisher Linear Discriminant Analysis; Small Sample Size Problem; Face Recognition|
With the development of network and communication technologies and the continuous expansion of the physical and virtual spaces of human activity, information security has raised more and more public concern. Biometric technologies use computers to automatically identify a person by means of his or her distinct physiological or behavioral characteristics. As an important branch of biometrics, face recognition exploits the effective information in facial images and video for automatic personal authentication. Compared with other biometrics, face recognition possesses many virtues: data acquisition is convenient and contactless, it causes no harm to the user, and it supports a highly interactive mode. Face recognition has therefore become a hotspot of biometrics with wide application prospects. Many feature extraction methods have been developed to support it: for example, as a classical algebraic method, linear discriminant analysis (LDA) has been developed to solve the linear problem, and a series of kernel methods based on the support vector machine (SVM) have been proposed to solve the nonlinear one. In this dissertation, both linear and nonlinear feature extraction methods are deeply analyzed and researched. In our work, structural information is incorporated into the feature extraction process, so that more discriminant information is preserved in the extracted feature space; furthermore, both effectiveness and efficiency are considered in the proposed methods. The main creative work in this dissertation is as follows:

The kernel trick, developed on the basis of statistical learning theory and the SVM method, is an effective strategy for solving nonlinear problems. In this dissertation, a new approach to face classification based on SVM and the decision tree (DT) is developed.
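For readers unfamiliar with the baseline, the classical LDA referenced above can be sketched as follows. This is a generic textbook formulation, not the dissertation's own implementation; the function name and the small regularization term added for numerical stability are assumptions.

```python
import numpy as np

def lda(X, y, n_components):
    """Classical Fisher LDA: maximize between-class scatter relative to
    within-class scatter via the eigenvectors of Sw^{-1} Sb."""
    classes = np.unique(y)
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        dm = (mc - mean_all)[:, None]
        Sb += len(Xc) * dm @ dm.T              # between-class scatter
    # solve Sw^{-1} Sb w = lambda w (ridge term assumed, for stability)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_components]].real

# toy two-class example: well-separated Gaussian clusters in 3-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
W = lda(X, y, 1)
```

Projecting the samples onto the single column of `W` yields a one-dimensional space in which the two class means are clearly separated.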
Firstly, traditional SVM and fuzzy support vector machine (FSVM) methods for multi-class problems are introduced; then a hybrid classifier algorithm (SRA-DT-FSVM) based on DT-FSVM and the nearest-neighbor classifier is proposed, in which the idea of sample region analysis (SRA) is incorporated into the design of the new classifier. Meanwhile, in the LDA feature space, a comprehensive performance comparison for face recognition is made among the algorithms FSVM, DT-FSVM, SRA-DT-FSVM and Binary-DT-SVM. The experimental results conducted on ORL face images show the effectiveness of these new classifier methods; furthermore, the classification speed of FSVM based on DT is improved considerably.

Conventional linear discriminant analysis is based on a binary classification criterion: when samples are assigned, a hard criterion is adopted, and each sample either belongs to a given class or does not. In real applications, however, the distribution of samples is often affected by the environment; for example, some well-known feature extraction methods are relatively sensitive to substantial variations in lighting direction, face pose, and facial expression. Therefore, in our work, fuzzy structural information is incorporated into the feature extraction process, and more discriminant information is preserved in the extracted feature space. In this dissertation, we propose a novel complete fuzzy discriminant analysis approach to the face recognition task. The algorithm is based on an improved fuzzy LDA feature extraction method and the DT-FSVM classifier, and it has increased discriminatory capability and is more adaptive than traditional fuzzy LDA. In particular, considering that outlier samples among the patterns may adversely influence the classification result, we develop a novel algorithm that incorporates a relaxed normalization condition into the definition of the fuzzy membership function.
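As an illustration of how a fuzzy membership function can down-weight outliers, the sketch below uses the well-known Keller-style k-nearest-neighbor membership, with a floor value standing in for a relaxed normalization condition. The function name, the `relax` parameter, and its value are assumptions for illustration only, not the dissertation's actual definition.

```python
import numpy as np

def fuzzy_memberships(X, y, k=3, relax=0.49):
    """Keller-style kNN fuzzy membership (illustrative sketch).
    Each sample's membership in its own class depends on how many of its
    k nearest neighbors share its label; outliers get low membership.
    The `relax` floor keeps memberships away from zero."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                  # ignore self
    u = np.empty(n)
    for i in range(n):
        nn = np.argsort(D[i])[:k]
        frac = np.mean(y[nn] == y[i])        # fraction of same-class neighbors
        u[i] = relax + (1.0 - relax) * frac  # relaxed (floored) membership
    return u

# toy data: two tight clusters plus one mislabeled outlier
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1], [4.9, 4.9]], float)
y = np.array([0, 0, 0, 1, 1, 1, 0])   # last sample: class 0 label inside cluster 1
u = fuzzy_memberships(X, y, k=3)
```

The outlier (last sample) receives a low membership, so a fuzzy LDA weighted by `u` would let it contribute little to the class means and scatter matrices.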
Thus, the classification limitation caused by outlier samples is effectively alleviated.

The small sample size (SSS) problem commonly arises when the Fisher linear discriminant criterion is applied to feature extraction. A popular approach to the SSS problem is the removal of non-informative features via subspace-based decomposition techniques. Motivated by this viewpoint, a new LDA feature extraction method, the symmetric null spaces algorithm (OSNS), is proposed in this dissertation. The algorithm combines the Fisher maximal discriminant criterion with the minimal criterion, and the symmetric null spaces of the within-class scatter matrix and the between-class scatter matrix are defined respectively. The two null spaces are used sequentially under the different Fisher discriminant criteria to obtain the most efficient classification information describing the whole set of face images. This is done by merging the runs of the two algorithms based on the minimal discriminant criterion and its inverse; the limitation that the dimension of the final eigenvectors is determined by the number of classes is thereby overcome. The algorithm is applied to the face recognition problem, and simulation results on several face image databases show its high average success rate compared with other algorithms.

Sequential Minimal Optimization (SMO) is a simple algorithm that can quickly solve the SVM QP problem without any extra matrix storage and without using numerical QP optimization steps at all. SMO decomposes the overall QP problem into QP sub-problems, using Osuna's theorem to ensure convergence. In this dissertation, DT-FSMO, a combination of SMO and DT-FSVM, is proposed, in which features are extracted from the original face images using independent component analysis (ICA).
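The null-space idea behind this family of methods can be sketched generically: when the within-class scatter Sw is singular (as in the SSS case), its null space contains directions along which within-class scatter vanishes while between-class scatter need not. The sketch below is standard null-space LDA, not the OSNS algorithm itself (which additionally exploits the null space of the between-class scatter matrix); the function name and tolerance are assumptions.

```python
import numpy as np

def null_space_lda(X, y, n_components):
    """Standard null-space LDA for the small-sample-size case: project
    the between-class scatter Sb onto null(Sw), where within-class
    scatter is zero, then take the leading eigenvectors there."""
    classes = np.unique(y)
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)       # within-class scatter
        dm = (mc - mean_all)[:, None]
        Sb += len(Xc) * dm @ dm.T           # between-class scatter
    # null space of Sw from its SVD: right singular vectors with ~0 values
    _, s, Vt = np.linalg.svd(Sw)
    tol = 1e-8 * s.max()
    N = Vt[s < tol].T                       # basis of null(Sw), shape (d, k)
    # maximize between-class scatter inside that null space
    evals, evecs = np.linalg.eigh(N.T @ Sb @ N)   # ascending eigenvalues
    return N @ evecs[:, ::-1][:, :n_components]

# SSS demo: 6 samples in 10 dimensions (fewer samples than features)
rng = np.random.default_rng(1)
means = rng.normal(0, 3, (3, 10))
X = np.vstack([m + rng.normal(0, 0.1, (2, 10)) for m in means])
y = np.repeat([0, 1, 2], 2)
W = null_space_lda(X, y, 2)
```

Because the discriminant vectors lie in null(Sw), samples of the same class project to (numerically) identical points, so within-class scatter is exactly suppressed.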
A comparison between DT-FSMO and DT-FSVM is then made; the experimental results conducted on ORL face images show that DT-FSMO is more effective than DT-FSVM provided that the number of classes is relatively small.

Classification of nonlinear high-dimensional data is usually not amenable to standard pattern recognition techniques because of the underlying nonlinear small sample size conditions. To address this problem, a novel kernel fourfold subspace learning (KFS) method is developed in this dissertation. First, a hybrid discriminant criterion based on Fisher theory is proposed, by which the fourfold subspaces derived from the within-class and between-class scatter matrices are constructed, respectively. Second, considering that the kernel Fisher discriminant (KFD) effectively extracts nonlinear discriminative information from the input feature space by using the kernel trick, a kernel algorithm based on the new Fisher discriminant criterion is subsequently presented, which has the potential to outperform traditional subspace learning algorithms, especially in nonlinear small sample size cases. Experimental results conducted on the ORL and Yale face databases demonstrate the effectiveness of the proposed method.
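As background, the two-class kernel Fisher discriminant that KFS builds on can be sketched with an RBF kernel as below, following the standard Mika et al. formulation. The kernel choice, the `gamma` value, and the ridge term `reg` are illustrative assumptions, not parameters from the dissertation.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kfd_fit(X, y, gamma=2.0, reg=1e-3):
    """Two-class kernel Fisher discriminant: solve
    (N + reg*I) alpha = M1 - M0, where M_c are kernelized class means
    and N is the kernelized within-class scatter."""
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    M = [K[:, y == c].mean(axis=1) for c in (0, 1)]
    N = np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(n), M[1] - M[0])

def kfd_project(alpha, Xtrain, Xnew, gamma=2.0):
    """Project new samples onto the learned discriminant direction."""
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha

# XOR-style demo: the two classes are not linearly separable
X = np.array([[0, 0], [1, 1], [0.1, 0.1], [0.9, 0.9],
              [0, 1], [1, 0], [0.1, 0.9], [0.9, 0.1]], float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
alpha = kfd_fit(X, y)
p = kfd_project(alpha, X, X)
```

Even though no linear projection separates these classes, the kernelized discriminant pushes the two class means apart in the projected space, which is the property the kernel trick contributes to nonlinear subspace learning.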