Information fusion of face and palm-print multimodal biometric at matching score level



Bibliographic Details
Main Author: Mohammed Elzaroug, Alshrief
Format: Thesis
Language: English
Subjects:
Online Access: http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/1/Page%201-24.pdf
http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/2/Full%20text.pdf
Summary: Multimodal biometric systems that integrate biometric traits from several modalities are able to overcome the limitations of single-modal biometrics. Fusing the information at the middle stage, by consolidating the information given by the different traits, can give a better result due to the richness of information available at this stage. In this thesis, information fusion at the matching score level is used to integrate the face and palm-print modalities. Three matching score rules are used: the sum, product, and minimum rules. A linear statistical projection method based on principal component analysis (PCA) is used to capture the important information and reduce the feature dimension in the feature space. The fusion process is performed on matching scores computed with a Euclidean distance classifier. Experiments are conducted on the benchmark ORL face and PolyU palm-print datasets to examine the recognition rates of the proposed technique. The best recognition rate, 98.96%, is achieved with the sum-rule fusion method. The recognition rate can also be improved by increasing the number of training images and the number of PCA coefficients.
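
The pipeline described in the summary (PCA feature projection, Euclidean-distance matching, and score-level fusion with the sum, product, and minimum rules) could be sketched roughly as follows in Python with NumPy. This is only an illustrative sketch, not the thesis's actual implementation: the image dimensions, the random stand-in data, the min-max score normalisation, and all function names are assumptions.

# Sketch of score-level fusion of face and palm-print matchers.
# Data shapes and the min-max normalisation step are assumptions;
# the thesis itself uses the ORL face and PolyU palm-print datasets.
import numpy as np

def pca_fit(X, n_components):
    """Learn a PCA projection from training vectors X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal directions in the rows of Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(X, mean, components):
    """Project vectors into the reduced PCA feature space."""
    return (X - mean) @ components.T

def match_scores(probe, gallery):
    """Euclidean distances from one probe feature vector to each gallery vector,
    converted to similarity scores in [0, 1] by min-max normalisation (an assumption)."""
    d = np.linalg.norm(gallery - probe, axis=1)
    return 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-12)

def fuse(face_scores, palm_scores, rule="sum"):
    """Combine the two matchers' score vectors with the sum, product, or minimum rule."""
    if rule == "sum":
        return face_scores + palm_scores
    if rule == "product":
        return face_scores * palm_scores
    if rule == "min":
        return np.minimum(face_scores, palm_scores)
    raise ValueError(rule)

# Toy example: random vectors standing in for gallery and probe images
# (10304 = 92x112 as in ORL; 16384 = 128x128 is an assumed palm-print ROI size).
rng = np.random.default_rng(0)
face_train, palm_train = rng.normal(size=(40, 10304)), rng.normal(size=(40, 16384))
face_probe, palm_probe = rng.normal(size=10304), rng.normal(size=16384)

f_mean, f_comp = pca_fit(face_train, n_components=20)
p_mean, p_comp = pca_fit(palm_train, n_components=20)

face_s = match_scores(pca_project(face_probe[None], f_mean, f_comp)[0],
                      pca_project(face_train, f_mean, f_comp))
palm_s = match_scores(pca_project(palm_probe[None], p_mean, p_comp)[0],
                      pca_project(palm_train, p_mean, p_comp))

fused = fuse(face_s, palm_s, rule="sum")
print("best match index:", int(np.argmax(fused)))

In this sketch the identity assigned to the probe is simply the gallery entry with the highest fused score; swapping rule="sum" for "product" or "min" reproduces the other two fusion rules compared in the thesis.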