Information fusion of face and palm-print multimodal biometric at matching score level


Overview

Bibliographic Details
Main Author: Mohammed Elzaroug, Alshrief
Format: Thesis
Language: English
Subjects:
Online Access: http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/1/Page%201-24.pdf
http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/2/Full%20text.pdf
Physical Description
Summary: Multimodal biometric systems that integrate biometric traits from several modalities are able to overcome the limitations of single-modal biometrics. Fusing the information at the middle stage, by consolidating the information given by the different traits, can give a better result due to the richness of information at this stage. In this thesis, information fusion at the matching score level is used to integrate the face and palm-print modalities. Three matching score rules are used: the sum, product, and minimum rules. A linear statistical projection method based on principal component analysis (PCA) is used to capture the important information and reduce the feature dimension in the feature space. The fusion process is performed on matching scores computed with a Euclidean distance classifier. Experiments are conducted on the benchmark ORL face and PolyU palm-print datasets to examine the recognition rates of the proposed technique. The best recognition rate, 98.96%, is achieved using the sum-rule fusion method. The recognition rate can also be improved by increasing the number of training images and the number of PCA coefficients.
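The score-level fusion described in the summary can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes each modality produces per-class Euclidean distance scores (lower means a better match) that are min-max normalised before applying the sum, product, or minimum rule; all function names are illustrative.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matching scores to the [0, 1] range (min-max normalisation)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(face_scores, palm_scores, rule="sum"):
    """Fuse per-class distance scores from two modalities at the score level.

    face_scores, palm_scores: one Euclidean distance per enrolled class.
    rule: "sum", "product", or "min" -- the three rules named in the summary.
    Returns the index of the predicted class (smallest fused distance).
    """
    f = min_max_normalize(face_scores)
    p = min_max_normalize(palm_scores)
    if rule == "sum":
        fused = f + p
    elif rule == "product":
        fused = f * p
    elif rule == "min":
        fused = np.minimum(f, p)
    else:
        raise ValueError(f"unknown fusion rule: {rule}")
    # Scores are distances, so the best match minimises the fused score.
    return int(np.argmin(fused))
```

For example, with face distances `[0.9, 0.2, 0.7]` and palm distances `[0.8, 0.1, 0.6]`, all three rules agree that class 1 is the best match, since it has the smallest distance in both modalities.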