Information fusion of face and palm-print multimodal biometric at matching score level
Format: Thesis
Language: English
Online Access:
http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/1/Page%201-24.pdf
http://dspace.unimap.edu.my:80/xmlui/bitstream/123456789/59422/2/Full%20text.pdf
Summary: Multimodal biometric systems that integrate biometric traits from several modalities are able to overcome the limitations of unimodal biometrics. Fusing information at the intermediate stage, by consolidating the information given by the different traits, can give better results because of the richness of information available at this stage. In this thesis, information fusion at the matching score level is used to integrate the face and palm-print modalities. Three matching score rules are used: the sum, product, and minimum rules. A linear statistical projection method based on principal component analysis (PCA) is used to capture the important information and reduce the feature dimension in the feature space. The fusion process is performed on matching scores computed with a Euclidean distance classifier. Experiments are conducted on the benchmark ORL face and PolyU palm-print datasets to examine the recognition rates of the proposed technique. The best recognition rate, 98.96%, is achieved using the sum rule fusion method. The recognition rate can also be improved by increasing the number of training images and the number of PCA coefficients.
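
The sketch below illustrates the general pipeline the summary describes: PCA projection of each modality, Euclidean distance matching scores, and score-level fusion with the sum, product, and minimum rules. It is a minimal illustration under assumed details (min-max normalisation of distances into similarity scores, random toy data in place of the ORL and PolyU images); the function and variable names are hypothetical and are not taken from the thesis.

    """Minimal sketch of matching-score-level fusion for two modalities.

    Assumptions (not from the thesis): distances are min-max normalised and
    flipped into similarity scores before fusion; all names are illustrative.
    """
    import numpy as np


    def pca_project(train, test, n_components):
        """Project samples onto the top principal components of the training set."""
        mean = train.mean(axis=0)
        # Principal axes via SVD of the centred training data.
        _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
        basis = vt[:n_components].T
        return (train - mean) @ basis, (test - mean) @ basis


    def distance_scores(gallery, probe):
        """Euclidean distance from one probe vector to every gallery sample."""
        return np.linalg.norm(gallery - probe, axis=1)


    def normalise(scores):
        """Min-max normalise distances and convert them to similarity scores."""
        s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
        return 1.0 - s


    def fuse(face_scores, palm_scores, rule="sum"):
        """Combine per-sample similarity scores from the two modalities."""
        if rule == "sum":
            return face_scores + palm_scores
        if rule == "product":
            return face_scores * palm_scores
        if rule == "min":
            return np.minimum(face_scores, palm_scores)
        raise ValueError(f"unknown rule: {rule}")


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy stand-ins for face and palm-print feature vectors.
        face_train, face_probe = rng.normal(size=(40, 1024)), rng.normal(size=1024)
        palm_train, palm_probe = rng.normal(size=(40, 1024)), rng.normal(size=1024)

        face_g, face_p = pca_project(face_train, face_probe[None, :], n_components=20)
        palm_g, palm_p = pca_project(palm_train, palm_probe[None, :], n_components=20)

        face_sim = normalise(distance_scores(face_g, face_p[0]))
        palm_sim = normalise(distance_scores(palm_g, palm_p[0]))

        for rule in ("sum", "product", "min"):
            fused = fuse(face_sim, palm_sim, rule)
            print(rule, "-> best-matching gallery index:", int(np.argmax(fused)))

In this sketch the sum rule simply adds the normalised similarity scores of the two modalities, which matches the fusion rule the summary reports as giving the best recognition rate.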