Facial Feature Extraction Based on Improved Harris Corner Detection Algorithm

Bibliographic Details
Main Author: Bagherian, Elhaam
Format: Thesis
Language: English
Published: 2011
Online Access: http://psasir.upm.edu.my/id/eprint/20024/1/FSKTM_2011_9_ir.pdf
Description
Summary: The extraction of facial feature points has become an important issue in many applications, such as face recognition, facial expression recognition and face detection. Segmenting the facial feature points in an image is the first important step for human face recognition, identification and verification. Problems arise with different face orientations and poses, and under varied lighting conditions, occlusion and facial expressions. A method of facial feature extraction and corner detection is presented in this study to address these problems. The proposed technique extracts the facial features from a colour image captured by a webcam under normal lighting conditions. In order to precisely extract facial features such as the eyes, mouth and nostrils, several preprocessing steps are applied once the image is captured; some of these steps are also used during the corner detection phase. Experiments are conducted with images of the head in frontal, near-frontal, upward and downward views, and with different expressions such as happy, sad, surprised and neutral. The technique is evaluated on two standard databases, BioID and George Tech, which contain 1520 images and 710 images respectively; each includes images with different orientations, expressions, occlusions and lighting conditions. The technique is also tested with five webcams of differing resolution, quality and specifications, to confirm that its accuracy is maintained across devices. Performance is judged by the extraction accuracy for each feature, namely the nose, eyes and mouth. After validation and verification against the defined performance parameters, the proposed technique is observed to be more accurate and precise.
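
The thesis text itself is not reproduced in this record, but as a rough illustration of the kind of corner-detection step the abstract refers to, the sketch below applies standard (unimproved) Harris corner detection to a face image using OpenCV. The file names, Harris parameters and response threshold are illustrative assumptions, not values or methods taken from the thesis.

```python
# Minimal sketch: standard Harris corner detection on a face image with OpenCV.
# The image path, Harris parameters and threshold below are assumptions for
# illustration only; they do not reproduce the thesis's improved algorithm.
import cv2
import numpy as np

# Load a colour face image (e.g. a webcam capture) and convert to grayscale.
image = cv2.imread("face.jpg")  # hypothetical input file
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

# Harris response: 2x2 neighbourhood, 3x3 Sobel aperture, k = 0.04.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep points whose response exceeds 1% of the maximum; such points tend to
# cluster around high-curvature facial regions (eye corners, nostrils, mouth).
corners = np.argwhere(response > 0.01 * response.max())

# Mark the detected corners on a copy of the original image for inspection.
marked = image.copy()
for y, x in corners:
    cv2.circle(marked, (int(x), int(y)), 2, (0, 0, 255), -1)
cv2.imwrite("face_corners.jpg", marked)
```

In practice, a pipeline like the one the abstract describes would restrict this detection to previously segmented facial regions and filter the raw corners, rather than thresholding the Harris response over the whole frame as this sketch does.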