Large-Scale Face Image Retrieval Using Semantic Facial Attributes And Deep Transferred Descriptors


Bibliographic Details
Main Author: Banaeeyan, Rasoul
Format: Thesis
Published: 2018
Description
Summary: With the ever-increasing popularity of social networks, a colossal number of images containing human faces are being uploaded to the digital world. Analysis of such faces has led to the expansion of fascinating and enabling technologies in different spheres such as social sciences, entertainment, security, etc. Face retrieval is one instance of such enabling technologies; it aims to locate the indices of one or more faces identical to a given query face. The performance of face retrieval systems relies heavily on the careful analysis of different facial components (eyes, nose, mouth, etc.) and, at a higher level, facial attributes (gender, race, age, hair color, eye color, etc.), because these semantic attributes help to tolerate some degree of geometrical distortion, illumination change, expression, and partial occlusion. However, solely employing facial attribute classifiers fails to provide scalability in the context of thousands of distracting face images, even though these classifiers are highly accurate. In addition, owing to the discriminative power of Convolutional Neural Network (CNN) features, recent works have employed a complete set of deep transferred CNN features (taken from fully-connected layers) with a large feature dimensionality to obtain enhanced performance; yet these retrieval systems require high computational power and are very resource-demanding at retrieval time due to the curse of dimensionality. Therefore, this study aims to exploit the distinctive capability of all facial attribute classifiers, while their results are further refined by a proposed sequential subset feature selection that reduces the dimensionality of the features extracted from a very deep pre-trained CNN model (VGG-face).
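The abstract does not spell out the proposed sequential subset feature selection, but its general shape can be illustrated with a minimal sketch: greedily add, one at a time, the feature dimension that most improves a nearest-neighbour retrieval score on held-out identities. Everything below (the 1-NN scoring criterion, the synthetic data, and the function names) is a hypothetical illustration of this family of methods, not the thesis' actual algorithm.

```python
import numpy as np

def nn_retrieval_score(feats, labels):
    """Fraction of samples whose nearest neighbour (excluding self)
    shares the same identity label -- a simple retrieval criterion."""
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-matches
    nearest = dists.argmin(axis=1)
    return float((labels[nearest] == labels).mean())

def sequential_forward_selection(features, labels, k):
    """Greedy sequential forward selection: repeatedly add the single
    remaining dimension that most improves the retrieval score.
    Returns the indices of the k selected dimensions."""
    selected, remaining = [], list(range(features.shape[1]))
    for _ in range(k):
        best_dim, best_score = None, -1.0
        for d in remaining:
            score = nn_retrieval_score(features[:, selected + [d]], labels)
            if score > best_score:
                best_dim, best_score = d, score
        selected.append(best_dim)
        remaining.remove(best_dim)
    return selected

# Toy demonstration: dimension 0 perfectly separates the two identities,
# the remaining four dimensions are pure noise.
rng = np.random.default_rng(0)
labels = np.array([0] * 5 + [1] * 5)
informative = labels[:, None] * 10.0
features = np.hstack([informative, rng.normal(size=(10, 4))])
print(sequential_forward_selection(features, labels, 2))
```

Applied to deep descriptors, the same loop would operate on (groups of) the VGG-face fully-connected activations, trading the exhaustive search cost at selection time for a much smaller descriptor at retrieval time.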