Facial Expression Synthesis using Kernel Approach

Bibliographic Details
Main Author: Marcella, Peter
Format: Thesis
Language:English
Published: 2020
Subjects:
Online Access:http://ir.unimas.my/id/eprint/32957/3/Facial%20Expression%20Synthesis%20using%20Kernel%20Approach.pdf
id my-unimas-ir.32957
record_format uketd_dc
institution Universiti Malaysia Sarawak
collection UNIMAS Institutional Repository
language English
topic QA75 Electronic computers
Computer science
spellingShingle QA75 Electronic computers
Computer science
Marcella, Peter
Facial Expression Synthesis using Kernel Approach
description Recently, the study of facial identity and emotion has gained interest from researchers, especially in work that integrates human emotions and machine learning to improve everyday life. Emotions are expressed first through facial expression and then through body language to deliver information. By nature, emotions such as happiness, sadness, and surprise are easily expressed; in computational terms, however, synthesising realistic facial expressions remains a challenging task. Various methods have therefore been proposed to build better facial expression synthesis systems, including learning-based and statistical approaches. Most of these approaches apply linear methods, the most commonly used being Principal Component Analysis (PCA). PCA is a linear transformation technique that can reduce high-dimensional data, extract facial features from an input, transform the extracted features to represent a face via a face model, and subsequently be extended to face recognition systems. However, linear transformations may lose information along the way, and the structure of a face is too complex to be fully expressed by a linear method. Therefore, in this study, a kernel-based method is proposed to address the transformation and projection problems of the linear approach. The study explored the potential of a nonlinear kernel approach for synthesising neutral facial expressions on 3D geometric face models to improve performance and recognition rates. The kernel approach employed in the research is a novel modified kernel-based Active Shape Model that uses a mean template-based face model. The results from the modified kernel method are then compared with the linear Active Shape Model, and the outcome of face recognition is used to evaluate the resulting synthesised neutral facial expressions.
Experimental results recorded the highest recognition rate at 100% true positives and showed that recognition outperformed the linear Active Shape Model. The qualitative results of the synthesis also showed near-realistic facial expressions of the subjects. In conclusion, the proposed modified kernel-based Active Shape Model using a template-based approach can improve the synthesis of facial expressions, which in turn increases recognition rates. Future work includes further investigating the effect of adjusting expression intensity on the shape model of the synthesised facial expression, integrating the nonlinear approach into an automated face recognition system, and applying optimisation to improve the efficiency of the modified kernel-based Active Shape Model.
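The description above contrasts linear PCA with a kernel-based transform. As an illustration only, and not the thesis's modified kernel-based Active Shape Model, the following minimal NumPy sketch shows kernel PCA with an RBF (Gaussian) kernel, the standard nonlinear analogue of PCA that underlies kernel-based shape models; the function name, the `gamma` parameter, and the random data are hypothetical:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project data onto the leading kernel principal components (RBF kernel).

    Illustrative sketch only: unlike linear PCA, the eigendecomposition is
    done on the centred kernel matrix rather than the data covariance.
    """
    # Pairwise squared Euclidean distances between samples
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * d2)                      # RBF kernel matrix
    # Centre the kernel matrix in the implicit feature space
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition; keep the largest n_components eigenpairs
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Projection of the training samples onto the kernel components
    return vecs * np.sqrt(np.maximum(vals, 1e-12))

if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = rng.randn(20, 3)                         # 20 toy "shape" vectors
    Y = kernel_pca(X, n_components=2, gamma=0.5)
    print(Y.shape)                               # (20, 2)
```

Replacing the PCA step of an Active Shape Model with such a kernel projection is what lets the shape model capture nonlinear variation, at the cost of needing a pre-image (back-mapping) step to reconstruct faces from the kernel space.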
format Thesis
qualification_level Master's degree
author Marcella, Peter
author_facet Marcella, Peter
author_sort Marcella, Peter
title Facial Expression Synthesis using Kernel Approach
title_short Facial Expression Synthesis using Kernel Approach
title_full Facial Expression Synthesis using Kernel Approach
title_fullStr Facial Expression Synthesis using Kernel Approach
title_full_unstemmed Facial Expression Synthesis using Kernel Approach
title_sort facial expression synthesis using kernel approach
granting_institution Universiti Malaysia Sarawak (UNIMAS)
granting_department Faculty of Computer Science and Information Technology
publishDate 2020
url http://ir.unimas.my/id/eprint/32957/3/Facial%20Expression%20Synthesis%20using%20Kernel%20Approach.pdf
_version_ 1783728419214721024