Enhanced facial expression animation for Zapin dance using ray casting and position-based estimation

Bibliographic Details
Main Author: Ahmad, Muhammad Anwar
Format: Thesis
Language: English
Published: 2019
Online Access: http://eprints.utm.my/id/eprint/96296/1/MuhammadAnwarMFC2019.pdf
Description
Summary: Many efforts are currently under way to preserve traditional dances digitally as part of Intangible Cultural Heritage (ICH) preservation, including digital scans, videos, and 2D or 3D animations. Motion capture is a common method for animating a 3D dancer model; however, the captured data is usually limited to body movement and contains no facial expression information. For 3D animation, facial expression animation can be added to the digitised 3D dancer model to increase the authenticity of the dance. Like body language, facial expressions in some dances are essential for communicating effectively with the audience, and awkward expressions make a performance look unnatural. This issue motivated the purpose of this research: to improve the process of adding facial expressions to a virtual 3D dancer model using an algorithm that combines ray casting with position-aware concepts. The proposed algorithm maps facial expressions according to the dancer's current position within particular segments of the dance, with the primary goal of streamlining facial expression animation for 3D Zapin dance motion data. Consequently, the animator neither has to animate the face manually on every keyframe nor resort to facial motion capture systems, which are costly and complex. Within the scope of this research, only the eyes and mouth were animated, as they are the main focus of facial movement when dancers perform. The algorithm and its output were evaluated using an algorithm complexity test and a user evaluation test, respectively. The complexity test, conducted using Big O notation, found that the algorithm runs in O(n) time, making it efficient. The user evaluation was carried out by interviewing three Zapin experts from the Johor Heritage Foundation, who were mostly satisfied with the results.
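
The abstract describes the ray-casting and position-aware mapping only at a high level. As a minimal sketch of one plausible reading, the Python example below treats the dancer's root position in each motion-capture frame as a ray cast straight down onto the stage floor; the first stage zone the ray hits selects the eye-and-mouth expression keyed at that frame. All names here (StageZone, ray_hits_zone, map_expressions) and the zone layout are hypothetical illustrations, not the thesis's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StageZone:
    """Hypothetical axis-aligned floor region paired with an expression.

    The expression string stands in for an eye/mouth blend-shape preset.
    """
    x_min: float
    x_max: float
    z_min: float
    z_max: float
    expression: str

def ray_hits_zone(px: float, pz: float, zone: StageZone) -> bool:
    """A ray cast straight down from the dancer's (x, z) root position
    reduces the hit test to a 2D point-in-rectangle check on the floor."""
    return zone.x_min <= px <= zone.x_max and zone.z_min <= pz <= zone.z_max

def map_expressions(frames, zones, default="neutral"):
    """For each motion-capture frame (x, z root position), key the
    expression of the first zone the downward ray hits, falling back
    to a neutral face. One pass over the N frames."""
    keyframes = []
    for i, (px, pz) in enumerate(frames):
        expr = default
        for zone in zones:
            if ray_hits_zone(px, pz, zone):
                expr = zone.expression
                break
        keyframes.append((i, expr))
    return keyframes

# Illustrative layout: two stage zones for a short dance segment.
zones = [
    StageZone(-2.0, 0.0, 0.0, 2.0, "smile"),
    StageZone(0.0, 2.0, 0.0, 2.0, "eyes_to_partner"),
]
frames = [(-1.0, 1.0), (0.5, 1.0), (3.0, 1.0)]
print(map_expressions(frames, zones))
# [(0, 'smile'), (1, 'eyes_to_partner'), (2, 'neutral')]
```

Because each frame is checked once against a small, fixed set of zones, a single pass over N frames is consistent with the O(n) complexity reported in the abstract, assuming the zone count does not grow with the length of the dance.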