Spatio-temporal normalized joint coordinates as features for skeleton-based human action recognition


Bibliographic Details
Main Author: Nasrul ‘Alam, Fakhrul Aniq Hakimi
Format: Thesis
Language: English
Published: 2022
Subjects:
Online Access: http://eprints.utm.my/id/eprint/99599/1/FakhrulAniqHakimiMMJIIT2022.pdf
Description
Summary: Human Action Recognition (HAR) is critical in video monitoring, human-computer interaction, video comprehension, and virtual reality. While significant progress has been made in the HAR domain in recent years, developing an accurate, fast, and efficient system for video action recognition remains a challenge due to a variety of obstacles, such as changes in camera viewpoint, occlusions, background clutter, and motion speed. In general, an action recognition model learns spatial and temporal features in order to classify human actions. The state-of-the-art deep learning approaches to skeleton-based action recognition rely primarily on Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN). RNN-based action recognition methods model only the long-term contextual information in the temporal domain. In turn, they neglect the spatial configuration of the articulated skeleton, where the joints are strongly discriminative; this makes it challenging to extract high-level features. In contrast, CNN-based action recognition is incapable of modelling long-term temporal dependency. Typical implementations stack a limited number of frames and convert them into images to represent spatio-temporal information; however, this approach is susceptible to information loss during the conversion process. This study proposes STEM-Coords, a pre-processing and feature extraction technique that effectively represents spatio-temporal features using joint coordinates from a human pose. The feature set, comprising normalized joint coordinates and their respective speeds, was represented in tabular form as input to the Neural Oblivious Decision Ensemble (NODE) classification model. The proposed STEM-Coords was validated on three benchmark datasets: KTH, RealWorld HAR, and MSR DailyActivity 3D. Our method outperformed the state-of-the-art approaches on every dataset, with accuracy rates of 97.3%, 99.3%, and 97.4%, respectively.
The results demonstrated that our proposed method represents spatio-temporal information effectively and efficiently while remaining robust to partial occlusion, anthropometric variation, and changes in viewpoint.
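The abstract describes the feature set as normalized joint coordinates plus their speeds, arranged as tabular input for the classifier. A minimal sketch of this idea follows; note that the abstract does not specify the normalization scheme, so the choices below (centering on a root joint and scaling by a reference bone length, with speed as per-frame displacement) are illustrative assumptions, not the thesis's exact STEM-Coords definition.

```python
import numpy as np

def joint_coord_features(joints, fps=30.0, root=0, ref_a=0, ref_b=1):
    """Sketch of spatio-temporal joint features (assumed scheme, not the
    thesis's exact STEM-Coords formulation).

    joints : array of shape (T, J, 2) -- per-frame 2D joint coordinates.
    Returns a tabular array of shape (T, J * 4): per frame, every joint
    contributes [x, y, vx, vy] after normalization.
    """
    joints = np.asarray(joints, dtype=float)
    # translation invariance: center each frame on an assumed root joint
    centered = joints - joints[:, root:root + 1, :]
    # scale invariance: divide by an assumed reference bone length
    scale = np.linalg.norm(joints[:, ref_a] - joints[:, ref_b], axis=-1)
    scale = np.where(scale > 0, scale, 1.0)
    norm = centered / scale[:, None, None]
    # speed: frame-to-frame displacement of the normalized coordinates
    speed = np.zeros_like(norm)
    speed[1:] = (norm[1:] - norm[:-1]) * fps
    # one tabular row per frame, ready for a tree-ensemble-style model
    return np.concatenate([norm, speed], axis=-1).reshape(len(joints), -1)
```

In this sketch each frame becomes one row of a table, which matches the abstract's point that the features are consumed by a tabular model (NODE) rather than stacked into pseudo-images as in CNN-based pipelines.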