Analyzing Body Language and Emotional Expression Mechanisms in Dance through Computer Vision

By: Guojing Tan1, Jianan Wang1
1School of Performing Arts, Sichuan University of Media and Communications, Chengdu, Sichuan, 610000, China

Abstract

In this study, a hybrid CNN-BLSTM model integrating biomechanical feature extraction and graph theory is proposed as the core of computer vision technology for the recognition of emotion dynamics in dance body language. Through the Euler angle matrix transformation and de-rotation and de-translation process, biomechanical features such as joint position, bone angle, and human body orientation are quantified, and a force effect parameter system including lightness and smoothness is constructed. The synergistic mechanism between movement learning and emotional expression is explained from the perspective of cognitive psychology by combining movement concept and schema theory. The experiments are based on DanceDB and FolkDance datasets, and the CNNBLSTM model with deep and shallow feature fusion is used for validation. The results show that the proposed model achieves an average recognition accuracy of 43.48% and 52.37% on DanceDB and FolkDance datasets, respectively, which is an improvement of 7.46%-12.20% compared with single-feature methods such as directional gradient and optical flow direction. In key frame extraction, the multimodal feature fusion strategy reduces the compression rate to 2.96% and improves the accuracy and F1 score to 95.54% and 91.87%, respectively. It is shown that the model significantly enhances the emotion resolution of complex dance movements through joint modeling of spatio-temporal features.
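The de-rotation and de-translation step mentioned above can be sketched as a skeleton-normalization routine: each frame is centered on a root joint (removing global translation) and rotated about the vertical axis so the hip line faces a canonical direction (removing global orientation). This is a minimal NumPy sketch under assumed conventions; the joint indices, the choice of y as the up axis, and the hip-based yaw estimate are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def normalize_skeleton(joints, root=0, left_hip=1, right_hip=2):
    """De-translate and de-rotate one skeleton frame.

    joints: (J, 3) array of 3D joint positions (hypothetical indexing:
    root/pelvis, left hip, right hip). The up axis is assumed to be y.
    """
    # De-translation: center the skeleton on the root joint.
    centered = joints - joints[root]

    # De-rotation: estimate body yaw from the hip axis in the x-z plane,
    # then rotate about y so the hips align with the +x direction.
    hip_vec = centered[right_hip] - centered[left_hip]
    theta = np.arctan2(hip_vec[2], hip_vec[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return centered @ R.T

# Toy frame: pelvis at (0, 1, 0), hips offset diagonally in the x-z plane.
frame = np.array([[0.0, 1.0, 0.0],    # root (pelvis)
                  [-0.1, 1.0, 0.1],   # left hip
                  [0.1, 1.0, -0.1]])  # right hip
norm = normalize_skeleton(frame)
# After normalization the root sits at the origin and the hip axis
# lies along +x, so downstream features become pose-invariant.
```

With this normalization applied per frame, quantities such as bone angles and the lightness/smoothness force-effect parameters can be computed independently of where the dancer stands or which way they face.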