Sentiment analysis and propagation trend modeling of social media video content based on frequency domain feature extraction and computational methods

By: Jingyu Zhang1, Kang An1
1Shanghai Documentary Academy, Shanghai University of Political Science and Law, Shanghai, 201701, China

Abstract

In this paper, a multimodal joint analysis method based on frequency domain feature extraction and deep learning is proposed. First, video frames are decomposed in the frequency domain and denoised via wavelet-transform thresholding, which improves image characterization by retaining low-frequency structural information alongside high-frequency detail features. Second, the RNN model is improved with a multi-head attention mechanism to realize emotion-semantic fusion across the visual and textual modalities. Finally, probabilistic clustering of propagation sequences with a Gaussian mixture model (GMM) is used to analyze the spatiotemporal evolution of opinion diffusion. Experimental results show that the proposed method achieves an image extraction precision of 92.59% on the VOT-2024 dataset and an F1 score of 84.29% for RNN-AM on the CMU-MOSI sentiment analysis task, outperforming existing mainstream models. Applied to the COVIDHATE dataset, the method successfully uncovers the sentiment relevance of video content and captures a 48-hour intervention window and a 7-day decay cycle in video dissemination, indicating that online public opinion interventions should be carried out within 48 hours to be most effective.
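The wavelet thresholding step described above can be illustrated with a minimal sketch. The paper does not specify its wavelet basis or threshold rule, so the following assumes a one-level Haar transform on a 1-D signal with soft thresholding of the high-frequency (detail) coefficients; the function names (`haar_decompose`, `denoise`, etc.) are hypothetical, not from the paper.

```python
import numpy as np

def haar_decompose(signal):
    # One-level Haar wavelet transform: split the signal into
    # low-frequency (approximation) and high-frequency (detail) parts.
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    # Inverse of haar_decompose (the transform is orthogonal).
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

def soft_threshold(coeffs, thresh):
    # Shrink detail coefficients toward zero: small (noise-dominated)
    # details are suppressed, large (structural) details are kept.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)

def denoise(signal, thresh=0.5):
    # Keep low-frequency content intact, threshold only the details,
    # mirroring the retain-low / denoise-high strategy in the abstract.
    approx, detail = haar_decompose(signal)
    return haar_reconstruct(approx, soft_threshold(detail, thresh))
```

On 2-D video frames the same idea applies per row/column (or via a 2-D wavelet library such as PyWavelets); because the Haar transform is orthogonal, shrinking the detail coefficients can only reduce the energy of zero-mean high-frequency noise while leaving the approximation band untouched.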