Machine Learning Based Sentiment Analysis of Social Network Users

By: Hao Zhang¹
¹Nanjing Forestry University, Nanjing, Jiangsu, 210037, China

Abstract

The large volume of multimodal data generated by social network platforms contains rich sentiment information, and effective analysis of user sentiment is of great value for public opinion monitoring, business decision-making, and user experience optimization. This paper proposes a cross-modal sentiment analysis model based on feature fusion: text sentiment features are extracted with BERT, image sentiment features with ResNet152, and a multi-head attention mechanism fuses the image and text information, while a feature-level fusion strategy exploits both inter-modal correlation and modality-independent information. Experiments on a public Twitter dataset show that the proposed feature-level fusion method improves accuracy by 2.53% and 1.46% over traditional feature concatenation and Transformer fusion, respectively, and reaches 79.48% accuracy on the MVSA-Single dataset, 2.7% higher than the popular DR-Transformer model. A simulation experiment on the Paris Olympics performs multi-source sentiment analysis of social network users and, in the three-source case of Weibo, WeChat, and Sohu News, obtains results consistent with the theoretical values, with a positive sentiment value of 8636.6 and a negative sentiment value of 2363.5, verifying the validity of the model in practical application scenarios. The study fully exploits the mutual enhancement between image and text modalities, addresses the limited accuracy of traditional single-modal sentiment analysis, and has significant theoretical value and application prospects for social media sentiment monitoring, public opinion analysis, and personalized recommendation systems.
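
To make the described architecture concrete, the following is a minimal sketch of such a feature-fusion model in PyTorch, assuming Hugging Face transformers and torchvision. The checkpoint name, hidden sizes, number of attention heads, and the three-way concatenation are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a BERT + ResNet152 cross-modal fusion classifier.
# Assumptions (not from the paper): bert-base-uncased checkpoint, 768-d
# hidden size, 8 attention heads, 3 sentiment classes.
import torch
import torch.nn as nn
from torchvision.models import resnet152
from transformers import BertModel


class CrossModalSentimentModel(nn.Module):
    def __init__(self, num_classes=3, hidden_dim=768, num_heads=8):
        super().__init__()
        # Text branch: BERT encoder producing token-level features.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Image branch: ResNet152 backbone with the classifier head removed.
        backbone = resnet152(weights="IMAGENET1K_V1")
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.image_proj = nn.Linear(2048, hidden_dim)
        # Multi-head attention: text tokens attend to the image feature.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                                batch_first=True)
        # Feature-level fusion: concatenate unimodal (independent) and
        # attended (correlated) representations before classification.
        self.classifier = nn.Linear(hidden_dim * 3, num_classes)

    def forward(self, input_ids, attention_mask, images):
        text_feats = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                                   # (B, L, 768)
        img_feats = self.image_encoder(images).flatten(1)      # (B, 2048)
        img_feats = self.image_proj(img_feats).unsqueeze(1)    # (B, 1, 768)
        # Cross-modal attention: queries from text, keys/values from image.
        fused, _ = self.cross_attn(text_feats, img_feats, img_feats)
        pooled = torch.cat(
            [text_feats.mean(dim=1), img_feats.squeeze(1), fused.mean(dim=1)],
            dim=-1,
        )
        return self.classifier(pooled)
```

In this sketch the text tokens act as attention queries over the projected image feature, and the fused representation is concatenated with the pooled unimodal features so that both inter-modal correlation and modality-independent information reach the classifier, in the spirit of the feature-level fusion strategy described above.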