An investigation into the development of musical styles using deep generative modeling

By: Hongzhi Zeng 1
1 Nanjing Academy of Music, Communication University of China, Nanjing 211199, China

Abstract

Deep generative models hold considerable promise for music production and style modeling, given the rapid advancement of artificial intelligence technologies. However, several obstacles remain in properly capturing and analyzing the principles of music style evolution with deep generative models. Based on deep generative modeling, this study investigates the dynamic process of music style change and develops a generative framework that can capture the characteristics of music style evolution. To address the limitations of deep generative models in handling complex time series and multimodal music data, an optimization scheme based on the attention mechanism and multiscale modeling is proposed to improve generation quality and the interpretability of style evolution. Experimental results demonstrate that the proposed model successfully represents the time-series evolution characteristics of musical styles: in terms of stylistic consistency, sound diversity, and rationality of evolution, the generated music substantially surpasses existing approaches.
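As a rough illustration of how an attention mechanism can be combined with multiscale temporal modeling for music sequences, the sketch below applies self-attention at a fine (frame-level) and a coarse (pooled, bar-like) scale and fuses the two. This is only a hedged, hypothetical example, not the paper's actual architecture; the class name MultiScaleAttention and parameters such as d_model and pool_factor are illustrative assumptions.

```python
# Hypothetical sketch: self-attention over two temporal scales of a music
# feature sequence, then fusion. Not the paper's architecture.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, d_model=128, n_heads=4, pool_factor=4):
        super().__init__()
        # Fine scale: attention over every time step (e.g., note/frame level).
        self.fine_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Coarse scale: attention over a temporally pooled sequence (e.g., bar level).
        self.pool = nn.AvgPool1d(kernel_size=pool_factor, stride=pool_factor)
        self.coarse_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x):                       # x: (batch, time, d_model)
        fine, _ = self.fine_attn(x, x, x)       # fine-scale self-attention
        pooled = self.pool(x.transpose(1, 2)).transpose(1, 2)
        coarse, _ = self.coarse_attn(pooled, pooled, pooled)
        # Upsample coarse features back to the fine time resolution.
        coarse_up = torch.repeat_interleave(
            coarse, repeats=x.size(1) // coarse.size(1), dim=1
        )[:, : x.size(1), :]
        return self.fuse(torch.cat([fine, coarse_up], dim=-1))

if __name__ == "__main__":
    block = MultiScaleAttention()
    seq = torch.randn(2, 64, 128)               # toy batch of music feature sequences
    print(block(seq).shape)                     # torch.Size([2, 64, 128])
```

The two-scale fusion is one simple way to let the model attend to both local note-level detail and longer-range structure, which is the kind of time-series behavior the abstract associates with style evolution.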