With the continuous development of artificial intelligence technology, deep neural networks show great potential in the field of intelligent music creation. In this paper, we first extract the CQT and Mel spectral features of music and use a WaveNet decoder to deform and fill the two-phase information, realizing overall style transfer of emotional music. We then design a music emotion representation model that integrates the Plutchik and Thayer emotion models and devise a fusion method for the bimodal emotion results, on the basis of which rhythmic control and tonal conditions are introduced to generate music containing multiple emotions. The proposed model effectively merges audio tracks, with the average style transfer intensity for music of the same style reaching 0.80 or above. It also accurately expresses negative and positive emotions and transforms them into emotional music representations, achieving a music quality score of 4.1. This work adds scientific supporting theory to the field of intelligent composition research.
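
As a brief illustration of the front end described above, the sketch below shows how CQT and Mel spectral features could be extracted from an audio file. This is not the paper's implementation; it is a minimal example using the librosa library, with the file path, sampling rate, and number of Mel bands chosen as placeholder values.

```python
import librosa
import numpy as np

def extract_features(path, sr=22050, n_mels=128):
    """Return CQT magnitude and log-Mel spectrogram features for one audio file."""
    y, sr = librosa.load(path, sr=sr)
    # Constant-Q transform: magnitude spectrogram on a log-frequency axis
    cqt = np.abs(librosa.cqt(y, sr=sr))
    # Mel spectrogram, converted to decibels for a perceptually scaled representation
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    return cqt, mel_db

# Example usage (placeholder path):
# cqt_feat, mel_feat = extract_features("example.wav")
```

In a full pipeline such as the one outlined in the abstract, these two feature maps would then be passed to the decoder stage for reconstruction and style transfer.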