As art and technology converge, artificial intelligence is finding increasingly broad application in music. This paper constructs a music generation model that uses the Transformer, a network architecture built on the self-attention mechanism, as its primary backbone. Musical style is measured with indicators such as chord histogram entropy, chord notes, the non-chord-note ratio, and style adaptability. Ten new MIDI pieces were selected as evaluation subjects to assess the Transformer model and examine its reliability. The results show that the proposed model outperforms competing models, which trail it by 3% to 15% on the musicality metrics, demonstrating the advantage of the Transformer-based music model.
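The abstract does not spell out how chord histogram entropy is computed; a minimal sketch of one common reading, Shannon entropy over a histogram of chord occurrences in a piece, might look like the following (the function name and the chord-label representation are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def chord_histogram_entropy(chords):
    """Shannon entropy (in bits) of the chord distribution of a piece.

    `chords` is a sequence of hashable chord labels, e.g. strings such
    as "Cmaj" or tuples of pitch classes (an assumed representation).
    Higher entropy indicates harmony spread more evenly across more
    distinct chords; lower entropy indicates repetitive harmony.
    """
    counts = Counter(chords)          # histogram of chord occurrences
    total = sum(counts.values())
    entropy = 0.0
    for count in counts.values():
        p = count / total             # empirical probability of this chord
        entropy -= p * math.log2(p)
    return entropy

# A piece cycling evenly through four chords has entropy log2(4) = 2 bits.
print(chord_histogram_entropy(["C", "G", "Am", "F"] * 8))  # -> 2.0
```

Under this reading, the metric rewards harmonic variety independent of piece length, since only the relative chord frequencies enter the computation.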