This paper proposes a music composition model based on a neural-network optimization algorithm, integrating a genetic algorithm with an improved BP neural network to make music composition more intelligent and efficient. The limited diversity of traditional methods is addressed by a multidimensional coding strategy (12-bit encodings in octal, quaternary, and octal for scales, registers, and beats, respectively), combined with genetic operators that dynamically optimize melodic segments. For generating counterpoint voices in polyphonic music, the elastic (resilient) gradient descent method is introduced to improve the BP network, overcoming the traditional algorithm's slow convergence and its tendency to become trapped in local extrema. Experiments on the LakhMIDI and MUT datasets, compared against RNN, LSTM, Seq2Seq, and other models, show that the similarity between the generated music and the database waveforms reaches 86.74%, and that chord-rule matching is significantly consistent. In manual evaluation, the model leads across the board, scoring 4.23 for fluency, 4.49 for consistency, and 4.57 for rhythm, with an average score of 4.34. Evaluation of music-theory features shows a note repetition rate of 18.77% and a style matching rate of 91.48%, both better than the benchmark models. The study shows that, by jointly optimizing the coding strategy and the network structure, the model significantly improves the automation and artistry of music generation.
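As a hedged illustration of the multidimensional coding strategy described above: the abstract specifies only that each melodic segment uses 12-bit octal, quaternary, and octal encodings for scale, register, and beat; the gene layout, fitness function, and operator choices below are assumptions, not the paper's exact method.

```python
import random

# Assumed gene layout: each gene is a (scale, register, beat) triple with
# scale in 0..7 (octal), register in 0..3 (quaternary), beat in 0..7 (octal).
SEGMENT_LEN = 12  # 12 genes per melodic segment (assumption)

def random_note():
    return (random.randrange(8), random.randrange(4), random.randrange(8))

def random_segment():
    return [random_note() for _ in range(SEGMENT_LEN)]

def crossover(a, b):
    """Single-point crossover producing two child segments."""
    point = random.randrange(1, SEGMENT_LEN)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(segment, rate=0.1):
    """Replace each gene with a fresh random note with probability `rate`."""
    return [random_note() if random.random() < rate else g for g in segment]

def evolve(population, fitness, generations=50):
    """Toy generational GA: truncation selection + crossover + mutation."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(population) // 2]
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            c1, c2 = crossover(a, b)
            children.extend([mutate(c1), mutate(c2)])
        population = children[: len(population)]
    return max(population, key=fitness)
```

A fitness function rewarding, say, stepwise melodic motion can then drive `evolve` toward smoother segments; the paper's actual fitness criteria are not given in the abstract.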
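The "elastic gradient descent" used to improve the BP network most likely refers to resilient backpropagation (Rprop), which adapts a per-weight step size from gradient sign changes rather than gradient magnitude, and so avoids the slow convergence of fixed-rate backpropagation. A minimal sketch under that assumption, simplified to omit Rprop's backtracking step:

```python
import numpy as np

def rprop_minimize(grad, w0, steps=200, eta_plus=1.2, eta_minus=0.5,
                   step_init=0.1, step_min=1e-6, step_max=1.0):
    """Simplified Rprop sketch: each weight's step size grows while its
    gradient keeps the same sign and shrinks when the sign flips; only the
    sign of the gradient, not its magnitude, determines the update."""
    w = np.asarray(w0, dtype=float)
    step = np.full_like(w, step_init)
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        same = prev_g * g                      # >0: same sign, <0: sign flip
        step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
        w = w - np.sign(g) * step              # move against the gradient sign
        prev_g = g
    return w

# Usage on a toy quadratic f(w) = sum((w - 3)^2), gradient 2 * (w - 3):
w_star = rprop_minimize(lambda w: 2 * (w - 3), np.array([0.0, 10.0]))
```

Because step sizes halve on every sign flip, the iterate brackets the minimum and contracts toward it quickly, which is the property the paper relies on to escape slow, oscillating convergence.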