Language models provide a vehicle for delivering learning resources in students' English study. In this paper, the ZO-VRAGDA algorithm is designed to reduce the complexity of multi-task solving for English language models. By estimating the complexity of the task being processed, the agent is guided to decompose the task into multiple subtasks, and the efficiency and accuracy of task completion are improved by invoking appropriate tools and flagging error-prone points. The English language model is then introduced into the language classroom to recommend personalized learning resources to students and improve teaching quality. Experiments show that, across different numbers of neurons, the training time of the model built on the computational complexity analysis in this paper is 5.54 s–7.05 s and 698.53 s–1213.94 s on the two datasets, and 115 s and 2722 s across different numbers of iterations, outperforming the comparison models. When processing complex tasks of different types, perplexity is reduced to 41 in only 99.22 s, 104.21 s, and 97.91 s, and the similarity score is raised to 27 in only 113.53 s, 60.77 s, and 93.31 s.
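The abstract does not give implementation details of the complexity-guided decomposition, so the following is only a minimal illustrative sketch: the names `estimate_complexity`, `decompose`, and `COMPLEXITY_THRESHOLD` are hypothetical, and the complexity proxy (prompt length) stands in for whatever measure the paper actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical threshold above which a task is split into subtasks;
# the paper's actual complexity measure and cutoff are not specified here.
COMPLEXITY_THRESHOLD = 0.5


@dataclass
class Task:
    prompt: str
    subtasks: list = field(default_factory=list)


def estimate_complexity(task: Task) -> float:
    """Toy proxy for task complexity: longer prompts count as harder."""
    return len(task.prompt.split()) / 20.0


def decompose(task: Task) -> list:
    """Placeholder decomposition: split the prompt into sentence-level subtasks."""
    return [Task(p.strip()) for p in task.prompt.split(".") if p.strip()]


def solve(task: Task, model) -> str:
    """Complexity-guided solving: decompose hard tasks, answer easy ones directly."""
    if estimate_complexity(task) > COMPLEXITY_THRESHOLD:
        task.subtasks = decompose(task)
        return " ".join(solve(sub, model) for sub in task.subtasks)
    # `model` is any callable that maps a prompt string to an answer string.
    return model(task.prompt)


if __name__ == "__main__":
    echo_model = lambda prompt: f"[answer to: {prompt}]"
    query = "Summarize the passage. List three new words. Write one example sentence."
    print(solve(Task(query), echo_model))
```

In this sketch, the full query exceeds the threshold and is split into three sentence-level subtasks, each of which is simple enough to be answered directly; a real agent would replace the echo model with a language model call and add tool invocation and error-prone-point reminders at the subtask level.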