In recent years, neural networks have been widely applied to machine translation, particularly to improve translation quality and semantic consistency. This paper proposes a translation system optimization model based on a dynamic computing method. The model adopts a dynamic reconfigurable binarized neural network (DRBNN) to improve the semantic consistency and translation quality of the translation system. Feature interaction layers and a grouped sparse regularization term are introduced to reduce the number of model parameters, and computational efficiency is further improved through quantization. In experiments, the model performs well on several English translation tasks: on the WMT14 English-to-German task it achieves an accuracy of 94.62% and an F1 score of 94.28%, and on the AI Challenger dataset an accuracy of 96.24% and an F1 score of 96.05%. The model's BLEU scores remain high across different data volumes and improve markedly as the amount of data grows. In addition, the model performs well in terms of semantic consistency, reducing TER by between 2.42 and 10.14 compared with traditional methods. The experimental results demonstrate that the proposed dynamic computing method effectively improves the overall performance and semantic consistency of the translation system.
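To make the two parameter-reduction ingredients mentioned above concrete, the sketch below shows (a) standard weight binarization with a per-tensor scaling factor, as used in binarized neural networks generally, and (b) a group-lasso-style grouped sparse penalty that pushes whole parameter groups toward zero. This is a minimal illustration of these generic techniques, not the paper's actual DRBNN implementation; the function names and the row-wise grouping are assumptions.

```python
import numpy as np

def binarize(w):
    """Binarize a weight tensor to {-alpha, +alpha}, where alpha is the
    mean absolute value of the weights (a common BNN scaling scheme).
    Zeros are mapped to +alpha so every entry is strictly binary."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(np.where(w == 0, 1.0, w))

def group_sparse_penalty(w, groups):
    """Grouped sparse (group-lasso) regularizer: the sum of L2 norms
    over the given parameter groups. Minimizing this term encourages
    entire groups (e.g. rows, channels) to become exactly zero."""
    return sum(np.linalg.norm(w[g]) for g in groups)

# Toy example (hypothetical values): binarize a small weight matrix and
# penalize it with one group per output row.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
wb = binarize(w)                                   # entries are +/- alpha
groups = [np.s_[i] for i in range(w.shape[0])]     # one group per row
penalty = group_sparse_penalty(w, groups)
```

In training, the penalty would be added to the translation loss with a weighting coefficient, and the binarized weights would replace the full-precision ones in the forward pass while gradients update the full-precision copies.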