Upper-limb exoskeleton rehabilitation robots require precise, robust control systems that accurately interpret user motion intentions and deliver effective assistance. We propose a multi-sensor-fusion control system that integrates surface electromyography (sEMG) signals with data from angle, pressure, inertial, and torque sensors to enhance motion intention recognition. The proposed method follows a hierarchical pipeline of signal acquisition, preprocessing, feature extraction, and fusion, with machine-learning classification decoding user intentions. The fused sensor data compensate for the inherent limitations of sEMG signals, such as noise sensitivity and variability, thereby improving system reliability. The control strategy then translates classified intentions into exoskeleton commands, enabling seamless interaction between the user and the robotic device. The novelty of this work lies in the synergistic combination of heterogeneous sensor modalities, which collectively address the challenges of real-world rehabilitation scenarios. The results show that the system achieves high intention-recognition accuracy and responsive exoskeleton control, making it suitable for clinical and assistive applications. The significance of this approach lies in its potential to advance personalized rehabilitation, offering adaptable support tailored to individual user needs. This work contributes to the growing field of human-robot interaction by providing a scalable framework for intelligent exoskeleton control.
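To illustrate the feature-level fusion step described above, the following minimal sketch extracts standard time-domain sEMG features (mean absolute value, root mean square, zero crossings), concatenates them with auxiliary sensor readings, and feeds the fused vector to a classifier. The nearest-centroid classifier, the window length, and the specific feature set are illustrative assumptions, not the paper's actual method, which may use any machine-learning model (SVM, LDA, neural network, etc.).

```python
import math
import random

def semg_features(window):
    """Common time-domain sEMG features: mean absolute value (MAV),
    root mean square (RMS), and zero-crossing count (ZC)."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return [mav, rms, float(zc)]

def fuse(semg_window, aux_readings):
    """Feature-level fusion: concatenate sEMG features with auxiliary
    sensor readings (e.g. joint angle, pressure, IMU, torque)."""
    return semg_features(semg_window) + list(aux_readings)

class NearestCentroid:
    """Minimal stand-in classifier (assumption; the paper's pipeline
    could use any supervised model in its place)."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            s = sums.setdefault(yi, [0.0] * len(xi))
            for j, v in enumerate(xi):
                s[j] += v
            counts[yi] = counts.get(yi, 0) + 1
        self.centroids = {c: [v / counts[c] for v in s]
                          for c, s in sums.items()}
        return self

    def predict(self, x):
        # Label of the nearest class centroid (squared Euclidean distance).
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[c], x)))

# Train on synthetic windows for two hypothetical intentions.
random.seed(0)
flex = [[2.0 * math.sin(0.3 * i) + random.gauss(0, 0.1) for i in range(64)]
        for _ in range(5)]
rest = [[random.gauss(0, 0.05) for _ in range(64)] for _ in range(5)]
X = [fuse(w, [0.8, 1.2]) for w in flex] + [fuse(w, [0.1, 0.2]) for w in rest]
y = ["flexion"] * 5 + ["rest"] * 5
clf = NearestCentroid().fit(X, y)
```

In a real system, each fused vector would be computed from a sliding window of synchronized sensor streams, and the predicted class would be mapped to an exoskeleton command by the control strategy.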