The high-level attention mechanism enhances object detection by focusing on salient features and fine details, making it a promising tool for tumor segmentation; however, its efficiency and effectiveness in this context remain uncertain. This study investigates the feasibility, efficiency, and effectiveness of integrating a high-level attention mechanism into the U-Net and U-Net++ models to improve tumor segmentation. Experiments were conducted on U-Net and U-Net++ models augmented with high-level attention mechanisms to compare their performance. The proposed model incorporates high-level attention mechanisms in the encoder, the decoder, and the skip connections. Model training and validation were performed using T1, T1ce, T2, and FLAIR MR images from the BraTS2018 and BraTS2019 datasets. To further evaluate the model's effectiveness, testing was conducted on the UPenn-GBM dataset provided by the Center for Biomedical Image Computing and Analysis at the University of Pennsylvania. The segmentation accuracy of the high-level attention U-Net++ was evaluated using the Dice score, achieving values of 88.68 (ET), 89.71 (TC), and 91.50 (WT) on the BraTS2019 dataset and 90.93 (ET), 92.79 (TC), and 93.77 (WT) on the UPenn-GBM dataset. These results demonstrate that U-Net++ integrated with the high-level attention mechanism achieves higher brain tumor segmentation accuracy than the baseline models, and experiments on comparable and challenging datasets highlight the superior performance of the proposed approach. Furthermore, the proposed model shows promising potential to generalize to other datasets and use cases, making it a viable tool for broader medical imaging applications.
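For reference, the Dice score used for evaluation above can be sketched with a minimal NumPy implementation; the function name, the 0–100 scale, and the toy masks below are illustrative assumptions, not the authors' actual evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks, on a 0-100 scale.

    Illustrative sketch: assumes binary (foreground/background) masks for a
    single tumor sub-region such as ET, TC, or WT.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 100.0 * (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 4 predicted voxels and 4 ground-truth voxels, 3 overlapping
pred = np.zeros((4, 4), dtype=bool)
pred[0, :] = True                      # predicted foreground: row 0
true = np.zeros((4, 4), dtype=bool)
true[0, 1:] = True                     # ground truth: row 0 except first voxel
true[1, 0] = True                      # plus one voxel the prediction misses
print(round(dice_score(pred, true), 2))  # 2*3 / (4+4) * 100 -> 75.0
```

In practice the metric is computed per case and per sub-region (ET, TC, WT) and then averaged over the test set, which is how the per-region values reported in the abstract would be obtained.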