Current deep learning-based multimodal medical image fusion algorithms typically rely on a single feature extractor to process images from different modalities. However, such approaches tend to overlook the distinctive characteristics of each modality, resulting in feature loss. In addition, applying overly complex network structures to low-level image-processing tasks wastes computational resources. We therefore design an end-to-end multimodal fusion network with a dual-encoder, single-decoder structure that resembles the letter ‘W’, which we term WMFusion. Specifically, we first develop a multi-scale context dynamic feature extractor (MCDFE) that employs context-gated convolution to effectively extract multi-scale features from different modalities. We then propose a local-global feature fusion module (LGFM) to fuse features across scales, with a cross-modality bidirectional interaction structure in its local branch. Finally, a spatial channel reconstruction module (SCRM), built on a spatial and channel reconstruction unit, suppresses feature redundancy and reconstructs the fused image. Extensive experimental results demonstrate that the proposed WMFusion outperforms several state-of-the-art algorithms in both subjective and objective evaluations while maintaining satisfactory computational efficiency.
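To make the ‘W’-shaped topology concrete, the following is a minimal PyTorch sketch of the dual-encoder, single-decoder dataflow described above: two modality-specific encoders, per-scale fusion, and one shared decoder. The internals are plain-convolution placeholders standing in for MCDFE, LGFM, and SCRM, and the single-channel CT/MRI inputs and channel widths are assumptions for illustration only, not the paper's actual modules or configuration.

```python
# Hypothetical sketch of a W-shaped dual-encoder / single-decoder fusion network.
# Plain convolutions stand in for the MCDFE, LGFM, and SCRM modules of the paper.
import torch
import torch.nn as nn


class EncoderStage(nn.Module):
    """Stand-in for one per-modality feature-extraction stage (MCDFE placeholder)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                   nn.ReLU(inplace=True))
        self.down = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)

    def forward(self, x):
        f = self.block(x)           # features kept for fusion at this scale
        return f, self.down(f)      # downsampled features feed the next stage


class FuseStage(nn.Module):
    """Stand-in for per-scale cross-modality fusion (LGFM placeholder)."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, fa, fb):
        return self.fuse(torch.cat([fa, fb], dim=1))


class WShapedFusionNet(nn.Module):
    """Two modality-specific encoders + one shared decoder ('W' topology)."""
    def __init__(self, chs=(16, 32, 64)):
        super().__init__()
        self.enc_a = nn.ModuleList(
            [EncoderStage(1 if i == 0 else chs[i - 1], c) for i, c in enumerate(chs)])
        self.enc_b = nn.ModuleList(
            [EncoderStage(1 if i == 0 else chs[i - 1], c) for i, c in enumerate(chs)])
        self.fuse = nn.ModuleList([FuseStage(c) for c in chs])
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2)
             for i in range(len(chs) - 1, 0, -1)])
        self.head = nn.Conv2d(chs[0], 1, 3, padding=1)   # reconstructs the fused image

    def forward(self, xa, xb):
        feats = []
        for stage_a, stage_b, fuse in zip(self.enc_a, self.enc_b, self.fuse):
            fa, xa = stage_a(xa)
            fb, xb = stage_b(xb)
            feats.append(fuse(fa, fb))      # fuse the two modalities at each scale
        y = feats[-1]
        for up, skip in zip(self.up, reversed(feats[:-1])):
            y = up(y) + skip                # decoder merges coarse-to-fine fused features
        return torch.sigmoid(self.head(y))


# Example: fuse a single-channel 256x256 image pair (e.g., CT and MRI slices).
ct, mri = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
fused = WShapedFusionNet()(ct, mri)
print(fused.shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch the two encoders form the outer strokes of the ‘W’ and the shared decoder forms the middle one; the per-scale fusion points are where modality-specific features meet before reconstruction.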