Biomedical Optics Express, Vol. 16, No. 8, pp. 3378-3394. Published 2025-07-25 (eCollection 2025-08-01). DOI: 10.1364/BOE.562137. Authors: Yao Liu, Wujie Chen, Zhen-Li Huang, ZhengXia Wang. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12339351/pdf/
Unsupervised cross-modal biomedical image fusion framework with dual-path detail enhancement and global context awareness.
Fluorescence imaging and phase-contrast imaging are two important imaging techniques in molecular biology research. Green fluorescent protein images can locate high-intensity protein regions in Arabidopsis cells, while phase-contrast images provide information on cellular structures. The fusion of these two types of images facilitates protein localization and interaction studies. However, traditional multimodal optical imaging systems have complex optical components and cumbersome operations. Although deep learning has provided new solutions for multimodal image fusion, existing methods are usually based on convolution operations, which have limitations such as ignoring long-range contextual information and losing detailed information. To address these limitations, we propose an unsupervised cross-modal biomedical image fusion framework, called UCBFusion. First, we design a dual-branch feature extraction module to retain the local detail information of each modality and prevent the loss of texture details during convolution operations. Second, we introduce a context-aware attention fusion module to enhance the ability to extract global features and establish long-range relationships. Lastly, our framework adopts an interactive parallel architecture to achieve the interactive fusion of local and global information. Experimental results on Arabidopsis thaliana datasets and other image fusion tasks indicate that UCBFusion achieves superior fusion results compared with state-of-the-art algorithms, in terms of performance and generalization ability across different types of datasets. This study provides a crucial driving force for the development of Arabidopsis thaliana research.
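The architecture described in the abstract combines modality-specific detail extraction with a global, attention-style fusion stage. The snippet below is an illustrative toy sketch of that idea, not the authors' actual UCBFusion network: `detail_branch` stands in for a dual-branch local-detail extractor (here a simple high-pass filter), and `attention_fuse` stands in for the context-aware fusion module (here softmax weights derived from each feature map's global mean activation). All function names and design details are hypothetical simplifications.

```python
import numpy as np

def detail_branch(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Toy local-detail extractor: 2D convolution with a high-pass kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    pad = kh // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def attention_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Toy global-context fusion: softmax weights from global mean activation."""
    ga, gb = np.abs(feat_a).mean(), np.abs(feat_b).mean()
    ea, eb = np.exp(ga), np.exp(gb)
    wa, wb = ea / (ea + eb), eb / (ea + eb)
    return wa * feat_a + wb * feat_b

# Fuse a fluorescence-like and a phase-contrast-like toy image.
rng = np.random.default_rng(0)
fluor = rng.random((16, 16))
phase = rng.random((16, 16))
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
fused = attention_fuse(fluor + detail_branch(fluor, laplacian),
                       phase + detail_branch(phase, laplacian))
print(fused.shape)  # (16, 16)
```

In this sketch, adding the high-pass response back to each input mimics detail enhancement before fusion, and the softmax weighting gives the globally more active modality a larger share of the fused output; the real framework replaces both steps with learned convolutional branches and attention layers.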
Journal introduction:
The journal's scope encompasses fundamental research, technology development, biomedical studies, and clinical applications. BOEx focuses on leading-edge topics in the field, including:
Tissue optics and spectroscopy
Novel microscopies
Optical coherence tomography
Diffuse and fluorescence tomography
Photoacoustic and multimodal imaging
Molecular imaging and therapies
Nanophotonic biosensing
Optical biophysics/photobiology
Microfluidic optical devices
Vision research.