Jing Di, Chan Liang, Li Ren, Wenqing Guo, Jizhao Liu, Jing Lian
{"title":"C2IENet:基于对比约束特征和信息交换的多分支医学影像融合","authors":"Jing Di, Chan Liang, Li Ren, Wenqing Guo, Jizhao Liu, Jing Lian","doi":"10.1007/s00530-024-01473-y","DOIUrl":null,"url":null,"abstract":"<p>In the field of medical image fusion, traditional approaches often fail to differentiate between the unique characteristics of each raw image, leading to fused images with compromised texture and structural clarity. Addressing this, we introduce an advanced multi-branch fusion method characterized by contrast-enhanced features and interactive information exchange. This method integrates a multi-scale residual module and a gradient-dense module within a private branch to precisely extract and enrich texture details from individual raw images. In parallel, a common feature extraction branch, equipped with an information interaction module, processes paired raw images to synergistically capture complementary and shared functional information across modalities. Additionally, we implement a sophisticated attention mechanism tailored for both the private and public branches to enhance global feature extraction, thereby significantly improving the contrast and contour definition of the fused image. A novel correlation consistency loss function further refines the fusion process by optimizing the information sharing between modalities, promoting the correlation among basic cross-modal features while minimizing the correlation of high-frequency details across different modalities. Objective evaluations demonstrate substantial improvements in indices such as EN, MI, QMI, SSIM, AG, SF, and <span>\\(\\text {Q}^{\\text {AB/F}}\\)</span>, with average increases of 23.67%, 12.35%, 4.22%, 20.81%, 8.96%, 6.38%, and 25.36%, respectively. These results underscore our method’s superiority in achieving enhanced texture detail and contrast in fused images compared to conventional algorithms, as validated by both subjective assessments and objective performance metrics.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"C2IENet: Multi-branch medical image fusion based on contrastive constraint features and information exchange\",\"authors\":\"Jing Di, Chan Liang, Li Ren, Wenqing Guo, Jizhao Liu, Jing Lian\",\"doi\":\"10.1007/s00530-024-01473-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In the field of medical image fusion, traditional approaches often fail to differentiate between the unique characteristics of each raw image, leading to fused images with compromised texture and structural clarity. Addressing this, we introduce an advanced multi-branch fusion method characterized by contrast-enhanced features and interactive information exchange. This method integrates a multi-scale residual module and a gradient-dense module within a private branch to precisely extract and enrich texture details from individual raw images. In parallel, a common feature extraction branch, equipped with an information interaction module, processes paired raw images to synergistically capture complementary and shared functional information across modalities. 
Additionally, we implement a sophisticated attention mechanism tailored for both the private and public branches to enhance global feature extraction, thereby significantly improving the contrast and contour definition of the fused image. A novel correlation consistency loss function further refines the fusion process by optimizing the information sharing between modalities, promoting the correlation among basic cross-modal features while minimizing the correlation of high-frequency details across different modalities. Objective evaluations demonstrate substantial improvements in indices such as EN, MI, QMI, SSIM, AG, SF, and <span>\\\\(\\\\text {Q}^{\\\\text {AB/F}}\\\\)</span>, with average increases of 23.67%, 12.35%, 4.22%, 20.81%, 8.96%, 6.38%, and 25.36%, respectively. These results underscore our method’s superiority in achieving enhanced texture detail and contrast in fused images compared to conventional algorithms, as validated by both subjective assessments and objective performance metrics.</p>\",\"PeriodicalId\":3,\"journal\":{\"name\":\"ACS Applied Electronic Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Electronic Materials\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00530-024-01473-y\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00530-024-01473-y","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
C2IENet: Multi-branch medical image fusion based on contrastive constraint features and information exchange
In medical image fusion, traditional approaches often fail to distinguish the unique characteristics of each source image, producing fused images with degraded texture and structural clarity. To address this, we introduce a multi-branch fusion method built on contrastive constraint features and interactive information exchange. The method integrates a multi-scale residual module and a gradient-dense module within a private branch to extract and enrich the texture details of each source image. In parallel, a common feature extraction branch, equipped with an information interaction module, processes the paired source images to capture complementary and shared functional information across modalities. We further apply an attention mechanism tailored to both the private and common branches to strengthen global feature extraction, improving the contrast and contour definition of the fused image. A correlation consistency loss refines the fusion process by optimizing information sharing between modalities: it promotes correlation among base cross-modal features while minimizing the correlation of high-frequency details across modalities. Objective evaluations show substantial improvements in EN, MI, QMI, SSIM, AG, SF, and \(Q^{\text{AB/F}}\), with average gains of 23.67%, 12.35%, 4.22%, 20.81%, 8.96%, 6.38%, and 25.36%, respectively. Both subjective assessment and these objective metrics confirm that the method yields richer texture detail and higher contrast than conventional algorithms.
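To make the described pipeline concrete, a minimal PyTorch skeleton of the multi-branch layout is sketched below: per-modality private branches for texture detail, a common branch that sees both inputs, attention over the concatenated features, and a fusion head. All module internals here are placeholder convolutions standing in for the paper's multi-scale residual, gradient-dense, interaction, and attention designs; names and shapes are our assumptions.

```python
# Hypothetical skeleton of the multi-branch layout from the abstract.
# Module internals are plain convolutions, NOT the paper's multi-scale
# residual / gradient-dense / information-interaction modules.
import torch
import torch.nn as nn

class C2IENetSketch(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        # Private branches: one per modality, for modality-specific detail.
        self.private_x = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.private_y = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        # Common branch: consumes both modalities to model shared content.
        self.common = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        # Channel attention as a stand-in for the paper's mechanism.
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(3 * ch, 3 * ch, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(3 * ch, 1, 3, padding=1)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        detail_x = self.private_x(x)                 # modality-specific texture
        detail_y = self.private_y(y)
        shared = self.common(torch.cat([x, y], 1))   # cross-modal shared features
        feats = torch.cat([detail_x, detail_y, shared], 1)
        return self.fuse(feats * self.attn(feats))   # attention-weighted fusion
```

A forward pass with two single-channel inputs, e.g. `C2IENetSketch()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))`, returns a fused single-channel map.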
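The correlation consistency loss is described only at a high level; the sketch below shows one plausible reading, using Pearson correlation over flattened feature maps to pull base (shared) features together and push high-frequency detail features apart. The function names, tensor shapes, and weighting are assumptions, not the authors' implementation.

```python
# Hypothetical correlation-consistency loss: maximize correlation of base
# (shared) features across modalities, minimize correlation of their
# high-frequency detail features. Shapes, names, and the Pearson
# formulation are assumptions, not the authors' released code.
import torch

def pearson_cc(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean Pearson correlation over the batch; inputs are (B, C, H, W)."""
    a = a.flatten(1) - a.flatten(1).mean(dim=1, keepdim=True)
    b = b.flatten(1) - b.flatten(1).mean(dim=1, keepdim=True)
    num = (a * b).sum(dim=1)
    den = a.norm(dim=1) * b.norm(dim=1) + eps
    return (num / den).mean()

def correlation_consistency_loss(base_x, base_y, detail_x, detail_y,
                                 w_base: float = 1.0, w_detail: float = 1.0):
    """Promote cross-modal correlation of base features (modalities x, y);
    suppress correlation between their high-frequency detail features."""
    loss_base = 1.0 - pearson_cc(base_x, base_y)        # pull shared content together
    loss_detail = pearson_cc(detail_x, detail_y).abs()  # push private detail apart
    return w_base * loss_base + w_detail * loss_detail
```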
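Several of the reported metrics have standard closed-form definitions. The NumPy sketch below computes EN (Shannon entropy), AG (average gradient), and SF (spatial frequency) under those common definitions; boundary handling and normalization conventions vary across papers, so this is illustrative rather than the evaluation code behind the reported numbers.

```python
# Standard definitions of three fusion metrics named in the abstract.
# Conventions (boundaries, normalization) vary between papers; this
# sketch is illustrative, not the authors' evaluation code.
import numpy as np

def entropy_en(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient_ag(img: np.ndarray) -> float:
    """Mean magnitude of horizontal/vertical forward differences."""
    img = img.astype(np.float64)
    gx = img[:, 1:] - img[:, :-1]                     # (H, W-1)
    gy = img[1:, :] - img[:-1, :]                     # (H-1, W)
    return float(np.mean(np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0)))

def spatial_frequency_sf(img: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2) from row/column first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))
```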