Accurate Whole-Brain Segmentation for Bimodal PET/MR Images via a Cross-Attention Mechanism

IF 4.6 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Wenbo Li;Zhenxing Huang;Qiyang Zhang;Na Zhang;Wenjie Zhao;Yaping Wu;Jianmin Yuan;Yang Yang;Yan Zhang;Yongfeng Yang;Hairong Zheng;Dong Liang;Meiyun Wang;Zhanli Hu
DOI: 10.1109/TRPMS.2024.3413862
Journal: IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 9, no. 1, pp. 47-56
Published: 2024-06-13 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10556684/
Citations: 0

Abstract

The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most of the current methods for segmenting the brain are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate segmentation of the whole brain while incorporating functional and anatomical information. To leverage dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Moreover, several deep-learning methods were employed as comparison measures to evaluate the model performance, with the dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated our advantages in whole-brain segmentation, achieving an 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, which were better than those comparative methods. In addition, consistent and correlated analyses based on segmentation results also demonstrated that our approach achieved superior performance. In future work, we will try to apply our method to other multimodal tasks, such as PET/CT data analysis.
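The core idea named in the title — cross-attention between the two modality streams — can be illustrated with a minimal sketch of scaled dot-product attention in which queries come from one modality (here labeled MR) and keys/values from the other (PET). All names, dimensions, and toy values below are illustrative assumptions; the paper's actual 3-D module is more involved.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: queries from one modality,
    keys and values from the other. Returns one attended vector per query."""
    d = len(kv_feats[0])
    out = []
    for q in q_feats:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in kv_feats]
        w = softmax(scores)
        # Output is the attention-weighted mix of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, kv_feats))
                    for j in range(d)])
    return out

mr  = [[1.0, 0.0], [0.0, 1.0]]   # toy "MR" features (queries) -- illustrative
pet = [[1.0, 1.0], [0.0, 2.0]]   # toy "PET" features (keys = values) -- illustrative
fused = cross_attention(mr, pet)
```

Because the attention weights are a convex combination, each fused vector lies inside the span of the PET value vectors — this is the sense in which the module lets one modality's features be re-expressed in terms of the other's.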
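The four quantitative metrics reported in the abstract (DSC, Jaccard index, recall, precision) have standard definitions over binary segmentation masks. The sketch below, with illustrative toy masks not taken from the paper, shows how each is computed from true/false positives and negatives.

```python
def seg_metrics(pred, truth):
    """DSC, Jaccard (JAC), recall, and precision for flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    # Guard the degenerate case where both masks are empty.
    dsc = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    jac = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    return dsc, jac, recall, precision

pred  = [1, 1, 0, 1, 0, 0, 1, 0]   # toy predicted mask
truth = [1, 0, 0, 1, 1, 0, 1, 0]   # toy ground-truth mask
print(seg_metrics(pred, truth))    # → (0.75, 0.6, 0.75, 0.75)
```

Note that DSC and Jaccard are monotonically related (DSC = 2·JAC / (1 + JAC)), which is why the paper's 85.35% DSC and 77.22% JAC rank methods consistently.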
Source journal: IEEE Transactions on Radiation and Plasma Medical Sciences (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 8.00
Self-citation rate: 18.20%
Articles per year: 109