{"title":"基于交叉注意机制的PET/MR双峰图像全脑准确分割","authors":"Wenbo Li;Zhenxing Huang;Qiyang Zhang;Na Zhang;Wenjie Zhao;Yaping Wu;Jianmin Yuan;Yang Yang;Yan Zhang;Yongfeng Yang;Hairong Zheng;Dong Liang;Meiyun Wang;Zhanli Hu","doi":"10.1109/TRPMS.2024.3413862","DOIUrl":null,"url":null,"abstract":"The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most of the current methods for segmenting the brain are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate segmentation of the whole brain while incorporating functional and anatomical information. To leverage dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Moreover, several deep-learning methods were employed as comparison measures to evaluate the model performance, with the dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated our advantages in whole-brain segmentation, achieving an 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, which were better than those comparative methods. In addition, consistent and correlated analyses based on segmentation results also demonstrated that our approach achieved superior performance. In future work, we will try to apply our method to other multimodal tasks, such as PET/CT data analysis.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"47-56"},"PeriodicalIF":4.6000,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Accurate Whole-Brain Segmentation for Bimodal PET/MR Images via a Cross-Attention Mechanism\",\"authors\":\"Wenbo Li;Zhenxing Huang;Qiyang Zhang;Na Zhang;Wenjie Zhao;Yaping Wu;Jianmin Yuan;Yang Yang;Yan Zhang;Yongfeng Yang;Hairong Zheng;Dong Liang;Meiyun Wang;Zhanli Hu\",\"doi\":\"10.1109/TRPMS.2024.3413862\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most of the current methods for segmenting the brain are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate segmentation of the whole brain while incorporating functional and anatomical information. To leverage dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Moreover, several deep-learning methods were employed as comparison measures to evaluate the model performance, with the dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated our advantages in whole-brain segmentation, achieving an 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, which were better than those comparative methods. 
In addition, consistent and correlated analyses based on segmentation results also demonstrated that our approach achieved superior performance. In future work, we will try to apply our method to other multimodal tasks, such as PET/CT data analysis.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":\"9 1\",\"pages\":\"47-56\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10556684/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10556684/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Accurate Whole-Brain Segmentation for Bimodal PET/MR Images via a Cross-Attention Mechanism
The PET/MRI system plays a significant role in the functional and anatomical quantification of the brain, providing accurate diagnostic data for a variety of brain disorders. However, most current methods for segmenting the brain are based on unimodal MRI and rarely combine structural and functional dual-modality information. Therefore, we aimed to employ deep-learning techniques to achieve automatic and accurate segmentation of the whole brain while incorporating both functional and anatomical information. To leverage the dual-modality information, a novel 3-D network with a cross-attention module was proposed to capture the correlation between dual-modality features and improve segmentation accuracy. Moreover, several deep-learning methods were employed as baselines to evaluate the model performance, with the Dice similarity coefficient (DSC), Jaccard index (JAC), recall, and precision serving as quantitative metrics. Experimental results demonstrated the advantages of our approach in whole-brain segmentation, achieving 85.35% DSC, 77.22% JAC, 88.86% recall, and 84.81% precision, all better than those of the comparative methods. In addition, consistency and correlation analyses based on the segmentation results also demonstrated that our approach achieved superior performance. In future work, we will apply our method to other multimodal tasks, such as PET/CT data analysis.
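The abstract describes a 3-D network with a cross-attention module that correlates PET and MR features, but the exact architecture is not given here. Below is a minimal sketch of one plausible 3-D cross-attention block in PyTorch, in which MR features supply the queries and PET features supply the keys and values; this pairing, the class name CrossAttention3D, and all hyperparameters are assumptions for illustration, not the authors' design.

```python
# Hypothetical 3-D cross-attention block (illustrative only, not the paper's module).
import torch
import torch.nn as nn

class CrossAttention3D(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(channels)
        self.norm_kv = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feat_mr: torch.Tensor, feat_pet: torch.Tensor) -> torch.Tensor:
        # feat_mr, feat_pet: (B, C, D, H, W) volumetric feature maps from the two branches.
        b, c, d, h, w = feat_mr.shape
        q = feat_mr.flatten(2).transpose(1, 2)    # (B, D*H*W, C): queries from MR features
        kv = feat_pet.flatten(2).transpose(1, 2)  # (B, D*H*W, C): keys/values from PET features
        kv = self.norm_kv(kv)
        out, _ = self.attn(self.norm_q(q), kv, kv)
        out = out.transpose(1, 2).reshape(b, c, d, h, w)
        return feat_mr + out                      # residual fusion of the two modalities
```

Flattening the spatial dimensions before attention keeps the block applicable at any feature-map resolution, at the cost of memory that grows quadratically with the number of voxels, so such a module is usually applied at a downsampled stage of the encoder.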
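The four reported metrics (DSC, JAC, recall, precision) have standard definitions in terms of voxel-wise true positives, false positives, and false negatives. The following is a minimal NumPy sketch for binary masks, not the authors' evaluation code; for whole-brain parcellation with multiple labels, these values would typically be computed per region and then averaged.

```python
# Standard overlap metrics for binary segmentation masks (illustrative sketch).
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Return (DSC, Jaccard, recall, precision) for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn + eps)    # Dice similarity coefficient
    jac = tp / (tp + fp + fn + eps)            # Jaccard index (IoU)
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    return dsc, jac, recall, precision

# Example: per-label evaluation of a multi-class whole-brain segmentation.
# pred_labels and gt_labels are integer label volumes; label 0 is background.
# scores = [segmentation_metrics(pred_labels == k, gt_labels == k)
#           for k in np.unique(gt_labels) if k != 0]
```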