A generative whole-brain segmentation model for positron emission tomography images.

Impact factor: 3.0 · CAS Tier 2 (Medicine) · JCR Q2 · Radiology, Nuclear Medicine & Medical Imaging
Wenbo Li, Zhenxing Huang, Hongyan Tang, Yaping Wu, Yunlong Gao, Jing Qin, Jianmin Yuan, Yang Yang, Yan Zhang, Na Zhang, Hairong Zheng, Dong Liang, Meiyun Wang, Zhanli Hu
{"title":"A generative whole-brain segmentation model for positron emission tomography images.","authors":"Wenbo Li, Zhenxing Huang, Hongyan Tang, Yaping Wu, Yunlong Gao, Jing Qin, Jianmin Yuan, Yang Yang, Yan Zhang, Na Zhang, Hairong Zheng, Dong Liang, Meiyun Wang, Zhanli Hu","doi":"10.1186/s40658-025-00716-9","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Whole-brain segmentation via positron emission tomography (PET) imaging is crucial for advancing neuroscience research and clinical medicine, providing essential insights into biological metabolism and activity within different brain regions. However, the low resolution of PET images may have limited the segmentation accuracy of multiple brain structures. Therefore, we propose a generative multi-object segmentation model for brain PET images to achieve automatic and accurate segmentation.</p><p><strong>Methods: </strong>In this study, we propose a generative multi-object segmentation model for brain PET images with two learning protocols. First, we pretrained a latent mapping model to learn the mapping relationship between PET and MR images so that we could extract anatomical information of the brain. A 3D multi-object segmentation model was subsequently proposed to apply whole-brain segmentation to MR images generated from integrated latent mapping models. Moreover, a custom cross-attention module based on a cross-attention mechanism was constructed to effectively fuse the functional information and structural information. The proposed method was compared with various deep learning-based approaches in terms of the Dice similarity coefficient, Jaccard index, precision, and recall serving as evaluation metrics.</p><p><strong>Results: </strong>Experiments were conducted on real brain PET/MR images from 120 patients. Both visual and quantitative results indicate that our method outperforms the other comparison approaches, achieving 75.53% ± 4.26% Dice, 66.02% ± 4.55% Jaccard, 74.64% ± 4.15% recall and 81.40% ± 2.30% precision. Furthermore, the evaluation of the SUV distribution and correlation assessment in the regions of interest demonstrated consistency with the ground truth. Additionally, clinical tolerance rates, which are determined by the tumor background ratio, have confirmed the ability of the method to distinguish highly metabolic regions accurately from normal regions, reinforcing its clinical applicability.</p><p><strong>Conclusion: </strong>For automatic and accurate whole-brain segmentation, we propose a novel 3D generative multi-object segmentation model for brain PET images, which achieves superior model performance compared with other deep learning methods. In the future, we will apply our whole-brain segmentation method to clinical practice and extend it to other multimodal tasks.</p>","PeriodicalId":11559,"journal":{"name":"EJNMMI Physics","volume":"12 1","pages":"15"},"PeriodicalIF":3.0000,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11805735/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EJNMMI Physics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s40658-025-00716-9","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: Whole-brain segmentation via positron emission tomography (PET) imaging is crucial for advancing neuroscience research and clinical medicine, providing essential insights into biological metabolism and activity within different brain regions. However, the low resolution of PET images can limit the segmentation accuracy for multiple brain structures. We therefore propose a generative multi-object segmentation model for brain PET images to achieve automatic and accurate segmentation.

Methods: In this study, we propose a generative multi-object segmentation model for brain PET images with two learning protocols. First, we pretrained a latent mapping model to learn the mapping between PET and MR images so that anatomical information about the brain could be extracted. A 3D multi-object segmentation model was then applied to perform whole-brain segmentation on the MR images generated by the integrated latent mapping model. Moreover, a custom cross-attention module was constructed to effectively fuse the functional and structural information. The proposed method was compared with various deep learning-based approaches, using the Dice similarity coefficient, Jaccard index, precision, and recall as evaluation metrics.
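
The method thus combines a generated structural (MR-like) representation with the original functional PET signal through cross-attention. As a rough illustration only, the following PyTorch sketch shows one plausible way such a cross-attention fusion of PET-derived and MR-derived 3D feature volumes could be implemented; the class name, single-head design, and tensor shapes are assumptions for illustration, not the authors' actual module.

```python
# Minimal sketch of cross-attention fusion between PET (functional) and
# MR-derived (structural) 3D feature volumes. Illustrative only: the
# layout and naming are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # PET features supply the queries; MR-derived features supply the
        # keys and values, so structural context re-weights the functional map.
        self.to_q = nn.Linear(channels, channels)
        self.to_k = nn.Linear(channels, channels)
        self.to_v = nn.Linear(channels, channels)
        self.proj = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def forward(self, pet_feat: torch.Tensor, mr_feat: torch.Tensor) -> torch.Tensor:
        # pet_feat, mr_feat: (batch, channels, D, H, W) feature volumes,
        # typically at a downsampled resolution to keep attention tractable.
        b, c, d, h, w = pet_feat.shape
        q = self.to_q(pet_feat.flatten(2).transpose(1, 2))  # (b, N, c), N = D*H*W
        k = self.to_k(mr_feat.flatten(2).transpose(1, 2))
        v = self.to_v(mr_feat.flatten(2).transpose(1, 2))
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)  # (b, N, N)
        fused = self.proj(attn @ v)  # (b, N, c)
        # Residual connection preserves the original functional signal.
        return fused.transpose(1, 2).reshape(b, c, d, h, w) + pet_feat
```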

Results: Experiments were conducted on real brain PET/MR images from 120 patients. Both visual and quantitative results indicate that our method outperforms the comparison approaches, achieving 75.53% ± 4.26% Dice, 66.02% ± 4.55% Jaccard, 74.64% ± 4.15% recall, and 81.40% ± 2.30% precision. Furthermore, evaluation of the SUV distribution and correlation assessment in the regions of interest demonstrated consistency with the ground truth. Additionally, clinical tolerance rates, determined by the tumor-to-background ratio, confirmed that the method can accurately distinguish highly metabolic regions from normal regions, reinforcing its clinical applicability.
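
For clarity, the reported metrics can be computed per brain region from binary masks of the prediction and the ground truth, as in the short sketch below; this NumPy helper is an illustrative assumption rather than the authors' evaluation code, and the averaging over regions and subjects that yields the reported percentages is omitted.

```python
# Minimal sketch of the four reported metrics for one brain region,
# given binary prediction and ground-truth masks as NumPy arrays.
import numpy as np


def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, jaccard, precision, recall
```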

Conclusion: For automatic and accurate whole-brain segmentation, we propose a novel 3D generative multi-object segmentation model for brain PET images that achieves superior performance compared with other deep learning methods. In the future, we will apply our whole-brain segmentation method in clinical practice and extend it to other multimodal tasks.

Source journal: EJNMMI Physics (Physics and Astronomy - Radiation)
CiteScore: 6.70
Self-citation rate: 10.00%
Articles per year: 78
Review time: 13 weeks
Journal description: EJNMMI Physics is an international platform for scientists, users and adopters of nuclear medicine with a particular interest in physics matters. As a companion journal to the European Journal of Nuclear Medicine and Molecular Imaging, this journal has a multi-disciplinary approach and welcomes original materials and studies with a focus on applied physics and mathematics as well as imaging systems engineering and prototyping in nuclear medicine. This includes physics-driven approaches or algorithms supported by physics that foster early clinical adoption of nuclear medicine imaging and therapy.