AutoPET Challenge III: Testing the Robustness of Generalized Dice Focal Loss trained 3D Residual UNet for FDG and PSMA Lesion Segmentation from Whole-Body PET/CT Images

Shadab Ahamed
arXiv:2409.10151 · arXiv - PHYS - Medical Physics · Published 2024-09-16 (Journal Article)
Citations: 0

Abstract

Automated segmentation of cancerous lesions in PET/CT scans is a crucial first step in quantitative image analysis. However, training deep learning models for segmentation with high accuracy is particularly challenging due to the variations in lesion size, shape, and radiotracer uptake. These lesions can appear in different parts of the body, often near healthy organs that also exhibit considerable uptake, making the task even more complex. As a result, creating an effective segmentation model for routine PET/CT image analysis is challenging. In this study, we utilized a 3D Residual UNet model and employed the Generalized Dice Focal Loss function to train the model on the AutoPET Challenge 2024 dataset. We conducted a 5-fold cross-validation and used an average ensembling technique over the models from the five folds. In the preliminary test phase for Task-1, the average ensemble achieved a mean Dice Similarity Coefficient (DSC) of 0.6687, a mean false negative volume (FNV) of 10.9522 ml, and a mean false positive volume (FPV) of 2.9684 ml. More details about the algorithm can be found on our GitHub repository: https://github.com/ahxmeds/autosegnet2024.git. The training code has been shared via the repository: https://github.com/ahxmeds/autopet2024.git.
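The two ingredients of the evaluation described above — averaging the five folds' probability maps into one prediction, and scoring it with DSC, FNV, and FPV — can be illustrated with a short NumPy sketch. This is a minimal illustration under simplifying assumptions, not the challenge's official evaluation code: the AutoPET evaluation computes FNV and FPV over connected components of lesions, whereas the version below uses plain voxel-wise definitions, and all function names are hypothetical.

```python
import numpy as np


def ensemble_average(prob_maps, threshold=0.5):
    """Average per-fold probability maps, then threshold to a binary mask."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)


def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0


def false_negative_volume_ml(pred, gt, voxel_volume_ml):
    """Volume (ml) of ground-truth lesion voxels the prediction missed.

    Simplified voxel-wise variant; the challenge metric counts whole
    ground-truth components with no predicted overlap.
    """
    return float(np.logical_and(gt == 1, pred == 0).sum()) * voxel_volume_ml


def false_positive_volume_ml(pred, gt, voxel_volume_ml):
    """Volume (ml) of predicted lesion voxels outside the ground truth.

    Simplified voxel-wise variant of the challenge's component-based FPV.
    """
    return float(np.logical_and(pred == 1, gt == 0).sum()) * voxel_volume_ml
```

Converting voxel counts to milliliters only requires the scan's voxel spacing (`voxel_volume_ml` = product of the spacings in mm³ divided by 1000); averaging soft probabilities before thresholding, rather than majority-voting hard masks, is what makes this an "average ensemble" of the five folds.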