Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT
{"title":"Autopet III 挑战赛:将解剖学知识纳入 nnUNet,在 PET/CT 中进行病灶分割","authors":"Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold","doi":"arxiv-2409.12155","DOIUrl":null,"url":null,"abstract":"Lesion segmentation in PET/CT imaging is essential for precise tumor\ncharacterization, which supports personalized treatment planning and enhances\ndiagnostic precision in oncology. However, accurate manual segmentation of\nlesions is time-consuming and prone to inter-observer variability. Given the\nrising demand and clinical use of PET/CT, automated segmentation methods,\nparticularly deep-learning-based approaches, have become increasingly more\nrelevant. The autoPET III Challenge focuses on advancing automated segmentation\nof tumor lesions in PET/CT images in a multitracer multicenter setting,\naddressing the clinical need for quantitative, robust, and generalizable\nsolutions. Building on previous challenges, the third iteration of the autoPET\nchallenge introduces a more diverse dataset featuring two different tracers\n(FDG and PSMA) from two clinical centers. To this extent, we developed a\nclassifier that identifies the tracer of the given PET/CT based on the Maximum\nIntensity Projection of the PET scan. We trained two individual\nnnUNet-ensembles for each tracer where anatomical labels are included as a\nmulti-label task to enhance the model's performance. Our final submission\nachieves cross-validation Dice scores of 76.90% and 61.33% for the publicly\navailable FDG and PSMA datasets, respectively. The code is available at\nhttps://github.com/hakal104/autoPETIII/ .","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autopet III challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT\",\"authors\":\"Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold\",\"doi\":\"arxiv-2409.12155\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lesion segmentation in PET/CT imaging is essential for precise tumor\\ncharacterization, which supports personalized treatment planning and enhances\\ndiagnostic precision in oncology. However, accurate manual segmentation of\\nlesions is time-consuming and prone to inter-observer variability. Given the\\nrising demand and clinical use of PET/CT, automated segmentation methods,\\nparticularly deep-learning-based approaches, have become increasingly more\\nrelevant. The autoPET III Challenge focuses on advancing automated segmentation\\nof tumor lesions in PET/CT images in a multitracer multicenter setting,\\naddressing the clinical need for quantitative, robust, and generalizable\\nsolutions. Building on previous challenges, the third iteration of the autoPET\\nchallenge introduces a more diverse dataset featuring two different tracers\\n(FDG and PSMA) from two clinical centers. To this extent, we developed a\\nclassifier that identifies the tracer of the given PET/CT based on the Maximum\\nIntensity Projection of the PET scan. We trained two individual\\nnnUNet-ensembles for each tracer where anatomical labels are included as a\\nmulti-label task to enhance the model's performance. Our final submission\\nachieves cross-validation Dice scores of 76.90% and 61.33% for the publicly\\navailable FDG and PSMA datasets, respectively. 
The code is available at\\nhttps://github.com/hakal104/autoPETIII/ .\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.12155\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12155","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hamza Kalisch, Fabian Hörst, Ken Herrmann, Jens Kleesiek, Constantin Seibold
Lesion segmentation in PET/CT imaging is essential for precise tumor characterization, which supports personalized treatment planning and enhances diagnostic precision in oncology. However, accurate manual segmentation of lesions is time-consuming and prone to inter-observer variability. Given the rising demand and clinical use of PET/CT, automated segmentation methods, particularly deep-learning-based approaches, have become increasingly relevant. The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images in a multitracer, multicenter setting, addressing the clinical need for quantitative, robust, and generalizable solutions. Building on previous challenges, the third iteration of the autoPET challenge introduces a more diverse dataset featuring two different tracers (FDG and PSMA) from two clinical centers. To this end, we developed a classifier that identifies the tracer of a given PET/CT study based on the Maximum Intensity Projection (MIP) of the PET scan.
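The abstract does not specify the classifier itself, so the following is a minimal sketch of the MIP-based routing idea only: the coronal projection axis, the `maximum_intensity_projection` helper, and the small 2D CNN (`TracerClassifier`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def maximum_intensity_projection(pet_volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a 3D PET volume (z, y, x) into a 2D MIP.

    The projection axis is an assumption; any anatomical plane works as
    long as it is consistent between training and inference.
    """
    return pet_volume.max(axis=axis)

class TracerClassifier(nn.Module):
    """Illustrative 2D CNN mapping a single-channel MIP to FDG/PSMA logits."""

    def __init__(self, num_tracers: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling makes the head size-agnostic
        )
        self.head = nn.Linear(32, num_tracers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage: route a scan to the matching tracer-specific segmentation ensemble.
pet = np.random.rand(256, 200, 200).astype(np.float32)  # stand-in for a SUV volume
mip = maximum_intensity_projection(pet)
logits = TracerClassifier()(torch.from_numpy(mip)[None, None])  # (1, 1, H, W) input
tracer = ["FDG", "PSMA"][logits.argmax(dim=1).item()]
```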
We trained two separate nnUNet ensembles, one for each tracer, with anatomical labels included as a multi-label task to enhance the model's performance. Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% on the publicly available FDG and PSMA datasets, respectively. The code is available at https://github.com/hakal104/autoPETIII/ .
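As a rough illustration of what "anatomical labels as a multi-label task" can mean, the sketch below stacks the lesion mask with per-organ binary masks so that channels may overlap (a lesion voxel also stays inside its host organ), unlike the usual mutually exclusive multi-class targets. The `build_multilabel_target` helper, the integer organ-ID encoding, and the source of the anatomical labels are assumptions for illustration; the paper's exact label construction may differ.

```python
import numpy as np

def build_multilabel_target(lesion_mask: np.ndarray,
                            anatomy_labels: np.ndarray,
                            num_organs: int) -> np.ndarray:
    """Stack the binary lesion mask with per-organ binary masks into a
    multi-label target of shape (1 + num_organs, *volume_shape).

    Overlapping channels are the point: each channel gets its own
    sigmoid/BCE-style supervision instead of a single softmax.
    """
    channels = [lesion_mask.astype(np.float32)]
    for organ_id in range(1, num_organs + 1):  # 0 is assumed to be background
        channels.append((anatomy_labels == organ_id).astype(np.float32))
    return np.stack(channels, axis=0)

# Usage with hypothetical toy volumes:
lesion = np.zeros((64, 64, 64), dtype=np.uint8)
anatomy = np.random.randint(0, 5, size=(64, 64, 64))  # IDs 1..4 are organs
target = build_multilabel_target(lesion, anatomy, num_organs=4)
assert target.shape == (5, 64, 64, 64)
```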