Abdella M. Ahmed, Levi Madden, Maegan Stewart, Brian V.Y. Chow, Adam Mylonas, Ryan Brown, Gabrielle Metz, Meegan Shepherd, Carlito Coronel, Leigh Ambrose, Alex Turk, Maiko Crispin, Andrew Kneebone, George Hruby, Paul Keall, Jeremy T. Booth
{"title":"在kv引导放射治疗中,患者特异性深度学习跟踪实时二维胰腺定位","authors":"Abdella M. Ahmed , Levi Madden , Maegan Stewart , Brian V.Y. Chow , Adam Mylonas , Ryan Brown , Gabrielle Metz , Meegan Shepherd , Carlito Coronel , Leigh Ambrose , Alex Turk , Maiko Crispin , Andrew Kneebone , George Hruby , Paul Keall , Jeremy T. Booth","doi":"10.1016/j.phro.2025.100794","DOIUrl":null,"url":null,"abstract":"<div><h3>Background and purpose</h3><div>In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging-guidance for gated SBRT has shown potential for improved local control. Visualisation of pancreas (and surrounding organs) remains challenging in intra-fraction kilo-voltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross-tumour-volume (GTV), pancreas-head and the whole-pancreas in intra-fraction kV images.</div></div><div><h3>Materials and methods</h3><div>Conditional-generative-adversarial-networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally-reconstructed-radiographs (DRRs) were generated from contoured planning-computed-tomography (CTs) (CT-DRRs) and cone-beam-CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale-breath-hold. Model predictions on unseen triggered-kV images from the corresponding six patients were evaluated against projected-contours using Dice-Similarity-Coefficient (DSC), centroid-error (CE), average Hausdorff-distance (AHD), and Hausdorff-distance at 95th-percentile (HD95).</div></div><div><h3>Results</h3><div>The mean ± 1SD (standard-deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-model predicted contours within 2.0 mm ≥90.3 % of the time, while HD95 was within 5.0 mm ≥90.0 % of the time, and had a prediction time of 29.2 ± 3.7 ms per contour.</div></div><div><h3>Conclusion</h3><div>The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with 90th-percentile error ≤2.0 mm, indicating the potential for clinical real-time application.</div></div>","PeriodicalId":36850,"journal":{"name":"Physics and Imaging in Radiation Oncology","volume":"35 ","pages":"Article 100794"},"PeriodicalIF":3.3000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy\",\"authors\":\"Abdella M. Ahmed , Levi Madden , Maegan Stewart , Brian V.Y. Chow , Adam Mylonas , Ryan Brown , Gabrielle Metz , Meegan Shepherd , Carlito Coronel , Leigh Ambrose , Alex Turk , Maiko Crispin , Andrew Kneebone , George Hruby , Paul Keall , Jeremy T. Booth\",\"doi\":\"10.1016/j.phro.2025.100794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background and purpose</h3><div>In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. 
Intra-fraction tracking with magnetic resonance imaging-guidance for gated SBRT has shown potential for improved local control. Visualisation of pancreas (and surrounding organs) remains challenging in intra-fraction kilo-voltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross-tumour-volume (GTV), pancreas-head and the whole-pancreas in intra-fraction kV images.</div></div><div><h3>Materials and methods</h3><div>Conditional-generative-adversarial-networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally-reconstructed-radiographs (DRRs) were generated from contoured planning-computed-tomography (CTs) (CT-DRRs) and cone-beam-CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale-breath-hold. Model predictions on unseen triggered-kV images from the corresponding six patients were evaluated against projected-contours using Dice-Similarity-Coefficient (DSC), centroid-error (CE), average Hausdorff-distance (AHD), and Hausdorff-distance at 95th-percentile (HD95).</div></div><div><h3>Results</h3><div>The mean ± 1SD (standard-deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-model predicted contours within 2.0 mm ≥90.3 % of the time, while HD95 was within 5.0 mm ≥90.0 % of the time, and had a prediction time of 29.2 ± 3.7 ms per contour.</div></div><div><h3>Conclusion</h3><div>The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with 90th-percentile error ≤2.0 mm, indicating the potential for clinical real-time application.</div></div>\",\"PeriodicalId\":36850,\"journal\":{\"name\":\"Physics and Imaging in Radiation Oncology\",\"volume\":\"35 \",\"pages\":\"Article 100794\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2025-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Physics and Imaging in Radiation Oncology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2405631625000995\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics and Imaging in Radiation Oncology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2405631625000995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy
Background and purpose
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilovoltage (kV) imaging, requiring implanted fiducials. In this study, we investigated patient-specific deep learning approaches to track the gross tumour volume (GTV), the pancreas head, and the whole pancreas in intra-fraction kV images.
Materials and methods
Conditional generative adversarial networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning computed tomography scans (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and 95th-percentile Hausdorff distance (HD95).
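As a concrete illustration of the four evaluation metrics named above, the sketch below computes DSC, CE, AHD, and HD95 between a predicted and a reference 2D binary mask using NumPy and SciPy. This is not the authors' implementation: the function names, the assumption of isotropic pixel spacing, and the particular symmetric definitions chosen for AHD and HD95 are illustrative assumptions.

import numpy as np
from scipy.ndimage import binary_erosion, center_of_mass
from scipy.spatial.distance import cdist


def surface_points(mask: np.ndarray) -> np.ndarray:
    """Return (row, col) coordinates of the boundary pixels of a 2D binary mask."""
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary)


def contour_metrics(pred: np.ndarray, ref: np.ndarray, spacing_mm: float = 1.0) -> dict:
    """Compute DSC, CE, AHD and HD95 for two 2D masks, assuming isotropic pixel spacing."""
    pred, ref = pred.astype(bool), ref.astype(bool)

    # Dice similarity coefficient: overlap of the two masks.
    dsc = 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())

    # Centroid error: distance between the masks' centres of mass, in mm.
    ce = np.linalg.norm(np.subtract(center_of_mass(pred), center_of_mass(ref))) * spacing_mm

    # Directed surface-to-surface distances between boundary point sets, in mm.
    d = cdist(surface_points(pred), surface_points(ref)) * spacing_mm
    d_pred_to_ref, d_ref_to_pred = d.min(axis=1), d.min(axis=0)

    # Symmetric average Hausdorff distance and 95th-percentile Hausdorff distance
    # (one common convention; the study may use a different one).
    ahd = 0.5 * (d_pred_to_ref.mean() + d_ref_to_pred.mean())
    hd95 = max(np.percentile(d_pred_to_ref, 95), np.percentile(d_ref_to_pred, 95))

    return {"DSC": dsc, "CE_mm": ce, "AHD_mm": ahd, "HD95_mm": hd95}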
Results
The mean ± 1 SD (standard deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-models predicted contours within 2.0 mm ≥90.3 % of the time, while HD95 was within 5.0 mm ≥90.0 % of the time; the prediction time was 29.2 ± 3.7 ms per contour.
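To make the reporting convention above concrete, the short sketch below shows how a "within-tolerance percentage" and a mean ± SD prediction time could be summarised from per-image errors and per-contour inference times. The arrays are synthetic placeholders, not the study's data.

import numpy as np

# Synthetic placeholders standing in for per-image geometric errors (mm) and
# per-contour inference times (ms); real values would come from the models.
rng = np.random.default_rng(0)
errors_mm = rng.gamma(shape=2.0, scale=0.5, size=500)
latency_ms = rng.normal(loc=29.2, scale=3.7, size=500)

# Percentage of contours whose error falls within a 2.0 mm tolerance.
within_2mm = 100.0 * np.mean(errors_mm <= 2.0)

print(f"Within 2.0 mm: {within_2mm:.1f} % of contours")
print(f"Prediction time: {latency_ms.mean():.1f} ± {latency_ms.std(ddof=1):.1f} ms per contour")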
Conclusion
The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with a 90th-percentile error ≤2.0 mm, indicating potential for real-time clinical application.