Prior-guided automatic delineation of post-radiotherapy gross tumor volume for esophageal cancer.
Hongfei Sun, Ziqi An, Wei Huang, Qifeng Wang, Yufen Liu, Zihan Shi, Jie Li, Fan Meng, Jie Gong, Lina Zhao
Medical Physics 52(10): e70005, 2025-10-01. DOI: 10.1002/mp.70005

Background: Integrating post-radiotherapy (RT) CT into longitudinal esophageal cancer (EC) response models substantially improves predictive accuracy. However, manual delineation of the gross tumor volume (GTV) on post-RT CT is labor-intensive and time-consuming.
Purpose: We propose a novel deep learning framework that integrates two medical physics priors (pre-RT GTV contours and radiotherapy dose distributions) to automatically delineate the post-RT GTV.
Methods: A multicenter retrospective cohort of 294 EC patients (225 training, 45 internal validation, 24 external validation) was assembled. Pre-RT CT scans, GTV contours, and dose maps were co-registered and cropped to 256 × 256. We implemented an nnU-Net v2 backbone that incorporates the high-dose-region and pre-RT GTV priors via element-wise multiplication and element-wise addition to guide feature extraction. Performance was evaluated with anatomical metrics (Dice, IoU, HD95, ASSD, precision, recall) and radiomics analyses (ICC, Pearson correlation, LASSO-Cox, C-index) on the internal and external cohorts.
Results: In cross-validation, the optimal fold achieved DSC = 0.7809 ± 0.1310, IoU = 0.6486 ± 0.1507, HD95 = 3.6321 ± 2.0942, and ASSD = 1.9673 ± 1.0352 (p < 0.0167 vs. state-of-the-art models). Ablation studies showed that combining both priors outperformed single-prior and no-prior models (internal: DSC = 0.7723 ± 0.1290; external: DSC = 0.7545 ± 0.1058). Radiomic features extracted from the automated contours were highly reproducible (78.6% with ICC > 0.75), agreed strongly with manually derived features (R > 0.8), and yielded comparable prognostic performance (nonsignificant C-index difference).
Conclusion: By embedding medical physics priors into a self-configuring nnU-Net v2, our method achieves accurate and robust automated delineation of the post-RT GTV in EC across multiple centers. This approach has the potential to facilitate the construction of treatment-response prediction models.
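The anatomical overlap metrics reported above (Dice and IoU) reduce to simple set arithmetic on binary masks. A minimal sketch, illustrative only and not the authors' evaluation code:

```python
import numpy as np

def dice_iou(pred: np.ndarray, ref: np.ndarray):
    """Dice coefficient and IoU between two binary masks (1 = GTV voxel)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * inter / (pred.sum() + ref.sum())
    iou = inter / union
    return float(dice), float(iou)
```

HD95 and ASSD additionally require surface-distance computations, which is why they are usually taken from a dedicated metrics library rather than written by hand.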
Three-dimensional proton FLASH dose rate measurement at high spatiotemporal resolution using a novel multi-layer strip ionization chamber (MLSIC) device.
Shuang Zhou, Arash Darafsheh, Zhiyan Xiao, Anthony Mascia, Yongbing Zhang, Jun Zhou, Liyong Lin, David Zhang, Liuxing Shen, Hao Jiang, Qinghao Chen, Tianyu Zhao, Stephanie Perkins, Tiezhi Zhang
Medical Physics 52(10): e70033, 2025-10-01. DOI: 10.1002/mp.70033

Background: Proton therapy is currently the main radiation treatment modality that can treat deeply seated targets at ultra-high dose rates. Safe clinical translation of FLASH RT requires dedicated dosimeters capable of measurements at sufficiently high spatiotemporal resolution.
Purpose: To demonstrate the feasibility of three-dimensional (3D) measurements of dose and dose rate for FLASH pencil beam scanning (PBS) proton therapy.
Methods: A multi-layer strip ionization chamber (MLSIC) device, together with a reconstruction algorithm, was designed and developed to reconstruct dose and dose rate distributions over a 3D volume. The MLSIC comprises 66 layers of strip ionization chamber arrays with a total water-equivalent thickness (WET) of 19.2 cm along the beam direction. The first two layers, each with 128 channels oriented orthogonally to one another, provide the (x, y) coordinates; the remaining 64 layers contain 32 channels each at 8 mm lateral spacing. High-speed data readout at 6250 fps allows spot-by-spot measurement. As a proof of concept, PBS proton plans were delivered at conventional and FLASH dose rates, and dose and dose rate were reconstructed in 3D using an in-house algorithm.
Results: Ion recombination remained under 1% in the majority of cases. The reconstructed 3D dose agreed with the treatment planning software: 3D gamma passing rates (10% threshold) were 96.2% (5 mm/5%) and 86.8% (3 mm/3%) for the conventional-dose-rate plan, and 99.1% (5 mm/5%) and 92.6% (3 mm/3%) for the FLASH plan. 3D dose rate distributions were successfully generated under different dose rate definitions.
Conclusions: The MLSIC device yields 3D dose and dose rate distributions of PBS proton beams at FLASH dose rates with high spatiotemporal resolution.
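Several dose rate definitions exist for scanned proton beams; one widely used choice is the dose-averaged dose rate (DADR), which weights each spot's instantaneous rate by the dose it deposits at the voxel. A minimal per-voxel sketch under that definition; the input arrays are hypothetical, not the MLSIC pipeline:

```python
import numpy as np

def dose_averaged_dose_rate(spot_doses, spot_rates):
    """Dose-averaged dose rate at one voxel: spot i deposits dose d_i (Gy)
    at instantaneous rate r_i (Gy/s); DADR = sum(d_i * r_i) / sum(d_i)."""
    d = np.asarray(spot_doses, float)
    r = np.asarray(spot_rates, float)
    return float((d * r).sum() / d.sum())
```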
An edge-enhanced 3D Mamba U-Net for pediatric brain tumor segmentation with transfer learning.
Xiaoyan Sun, Wenhan He, Jianing Ruan, Zhenming Yuan, Zhexian Sun, Jian Zhang
Medical Physics 52(10): e70002, 2025-10-01. DOI: 10.1002/mp.70002

Background: Pediatric gliomas, particularly high-grade subtypes, are highly aggressive tumors with low survival rates, and their segmentation remains challenging because of distinct imaging characteristics and data scarcity. While deep learning models perform well on adult glioma segmentation, they struggle with pediatric gliomas, particularly in complex regions such as the tumor core (TC) and enhancing tumor (ET).
Purpose: To address the dual challenges of complex tumor morphology and limited pediatric data in MRI-based pediatric brain tumor segmentation.
Methods: An edge-enhanced 3D Mamba U-Net combined with transfer learning was proposed. The network integrates U-Net's multi-scale feature extraction with Mamba's global dependency modeling, augmented by a Mamba residual (MR) block. An edge enhancement (EE) module embedded in the skip-connection layers refines boundary detection and captures local features in small pediatric tumor regions. Finally, a non-encoder fine-tuning (NEF) strategy adapts the pretrained adult model to pediatric data by updating only the final reconstruction stage while preserving learned representations. The model was pretrained on the BraTS 2021 dataset (1251 adult glioma training cases) and fine-tuned on the BraTS-PEDs 2023 dataset (99 pediatric glioma cases, split 7:1:2 for training, validation, and testing).
Results: On BraTS-PEDs 2023, the method achieved average Dice scores of 0.8917 (whole tumor, WT), 0.8557 (TC), and 0.6365 (ET), with corresponding Hausdorff distances of 3.82, 5.14, and 3.53. It outperformed the baseline and the existing pediatric glioma segmentation approaches included in our experiments.
Conclusions: The 3D Mamba U-Net with transfer learning and edge enhancement effectively alleviates the challenges of complex tumor boundaries and small sample sizes in pediatric glioma segmentation.
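The abstract does not specify the internals of the EE module; a Sobel gradient-magnitude map is one plausible edge cue of the kind such a module could inject at skip connections. This 2D NumPy sketch is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map via 3x3 Sobel kernels with edge padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)
```

In a real network the edge map would be concatenated with (or gated onto) the skip-connection features rather than computed on the raw image alone.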
Dual-branch guided multi-scale half-instance normalization network for low-dose CT image denoising.
Jielin Jiang, Chaochao Ge, Shun Wei, Yan Cui
Medical Physics 52(10): e70046, 2025-10-01. DOI: 10.1002/mp.70046

Background: Low-dose computed tomography (LDCT) image denoising is a critical area of medical image processing. Compared with normal-dose CT, LDCT has attracted significant attention because its lower radiation dose reduces harm to the patient; however, the dose reduction introduces noise that compromises diagnostic accuracy.
Purpose: To develop an efficient LDCT denoising model that exploits adjacent-frame image information, attends to both local image detail and global structure, and keeps inference time suitable for clinical application.
Methods: We propose a dual-branch guided multi-scale half-instance normalization network (DGMINet) for LDCT image denoising. Adjacent-frame CT images are used to assist denoising: an adjacent-frame image assistance module within a dual-branch guided structure fuses the features of neighboring LDCT frames with those of the current frame, enhancing the representation of important regions, restoring missing structural edges, and retaining local detail. The fused features then pass through a multi-scale half-instance normalization module, which captures multi-scale features using convolution kernels of varying sizes and adjusts the statistical properties of features at each scale through instance normalization. The network employs the Charbonnier loss to preserve structural edges and texture. Together, these components enable DGMINet to separate noise from the underlying clean image, significantly improving denoising performance.
Results: DGMINet outperforms existing state-of-the-art denoising methods. On the AAPM dataset, relative to the LDCT input, PSNR, SSIM, and FSIM improved by 4.61 dB, 0.0544, and 0.0171, respectively, and RMSE decreased by 5.95. On the real-world Piglet dataset, DGMINet also denoised well at four different dose levels. Visually, it surpasses other methods in detail preservation and noise removal, and it maintains competitive inference times.
Conclusions: DGMINet achieves significant improvements in LDCT image denoising, removing noise while preserving crucial image detail. Its performance and relatively efficient inference time highlight its potential for real-world clinical application.
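The Charbonnier loss mentioned above is a smooth, differentiable L1-like penalty that is gentler than L2 around edges. A minimal sketch (the `eps` default is illustrative, not the paper's value):

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: mean of sqrt(diff^2 + eps^2).
    Behaves like |diff| for large errors, stays smooth near zero."""
    diff = np.asarray(pred, float) - np.asarray(target, float)
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))
```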
AAPM CT metal artifact reduction grand challenge.
Eri Haneda, Nils Peters, Jiayong Zhang, Grigorios Karageorgos, Wenjun Xia, Harald Paganetti, Ge Wang, Yi Guo, Jianhua Ma, Hyoung Suk Park, Kiwan Jeon, Fuxin Fan, Mareike Thies, Bruno De Man
Medical Physics 52(10): e70050, 2025-10-01. DOI: 10.1002/mp.70050

Background: Metal artifact reduction (MAR) is a long-standing challenge in CT imaging. Highly attenuating objects such as dental fillings, hip prostheses, spinal screws and rods, and gold fiducial markers can introduce severe streak artifacts that often reduce diagnostic value. Existing CT MAR studies typically define their own test cases and evaluation metrics, making it difficult to objectively and comprehensively compare methods; a universal CT MAR image quality benchmark is widely needed to evaluate the clinical impact of new methods against the state of the art.
Purpose: The AAPM CT Metal Artifact Reduction (CT-MAR) grand challenge aimed to create and distribute a clinically representative 2D MAR performance benchmark and to invite participants to objectively compare their MAR methods against it. A secondary goal was to facilitate MAR development by disseminating a training database and tools; both remain publicly accessible after the challenge for future development and benchmarking.
Methods: Participants submitted results from their MAR algorithms. The organizers provided 14,000 CT training datasets generated with a hybrid simulation framework that combined real patient images (lung, abdomen, liver, head, and pelvis) with virtual metal objects. Each training dataset included five data types: CT sinograms (uncorrected and metal-free), reconstructed images (uncorrected and metal-free), and metal masks. In the final evaluation phase, 29 clinical uncorrected datasets with metal were provided in both the sinogram and image domains; results were scored with eight clinically relevant image quality metrics, and the final ranking was compared against an established normalized metal artifact reduction (NMAR) reference method. A survey was also conducted to better understand participants' methodologies.
Results: A total of 106 teams registered, and 26 completed all phases. Of these, 92%, including all top-ten teams, used a deep learning (DL) approach, with architectures spanning U-Net, ResNet, GANs, diffusion models, and transformers; 22%, including the top three teams, combined sinogram- and image-domain approaches. Scores were broadly distributed, reflecting diverse methods and a wide range of results, including some truly exceptional ones. More than 70% of the teams achieved a better overall score than the popular NMAR baseline.
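As context for the NMAR reference method: NMAR builds on classic linear-interpolation MAR, which replaces metal-corrupted detector bins in each projection view by interpolating between the nearest clean neighbors. A simplified sketch of that baseline idea, not challenge code:

```python
import numpy as np

def li_mar(sino: np.ndarray, metal_trace: np.ndarray) -> np.ndarray:
    """Linear-interpolation MAR. sino: (views, bins) sinogram;
    metal_trace: same-shape 0/1 mask of metal-corrupted bins.
    Corrupted bins are filled by 1D interpolation along each view."""
    out = sino.astype(float).copy()
    bins = np.arange(sino.shape[1])
    for v in range(sino.shape[0]):
        bad = metal_trace[v].astype(bool)
        if bad.any() and not bad.all():
            out[v, bad] = np.interp(bins[bad], bins[~bad], out[v, ~bad])
    return out
```

NMAR improves on this by normalizing the sinogram with a forward-projected prior image before interpolating, which reduces the new artifacts that plain interpolation introduces.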
Dual-[¹⁸FMISO + ¹⁸FLT] PET/CT and MRI imaging in glioblastoma.
Sadek A Nehmeh, Chang Cui, Rajiv Magge, Theodore H Schwartz, Jazmin Schwartz, Benjamin Liechty, Phelipi Schuck, Stefaan Guhlke, William Calimag, Ramon F Barajas, Dan Kadrmas, Howard Fine, Jana Ivanidze
Medical Physics 52(10): e18124, 2025-10-01. DOI: 10.1002/mp.18124

Background: Tumor hypoxia and proliferation are independent predictors of poor prognosis in glioblastoma and WHO grade 4 IDH-mutant astrocytoma; the two hallmarks are closely linked and can synergistically contribute to local recurrence (LR) and poor overall survival (OS). Both can be imaged with FMISO and FLT PET, but conventionally only on different days, owing to PET's intrinsic inability to distinguish co-administered tracers, which jeopardizes the clinical feasibility and accuracy of multi-parametric studies.
Purpose: To assess the feasibility of dual-[FMISO+FLT] PET in a cohort of patients with glioblastoma and WHO grade 4 IDH-mutant astrocytoma.
Methods: Eight patients underwent 90-min dynamic PET (dynPET) with staggered FMISO/FLT injections, followed by two 10-min scans at 120 and 180 min post-FMISO injection. The target volume (TV) was delineated on the 180-min image set. The FMISO input function (IF) was derived from dynPET images of the carotids over the first 50 min and extrapolated to the rest of the scan with a three-exponential fit. IF_FLT was deduced by subtracting IF_FMISO from IF_FMISO+FLT over the range t > 50 min. The FMISO and FLT kinetic rate constants (KRCs) of the TV and the cerebellar cortex (reference tissue) were estimated by kinetic modeling (KM) with a parallel dual 1-tissue-2-compartment model.
Results: Seven of eight patients, with a total of 13 lesions, completed the study. All lesions were [FMISO+FLT]-avid at 180 min post-FMISO injection, with a mean SUVR of 1.72 (range: 1.26-3.23). IDH-mutant WHO grade 4 astrocytomas showed reduced tumor hypoxia. Mean lesion KRCs were K1,FMISO = 0.18 mL/cc/min (range: 0.042-0.432), ki,FMISO = 0.011 min⁻¹ (range: 0.00-0.039), K1,FLT = 0.103 mL/cc/min (range: 0.004-0.357), and Ki,FLT = 0.014 mL/min/g (range: 0.00-0.062). Cerebellar cortex KRCs were K1,FMISO = 0.098 mL/cc/min (range: 0.055-0.225), ki,FMISO = 0.008 min⁻¹ (range: 0.002-0.014), K1,FLT = 0.089 mL/cc/min (range: 0.001-0.299), and Ki,FLT = 0.003 mL/min/g (range: 0.00-0.007). Lesion perfusion and hypoxia were inversely correlated (R = 0.99).
Conclusions: Dual-[FMISO+FLT] PET can provide detailed characterization of the tumor microenvironment and of the interaction of multiple hallmarks that confer radio-resistance. This can improve the accuracy of image-guided radiosurgery and radiotherapy, thereby improving clinical outcomes in patients with glioblastoma and IDH-mutant WHO grade 4 astrocytoma.
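The staggered-injection decomposition and the SUVR summary above reduce to simple arithmetic on time-activity samples. An illustrative sketch; the array shapes and function names are assumptions, not the authors' pipeline:

```python
import numpy as np

def if_flt(if_total, if_fmiso_extrapolated):
    """Decompose the combined input function for t > 50 min:
    IF_FLT = IF_(FMISO+FLT) - extrapolated IF_FMISO."""
    return np.asarray(if_total, float) - np.asarray(if_fmiso_extrapolated, float)

def suvr(lesion_mean_suv, reference_mean_suv):
    """SUV ratio of lesion to reference tissue (cerebellar cortex here)."""
    return lesion_mean_suv / reference_mean_suv
```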
Points of interest linear attention network for real-time non-rigid liver volume to surface registration.
Zeming Chen, Beiji Zou, Xiaoyan Kui, Yangyang Shi, Ding Lv, Liming Chen
Medical Physics, published online 2024-05-17. DOI: 10.1002/mp.17108

Background: In laparoscopic liver surgery, accurately predicting the displacement of key intrahepatic anatomical structures is crucial for informing the surgeon's intraoperative decision-making. However, because the surgical view is constrained, only a partial surface of the liver is typically visible, so non-rigid volume-to-surface registration is essential; traditional registration methods lack the necessary accuracy and cannot meet real-time requirements.
Purpose: To achieve high-precision liver registration from only partial surface information and estimate the displacement of internal liver tissue in real time.
Methods: We propose a novel neural network architecture tailored for real-time non-rigid liver volume-to-surface registration. The network uses a voxel-based approach, integrating sparse convolution with the newly proposed points-of-interest (POI) linear attention module, which computes attention specifically on the previously extracted POI. We also identified RMSINorm as the most suitable normalization method.
Results: We evaluated the proposed network against other networks on a dataset generated from real liver models and on two real datasets. Our method achieves an average error of 4.23 mm and a mean frame rate of 65.4 fps on the generated dataset, and an average error of 8.29 mm on the human breathing-motion dataset.
Conclusions: Our network outperforms CNN-based networks and other attention networks in both accuracy and inference speed.
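The abstract does not give the POI linear attention formula; the generic linear attention it presumably builds on replaces the softmax with a positive feature map, so attention over N tokens costs O(N·d²) instead of O(N²·d). A generic sketch of that mechanism, not the paper's module:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Softmax-free attention with feature map phi = elu(x) + 1 > 0:
    out_i = phi(q_i) @ (phi(K)^T V) / (phi(q_i) @ sum_j phi(k_j)).
    Rows of the output are convex combinations of rows of V."""
    phi = lambda X: np.where(X > 0, X + 1.0, np.exp(X))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                  # (d, d_v), computed once
    z = Qp @ Kp.sum(axis=0)        # (n,) normalizers
    return (Qp @ kv) / z[:, None]
```

Restricting the query/key set to a small number of extracted POI, as the paper's name suggests, would shrink these matrices further and is one way such a module reaches 65+ fps.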
Fast four-dimensional cone-beam computed tomography reconstruction using deformable convolutional networks.
Zhuoran Jiang, Yushi Chang, Zeyu Zhang, Fang-Fang Yin, Lei Ren
Medical Physics 49(10): 6461-6476, 2022-10-01 (Epub 2022-06-22). DOI: 10.1002/mp.15806

Background: Although four-dimensional cone-beam computed tomography (4D-CBCT) is valuable for onboard image guidance in radiotherapy of moving targets, it requires a long acquisition time to achieve sufficient image quality for target localization. Reducing the 4D-CBCT scanning time while maintaining high-quality images is therefore highly desirable. Current motion-compensated methods are limited by slow speed and by compensation errors caused by severe intra-phase undersampling.
Purpose: To propose an alternative, feature-compensated method that realizes fast 4D-CBCT with high-quality images.
Methods: We proposed a feature-compensated deformable convolutional network (FeaCo-DCN) that performs inter-phase compensation in the latent feature space, which previous studies had not explored. In FeaCo-DCN, encoding networks extract features from each phase; features of the other phases are then deformed to those of the target phase via deformable convolutional networks; finally, a decoding network combines and decodes features from all phases to yield high-quality images of the target phase. FeaCo-DCN was evaluated using lung cancer patient data.
Results: (1) FeaCo-DCN generated high-quality images with accurate, clear structures from a fast 4D-CBCT scan; (2) images reconstructed by FeaCo-DCN achieved 3D tumor localization accuracy within 2.5 mm; (3) reconstruction is nearly real time; and (4) FeaCo-DCN outperformed the top-ranked techniques in the AAPM SPARE Challenge on all metrics.
Conclusion: FeaCo-DCN reconstructs 4D-CBCT effectively and efficiently while reducing scanning time by about 90%, which can be highly valuable for moving-target localization in image-guided radiotherapy.
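Before any per-phase reconstruction or compensation, 4D-CBCT sorts projections into respiratory phases. Assuming a known, regular breathing period (a simplification; clinical sorting uses a measured respiratory signal), the binning step is straightforward:

```python
import numpy as np

def phase_bin(times, period, n_phases=10):
    """Assign each projection timestamp (s) to a respiratory phase bin
    0..n_phases-1, assuming a regular breathing cycle of `period` seconds."""
    phase = (np.asarray(times, float) % period) / period   # in [0, 1)
    return np.minimum((phase * n_phases).astype(int), n_phases - 1)
```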
Automatic detection of contouring errors using convolutional neural networks.
Dong Joo Rhee, Carlos E Cardenas, Hesham Elhalawani, Rachel McCarroll, Lifei Zhang, Jinzhong Yang, Adam S Garden, Christine B Peterson, Beth M Beadle, Laurence E Court
Medical Physics 46(11): 5086-5097, 2019-11-01 (Epub 2019-09-26). DOI: 10.1002/mp.13814

Purpose: To develop a head-and-neck normal-structure autocontouring tool that can automatically detect errors in autocontours from a clinically validated autocontouring tool.
Methods: An autocontouring tool based on convolutional neural networks (CNN) was developed for 16 normal structures of the head and neck and tested on its ability to identify contour errors from a clinically validated multi-atlas-based autocontouring system (MACS). Computed tomography (CT) scans and clinical contours from 3495 patients were semiautomatically curated and used to train and validate the CNN-based tool. Final accuracy was evaluated by calculating Sørensen-Dice similarity coefficients (DSC) and Hausdorff distances between the automatically generated contours and physician-drawn contours on 174 internal and 24 external CT scans. Lastly, the CNN-based tool was evaluated on 60 patients' CT scans to investigate the detection of contouring failures, which were classified as either minor or major errors. The criteria for detecting contouring errors were determined by analyzing the DSC between the CNN- and MACS-based contours under two independent scenarios: (a) contours with minor errors are clinically acceptable, and (b) contours with minor errors are clinically unacceptable.
Results: The average DSC/Hausdorff distance of our CNN-based tool was 98.4%/1.23 cm for brain, 89.1%/0.42 cm for eyes, 86.8%/1.28 cm for mandible, 86.4%/0.88 cm for brainstem, 83.4%/0.71 cm for spinal cord, 82.7%/1.37 cm for parotids, 80.7%/1.08 cm for esophagus, 71.7%/0.39 cm for lenses, 68.6%/0.72 cm for optic nerves, 66.4%/0.46 cm for cochleas, and 40.7%/0.96 cm for optic chiasm. With the error detection tool, the proportion of clinically unacceptable MACS contours correctly detected averaged 0.99/0.80 (excluding the optic chiasm) under scenarios (a)/(b), respectively; the proportion of clinically acceptable MACS contours correctly detected averaged 0.81/0.60 (excluding the optic chiasm) under the same two scenarios.
Conclusion: Our CNN-based autocontouring tool performed well on both the publicly available and internal datasets. Furthermore, our results show that CNN-based algorithms can identify ill-defined contours from a clinically validated and clinically used multi-atlas-based autocontouring tool. The tool can therefore effectively perform automatic verification of MACS contours.
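The verification step described above reduces to per-structure DSC thresholding: a MACS contour is flagged when its agreement with the independent CNN contour falls below a structure-specific cutoff. A minimal sketch with invented threshold values; the paper's calibrated criteria are not reproduced here:

```python
def flag_contours(dsc_by_structure, thresholds):
    """Flag structures whose MACS-vs-CNN DSC falls below the per-structure
    threshold. Returns {structure: True if flagged for review}."""
    return {s: dsc < thresholds[s] for s, dsc in dsc_by_structure.items()}
```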