Improving pose accuracy and geometry in neural radiance field-based medical image synthesis.
Twaha Kabika, Cai Hongsen, Zhu Hongling, Dong Jingxian, Zhang Siyuan, Mingyue Ding, Deng Xianbo, Hou Wenguang, Wang Yan. Medical Physics, published 2025-04-14. DOI: 10.1002/mp.17832

Background: Neural radiance field (NeRF) models have garnered significant attention for their impressive ability to synthesize high-quality novel scene views from posed 2D images. Recently, the MedNeRF algorithm was developed to go further and render complete computed tomography (CT) projections from a single x-ray image or a few x-ray images. Despite this advancement, MedNeRF struggles with accurate pose reconstruction, which is crucial for radiologists during image analysis, leading to blurry geometry in the generated outputs.

Purpose: Motivated by these challenges, our research aims to address MedNeRF's limitations in pose accuracy and image clarity. Specifically, we seek to improve the pose accuracy of reconstructed images and enhance the anatomical detail and quality of the generated output.

Methods: We propose a novel pose-aware discriminator that estimates pose differences between generated and real patches, ensuring accurate poses and deeper anatomical structures in generated images. We enhance volumetric rendering from single-view x-rays by introducing a customized distortion-adaptive loss function, and we present HTDataset, a new dataset pair that better mimics machine-generated x-rays, offering clearer anatomical depictions with reduced noise.

Results: Our method successfully renders images with correct poses and high fidelity, outperforming existing state-of-the-art methods. The results demonstrate superior performance in both qualitative and quantitative metrics.

Conclusions: The proposed approach addresses the pose reconstruction challenge in MedNeRF, enhances anatomical detail, and reduces noise in generated images. The use of HTDataset and the innovative discriminator structure leads to significant improvements in the accuracy and quality of the rendered images, setting a new benchmark in the field.
{"title":"Advancing cardiac MRI multi-structure segmentation: A semi-supervised multidimensional consistency constraint learning network.","authors":"Hongzhen Cui, Meihua Piao, Xinghe Huang, Xiaoyue Zhu, Haoming Ma, Yunfeng Peng","doi":"10.1002/mp.17805","DOIUrl":"https://doi.org/10.1002/mp.17805","url":null,"abstract":"<p><strong>Background: </strong>Deep convolutional neural networks (DCNNs) have been proposed for medical Magnetic Resonance Imaging (MRI) segmentation, but their effectiveness is often limited by challenges in semantic discrimination, boundary delineation, and spatial context modeling.</p><p><strong>Purpose: </strong>To address these challenges, we present the Multidimensional Consistency Constraint Learning Network (MDCC-Net) for multi-structure segmentation of cardiac MRI using a semi-supervised approach.</p><p><strong>Methods: </strong>MDCC-Net incorporates a shared encoder, multiple differentiated decoders, and leverages pyramid boundary consistency features and spatial consistency constraints. The model employs mutual consistency constraints and pseudo-labels to enhance segmentation performance. Additionally, MDCC-Net uses a combination of Dice loss and mean squared error loss to facilitate convergence and improve accuracy.</p><p><strong>Results: </strong>Experiments on the ACDC cardiac MRI dataset demonstrate that MDCC-Net achieves state-of-the-art performance in multi-structure segmentation of the left ventricle (LV), myocardium (MYO), and right ventricle (RV). Specifically, MDCC-Net attained a Dice coefficient (Dice) of 0.8763 and a Jaccard index of 0.7906 on average. The right ventricle's Average Surface Distance (ASD) reached a best performance of 0.5391, and the left ventricle's Dice attained an optimal value of 0.8965. These results highlight the model's superior ability to utilize semi-supervised data through consistency and entropy minimization constraints. In addition, the generalization of MDCC-Net is verified on the M&Ms dataset.</p><p><strong>Conclusions: </strong>MDCC-Net significantly enhances the multi-structure segmentation of cardiac MRI under multidimensional consistency constraints. This approach provides a foundational study for integrating multifeature fusion in clinical automated and semiautomated multi-organ and multi-tissue segmentation, thus potentially improving diagnostic and treatment planning processes in clinical settings.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144059238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative analysis of residual setup errors in head and neck patients from upright versus supine radiotherapy postures.
Jiayao Sun, Lijia Zhang, Weiwei Wang, Lin Kong, Xiyin Guan, Sixue Dong, Dan You, Zhuangming Shen, Yinxiangzi Sheng. Medical Physics, published 2025-04-11. DOI: 10.1002/mp.17824

Background: The use of carbon-ion rotating gantries is limited by their large size, weight, and high cost. A gantry-free modality reduces the overall size, weight, and cost. Among gantry-free approaches, upright treatment uses fixed ion beamlines in combination with a treatment chair capable of 360° rotation and an adjustable pitch angle (enabling non-coplanar beam delivery); it provides a wider range of beam entry angles than conventional couch-based setups and has already been applied in particle radiotherapy for head and neck cancer patients.

Purpose: In this study, we analyzed clinical data from the Shanghai Proton and Heavy Ion Center (SPHIC) to quantify residual setup errors across various regions of interest (ROIs) for both upright and supine treatments.

Methods: A total of 402 treatment fractions from 28 patients (median 5 fractions; range 5-16 fractions per posture per patient) were enrolled in this study. All patients were immobilized and scanned in the supine posture and received both supine and upright radiotherapy. Three rectangular ROIs were delineated based on bone structures, encompassing the mandible, the orbit, and neck vertebrae C1-C3. Box-based registration, focusing solely on the anatomical structures within each ROI, was performed to subtract the correction vector used in treatment, thereby obtaining the residual setup error for each ROI. Margins for each ROI were calculated.

Results: For both postures, the median residual setup error in all translational directions was less than 1 mm, and the median rotational errors did not exceed 0.2°. More than 78% of the upright-treatment fractions fell within the 1 mm/° threshold, and 94% were within the 2 mm/° threshold. In contrast, for supine treatment, over 61% fell within the 1 mm/° threshold and 86% within the 2 mm/° threshold. The maximum margin was 3.3 mm, in the AP direction of the C1-C3 region for the supine posture.

Conclusions: Upright treatments demonstrated residual setup errors comparable to supine treatments, with most errors falling within clinically acceptable thresholds. This study provides valuable clinical evidence for the continued development and implementation of upright radiotherapy.
Improving decomposition image quality in dual-energy chest radiography using two-dimensional crisscrossed anti-scatter grid.
Duhee Jeon, Younghwan Lim, Hyesun Yang, Myeongkyu Park, Kyong-Woo Kim, Hyosung Cho. Medical Physics, published 2025-04-11. DOI: 10.1002/mp.17819

Background: Chest radiography is a widely used medical imaging modality for diagnosing chest-related diseases. However, overlap of anatomical structures hinders accurate lesion detection. While dual-energy x-ray imaging addresses this issue by separating soft-tissue and bone images from an original chest radiograph, scattered radiation remains a significant challenge for decomposition image quality.

Purpose: This work conducts dual-energy material decomposition (DEMD) in chest radiography using a two-dimensional (2D) crisscrossed anti-scatter grid to improve decomposition image quality by effectively removing scattered radiation.

Methods: A 2D graphite-interspaced grid with a strip density of N = 1.724 lines/mm and a grid ratio of r = 6:1 was fabricated using a high-precision sawing process. The grid characteristics were evaluated using the IEC standard fixture. A 2D-grid-based DEMD process was implemented, involving the acquisition of low- and high-kV radiographs with the 2D grid, the generation of a pairwise decomposition function using a calibration wedge phantom, and the decomposition of soft-tissue and bone images using this function, followed by software-based grid artifact reduction. Experiments were conducted on a commercially available chest phantom using an x-ray imaging system operating at tube voltages of 70 and 120 kVp. The decomposition image quality of the proposed DEMD method and the conventional dual-energy subtraction method was compared for the cases of no grid, software-based scatter correction, a 1D grid (N = 8.475 lines/mm, r = 12:1), and the 2D grid.

Results: The 2D grid demonstrated superior scatter removal, with a scattered-radiation transmission of 6.34% and a grid selectivity of 9.67, representing a 2.6-fold decrease and a 2.7-fold improvement over the 1D grid, respectively. Compared to the other methods, the 2D-grid-based DEMD method considerably improved decomposition image quality, with improved visibility of lung structures in the selective soft-tissue images.

Conclusions: The proposed DEMD method yielded high-quality dual-energy chest radiographs by effectively removing scattered radiation, demonstrating significant potential for improving lesion detection in clinical practice.
Generative evidential synthesis with integrated segmentation framework for MR-only radiation therapy treatment planning.
Lina Mekki, Matthew Ladra, Sahaja Acharya, Junghoon Lee. Medical Physics, published 2025-04-11. DOI: 10.1002/mp.17828

Background: Radiation therapy (RT) planning is a time-consuming process involving the contouring of target volumes and organs at risk, followed by treatment plan optimization. CT is typically used as the primary planning image modality because it provides the electron density information needed for dose calculation. MRI is widely used for contouring after registration to CT because of its high soft-tissue contrast. However, there exist uncertainties in registration, which propagate through treatment planning as contouring errors and lead to dose inaccuracies. MR-only RT planning has been proposed as a solution that eliminates the need for a CT scan and image registration by synthesizing CT from MRI. A challenge in deploying MR-only planning in the clinic is the lack of a method to estimate the reliability of a synthetic CT in the absence of ground truth. While sampling-based approaches can estimate model uncertainty over multiple inferences, they suffer from long run times and are therefore inconvenient for clinical use.

Purpose: To develop a fast and robust method for the joint synthesis of CT from MRI, estimation of model uncertainty related to the synthesis accuracy, and segmentation of organs at risk (OARs), all in a single model inference.

Methods: In this work, deep evidential regression is applied to MR-only brain RT planning. The proposed framework uses a multi-task vision transformer combining a single joint nested encoder with two distinct convolutional decoder paths, one for synthesis and one for segmentation. An evidential layer added at the end of the synthesis decoder jointly estimates model uncertainty in a single inference. The framework was trained and tested on a dataset of 119 paired T1-weighted MRI and CT scans with OAR contours (80 for training, 9 for validation, and 30 for testing).

Results: The proposed method achieved a mean ± SD SSIM of 0.820 ± 0.039, MAE of 47.4 ± 8.49 HU, and PSNR of 23.4 ± 1.13 for the synthesis task, and Dice similarity coefficients of 0.799 ± 0.132 (lenses), 0.945 ± 0.020 (eyes), 0.834 ± 0.059 (optic nerves), 0.679 ± 0.148 (chiasm), 0.947 ± 0.014 (temporal lobes), 0.849 ± 0.027 (hippocampus), 0.953 ± 0.024 (brainstem), and 0.752 ± 0.228 (cochleae) for segmentation, in a total run time of 6.71 ± 0.25 s. Additionally, experiments on challenging test cases revealed that the proposed evidential uncertainty estimation highlighted the same uncertain regions as Monte Carlo-based epistemic uncertainty, supporting the reliability of the proposed method.

Conclusion: A framework leveraging deep evidential regression to jointly synthesize CT from MRI, predict the related synthesis uncertainty, and segment OARs in a single model inference was developed. The proposed approach has the potential to streamline the planning process and provide clinicians with a measure of the reliability of a synthetic CT.
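The abstract names the uncertainty mechanism: a deep evidential regression layer at the end of the synthesis decoder. A minimal PyTorch sketch following the standard Normal-Inverse-Gamma formulation of deep evidential regression (Amini et al., 2020) is shown below; the 1×1-convolution head is an assumption, and the evidential training loss is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) per
    voxel; the mean prediction and epistemic uncertainty then come out of a
    single forward pass, with no Monte Carlo sampling."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 4, kernel_size=1)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.conv(x).chunk(4, dim=1)
        nu = F.softplus(log_nu)
        alpha = F.softplus(log_alpha) + 1.0  # ensures alpha > 1
        beta = F.softplus(log_beta)
        pred = gamma                             # predicted value (e.g., HU)
        epistemic = beta / (nu * (alpha - 1.0))  # Var[mu], model uncertainty
        return pred, epistemic, (gamma, nu, alpha, beta)
```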
Tissue classification from raw diffusion-weighted images using machine learning.
Guangyu Dan, Cui Feng, Zheng Zhong, Kaibao Sun, Ping-Shou Zhong, Daoyu Hu, Zhen Li, Xiaohong Joe Zhou. Medical Physics, published 2025-04-08. DOI: 10.1002/mp.17810

Background: In diffusion-weighted imaging (DWI), a large collection of diffusion models is available to provide insights into tissue characteristics. However, these models are limited by predefined assumptions and computational challenges, potentially hindering full extraction of the information in the diffusion MR signal.

Purpose: This study aimed to develop a MOdel-free Diffusion-wEighted MRI (MODEM) method for tissue differentiation using a machine learning (ML) algorithm applied to raw diffusion images, without relying on any specific diffusion model. MODEM was applied to both simulated data and cervical cancer diffusion images and compared with several diffusion models.

Methods: With Institutional Review Board approval, 54 cervical cancer patients (median age, 52 years; range, 29-73 years) participated in the study, including 26 in the early FIGO (International Federation of Gynecology and Obstetrics) stage (IB, 16; IIA, 10) and 28 in the late stage (IIB, 8; IIIB, 14; IIIC, 1; IVA, 3; IVB, 2). The participants underwent DWI with 17 b-values (0 to 4500 s/mm²) at 3 Tesla. Synthetic diffusion MRI signals were also generated using Monte Carlo simulation with Gaussian noise doping under varying substrates. MODEM with a multilayer perceptron and five diffusion models (mono-exponential, intravoxel incoherent motion, diffusion kurtosis imaging, fractional order calculus, and continuous-time random-walk models) were employed to distinguish substrates in the simulated data and to differentiate pathological states (normal vs. cancerous tissue; early-stage vs. late-stage cancer) in the cervical cancer dataset. Accuracy and area under the receiver operating characteristic (ROC) curve were evaluated. The Mann-Whitney U-test was used to compare the area under the curve (AUC) and accuracy values between MODEM and the five diffusion models.

Results: For the simulated dataset, MODEM produced a higher AUC and better accuracy, particularly where the noise level exceeded 5%. For the cervical cancer dataset, MODEM yielded the highest AUC and accuracy in cervical cancer detection (AUC, 0.976; accuracy, 91.9%) and cervical cancer staging (AUC, 0.773; accuracy, 69.2%), significantly outperforming every diffusion model (p < 0.05).

Conclusions: MODEM is useful for cervical cancer detection and staging and offers considerable advantages over analytical diffusion models for tissue characterization.
{"title":"Deep learning-based estimation of respiration-induced deformation from surface motion: A proof-of-concept study on 4D thoracic image synthesis.","authors":"Jie Zhang, Xue Bai, Guoping Shan","doi":"10.1002/mp.17804","DOIUrl":"https://doi.org/10.1002/mp.17804","url":null,"abstract":"<p><strong>Background: </strong>Four-dimension computed tomography (4D-CT) provides important respiration-related information for thoracic radiotherapy. Its quality is challenged by various respiratory patterns. Its acquisition gives rise to the risk of higher radiation exposure. Based on a continuously estimated deformation, a 4D synthesis by warping a high-quality volumetric image is a possible solution.</p><p><strong>Purpose: </strong>To propose a non-patient-specific cascaded ensemble model (CEM) to estimate respiration-induced thoracic tissue deformation from surface motion.</p><p><strong>Methods: </strong>The CEM is cascaded by three deep learning-based models. By inputting the surface motion, CEM outputs a deformation vector field (DVF) inside thorax. In our work, the surface motion was simulated using the body contours derived from 4D-CT. The CEM was trained on our private database including 62 4D-CT sets, and was tested on a public database encompassing 80 4D-CT sets. To evaluate CEM, we employed the model output DVF to generate a few series of synthesized CTs, and compared them with the ground truth. CEM was also compared with other published works.</p><p><strong>Results: </strong>CEM synthesized CT with an mRMSE (average root mean square error) of 61.06 ± 10.43HU (average ± standard deviation), an mSSIM (average structural similarity index measure) of 0.990 ± 0.004, and an mMAE (average mean absolute error) of 26.80 ± 5.65HU. Compared with other works, CEM showed the best result.</p><p><strong>Conclusions: </strong>The results demonstrated the effectiveness of CEM on estimating tissue DVF inside thorax. CEM requires no patient-specific breathing data sampling and no additional training before treatment. It shows potential for broad applications.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143789483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent meningioma grading based on medical features.","authors":"Hua Bai, Jieyu Liu, Chen Wu, Zhuo Zhang, Qiang Gao, Yong Yang","doi":"10.1002/mp.17808","DOIUrl":"https://doi.org/10.1002/mp.17808","url":null,"abstract":"<p><strong>Background: </strong>Meningiomas are the most common primary intracranial tumors in adults. Low-grade meningiomas have a low recurrence rate, whereas high-grade meningiomas are highly aggressive and recurrent. Therefore, the pathological grading information is crucial for treatment, as well as follow-up and prognostic guidance. Most previous studies have used radiomics or deep learning methods to extract feature information for grading meningiomas. However, some radiomics features are pixel-level features that can be influenced by factors such as image resolution and sharpness. Additionally, deep learning models that perform grading directly from MRI images often rely on image features that are ambiguous and uncontrollable, which reduces the reliability of the results to a certain extent.</p><p><strong>Purpose: </strong>We aim to validate that combining medical features with deep neural networks can effectively improve the accuracy and reliability of meningioma grading.</p><p><strong>Methods: </strong>We construct a SNN-Tran model for grading meningiomas by analyzing medical features including tumor volume, peritumoral edema volume, dural tail sign, tumor location, the ratio of peritumoral edema volume to tumor volume, age and gender. This method is able to better capture the complex relationships and interactions in the medical features and enhance the reliability of the prediction results.</p><p><strong>Results: </strong>Our model achieve an accuracy of 0.875, sensitivity of 0.886, specificity of 0.847, and AUC of 0.872. And the method is superior to the deep learning, radiomics and SOTA methods.</p><p><strong>Conclusion: </strong>We demonstrate that combining medical features with SNN-Tran can effectively improve the accuracy and reliability of meningioma grading. The SNN-Tran model excel in capturing long-range dependencies in the medical feature sequence.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143782265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Point-cloud segmentation with in-silico data augmentation for prostate cancer treatment.
Jianxin Zhou, Massimiliano Salvatori, Kadishe Fejza, Gregory M Hermann, Angela Di Fulvio. Medical Physics, published 2025-04-03. DOI: 10.1002/mp.17815

Background: In external x-ray radiation therapy, the administered dose distribution can deviate from the planned dose due to alterations in patient positioning, changes in intra-fraction anatomy, and the limited spatial precision of the beam delivery system. Adaptive radiation therapy (ART) can potentially improve dose delivery accuracy by re-optimizing the treatment plan before each fraction, maximizing the dose to the target volume while minimizing exposure to surrounding radiosensitive organs. However, to implement ART effectively, every stage of the radiation therapy pipeline, including image acquisition, segmentation, physician directive generation, and treatment plan generation, must be optimized for speed and accuracy so that the process is feasible before each treatment fraction. This work focuses on image segmentation: reducing the segmentation computation time allows the planning process to be reproduced for each session, enabling routine customization for individual patients, safe dose escalation, better cancer control, and reduced risk of severe radiotoxicity.

Purpose: The aim of this study is to develop a fast point-cloud-based segmentation model with novel in-silico-aided data augmentation and to demonstrate it on pelvic computed tomography (CT) patient data used in prostate cancer (PCa) treatment. The model can be implemented during ART because it requires only a few seconds to perform organ segmentation.

Methods: A dataset of 38 pelvic CT images was obtained from Order of St. Francis (OSF) Healthcare Hospital (Peoria, IL, USA) and divided into 25 for training, seven for validation, and six for testing. A novel point-cloud-based model was used to reduce the prostate segmentation time, and cross-validation was implemented to ensure the robustness of the model. The developed network is a novel deep-learning (DL) model whose loss function combines a region-based loss with a new boundary loss. The region-based loss enables the identification of large volumes, while the boundary loss, whose relative weight increases with the epochs, improves the network's ability to learn uneven surfaces, such as the interfaces between the prostate, bladder, and rectum, which are challenging to resolve. We introduced a new data-augmentation approach to expand the training set: a fully automated method that generates synthetic 3D CT images by creating the relevant organs in the extended cardiac-torso (XCAT) computational phantom. The Dice similarity coefficient was used as an assessment metric and compared to state-of-the-art segmentation models. The doses to the prostate and organs at risk (i.e., bladder and rectum) were also calculated for both our automated segmentation and manual expert segmentation to evaluate the practical feasibility of the point-cloud-based approach.
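The combined region-plus-boundary loss with an epoch-growing boundary weight can be sketched directly. Below is a minimal PyTorch version in the spirit of Kervadec et al.'s boundary loss, where foreground probabilities are weighted by a signed distance map to the ground-truth boundary; the linear weight schedule is an assumption.

```python
import torch

def combined_loss(probs_fg, target_fg, signed_dist, epoch, max_epoch, eps=1e-6):
    """Region (Dice) term plus boundary term; the boundary weight ramps up
    linearly over training (assumed schedule). signed_dist is negative inside
    the ground-truth object and positive outside."""
    inter = (probs_fg * target_fg).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs_fg.sum() + target_fg.sum() + eps)
    boundary = (probs_fg * signed_dist).mean()
    w = min(1.0, epoch / max_epoch)  # relative boundary weight grows with epochs
    return (1.0 - w) * dice + w * boundary
```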
A novel algorithm for automated analysis of coronary CTA-derived FFR in identifying ischemia-specific CAD: A multicenter study.
Hongqin Liang, Feng Wen, Li Kong, Yue Li, Feihua Jing, Zhiguo Sun, Jucai Zhang, Haipeng Zhang, Shan Meng, Jian Wang. Medical Physics, published 2025-04-01. DOI: 10.1002/mp.17803

Background: Coronary fractional flow reserve derived from coronary computed tomography angiography (CTA) is increasingly favored for its non-invasive nature.

Purpose: To validate the ability of a novel on-site analysis model for CT-derived fractional flow reserve (CT FFR), based on deep learning and level set algorithms, to identify lesion-specific ischemic coronary artery disease (CAD).

Methods: A retrospective analysis was conducted on 198 vessels from 171 patients at four medical centers who underwent CTA and invasive fractional flow reserve (FFR) examinations. Using invasive FFR and invasive coronary angiography (ICA) as reference standards, the new model, based on deep learning and a level set algorithm, and a deep learning-based artificial intelligence (AI) platform were used to compare CT FFR values and stenosis rates.

Results: The new model achieved a single-vessel accuracy of 85.9% (95% confidence interval [CI]: 80-90), higher than the AI platform's 66.7% (95% CI: 59.6-73.1). Its sensitivity was 82.8% (95% CI: 72.8-89.7), its specificity 88.3% (95% CI: 80.5-93.4), and its area under the curve (AUC) 0.90 (95% CI: 0.85-0.94). The stenosis rate measured by the model was much higher than that measured by ICA (r = 0.84, p < 0.0001). Using the standard FFR threshold of 0.8, the new model accurately identified 24 vessels with FFR values between 0.75 and 0.8. The AI platform exhibited significant differences in accuracy across different stenosis ranges (p = 0.022).

Conclusion: The novel CT FFR algorithm, which combines deep learning with level set algorithms to optimize coronary artery 3D reconstruction, may have potential value for fully automatic on-site analysis of specific coronary ischemia.