Enhancing semi-supervised learning for fine-grained 3D cerebrovascular segmentation with cross-consistency and uncertainty estimation
Yousuf Babiker M. Osman, Cheng Li, Nazik Elsayed, Alou Diakite, Shuqiang Wang, Shanshan Wang
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70017

Background: Accurate delineation of the cerebral blood vessels from time-of-flight magnetic resonance angiography (TOF-MRA) data is essential to the analysis, diagnosis, and treatment of pathologies related to the cerebral blood supply. The annotation cost and limited applicability of supervised deep learning approaches motivate alternative approaches that can address these challenges and facilitate real-world clinical deployment of automatic 3D cerebrovascular segmentation.

Purpose: To address the challenge of limited labeled data by exploiting the intricate structure of vessels and developing a method to assess the reliability of generated pseudo-labels, with the ultimate goal of using unlabeled data more efficiently and improving segmentation accuracy.

Methods: We introduce a cross-consistency, dual uncertainty quantification mean teacher method for semi-supervised fine-grained 3D cerebrovascular segmentation from TOF-MRA images. To incorporate knowledge from unlabeled samples effectively, we present a dual-consistency learning approach that jointly enforces equivariance to pixel-level image transformations and invariance to feature perturbations. To increase confidence in the unsupervised learning signal, we then estimate segmentation uncertainty from the predictions of both the student and teacher models and use the two jointly to guide consistency regularization. Additionally, we boost pixel-level prediction performance by applying a region-specific supervised loss to the annotated input samples only.

Results: Quantitative and qualitative results on two publicly available datasets show that the proposed method outperformed state-of-the-art semi-supervised learning methods for cerebrovascular segmentation. Specifically, our method achieved a Dice similarity coefficient of 83.3% and an intersection-over-union of 71.5% on the IXI dataset, surpassing the baseline uncertainty-aware mean teacher method by 1.7% and 2.8%, respectively.

Conclusion: The framework's competitive performance across various metrics shows its potential for relieving human annotation effort in accurate cerebrovascular extraction tasks, where its effective use of unlabeled data offers significant advantages.
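The consistency mechanism described in the Methods above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch fragment (not the authors' code) showing an uncertainty-masked consistency loss between student and teacher softmax outputs and an EMA teacher update; the threshold value and function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of uncertainty-masked
# consistency regularization in a mean-teacher setup, assuming softmax
# segmentation outputs and an EMA-updated teacher. `consistency_loss`,
# `update_teacher`, and the threshold are illustrative names/values.
import torch
import torch.nn.functional as F

def entropy_map(probs, eps=1e-6):
    """Voxel-wise predictive entropy of a softmax map (B, C, D, H, W)."""
    return -(probs * torch.log(probs + eps)).sum(dim=1)

def consistency_loss(student_logits, teacher_logits, uncertainty_threshold=0.5):
    """MSE consistency on voxels where both models are sufficiently confident."""
    p_s = torch.softmax(student_logits, dim=1)
    p_t = torch.softmax(teacher_logits, dim=1)
    # Dual uncertainty: keep voxels whose entropy is low for *both* predictions.
    mask = ((entropy_map(p_s) < uncertainty_threshold) &
            (entropy_map(p_t) < uncertainty_threshold)).float().unsqueeze(1)
    diff = (p_s - p_t) ** 2
    return (diff * mask).sum() / (mask.sum() * diff.shape[1] + 1e-6)

@torch.no_grad()
def update_teacher(student, teacher, ema_decay=0.99):
    """Exponential moving average update of the teacher weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)
```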
A deep learning framework for accurate mammographic mass classification using local context attention module
Ibrahim Abdelhalim, Yassir Almalki, Abdelrahman Abdallah, Rasha Karam, Sharifa Alduraibi, Mohammad Basha, Hassan Mohamed, Mohammed Ghazal, Ali Mahmoud, Norah Saleh Alghamdi, Sohail Contractor, Ayman El-Baz
Medical Physics 52(10), 2025. DOI: 10.1002/mp.18119

Background: Dense breast tissue significantly increases breast cancer (BC) risk. However, current mammographic methods for classifying BC are often subjective and unreliable, which complicates accurate evaluation.

Purpose: This study introduces a deep learning method with a local context attention module (LCAM) that uses dual mammogram views aligned with BI-RADS to improve the consistency and accuracy of BC classification across four categories by leveraging the local context around masses.

Methods: Specific regions of interest (ROIs) containing dense tissue around breast masses are identified from dual mammogram views, providing additional context for predicting BC BI-RADS categories. These ROIs are then input into a convolutional neural network (CNN)-based model that selects and differentiates radiomic features associated with BI-RADS. To strengthen the model's ability to distinguish salient radiomic features associated with mass malignancy, the LCAM sequentially infers attention maps along two separate dimensions, channel and spatial; these attention maps are then multiplied with the input feature map for adaptive feature refinement.

Results: Evaluation on 3020 patients across four BI-RADS categories using dual mammogram views demonstrates the robust performance of the proposed framework, which achieves a sensitivity of 82.46% and a specificity of 91.42% in identifying the BI-RADS grade of breast masses.

Conclusions: We introduced a novel CNN-based framework that uses dual mammogram views for BC classification. Its LCAM captures the local characteristics surrounding breast masses, improving the accuracy and consistency of classification outcomes.
{"title":"Enhancing auto-contouring with large language model in high-dose rate brachytherapy for cervical cancers","authors":"Jing Wang, Jiahan Zhang, Kaida Yang, Beth Bradshaw Ghavidel, Benyamin Khajetash, Abolfazl Sarikhani, Mohammad Houshyari, Tian Liu, Yang Lei, Meysam Tavakoli","doi":"10.1002/mp.70034","DOIUrl":"10.1002/mp.70034","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> BACKGROUND</h3>\u0000 \u0000 <p>High-dose-rate brachytherapy (HDR-BT) is a cornerstone of cervical cancer (CC) treatment, requiring the precise delineation of high-risk clinical target volumes (HR-CTV) and organs at risk (OARs) for effective dose delivery and toxicity reduction. However, the time-sensitive nature of HDR-BT planning and its reliance on expert contouring introduce inter- and intra-observer variability, posing challenges for consistent and accurate treatment planning.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> PURPOSE</h3>\u0000 \u0000 <p>This study proposes a novel deep learning (DL)-based auto-segmentation framework, guided by task-specific prompts generated from large language models (LLMs), to address these challenges and improve segmentation accuracy and efficiency.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> METHODS</h3>\u0000 \u0000 <p>A retrospective dataset of 32 CC patients, encompassing 124 planning computed tomography (pCT) images, was utilized. The framework integrates clinical guidelines for organ contouring from the American Brachytherapy Society (ABS), the European Society for Radiotherapy and Oncology (ESTRO), and the International Commission on Radiation Units and Measurements (ICRU). LLMs, particularly Chat-GPT, extracts domain knowledge from these contouring guidelines to generate task-specific prompts, which guide a Swin transformer-based encoder and a fully convolutional network (FCN) decoder for segmentation. The DL pipeline was evaluated on HR-CTV and OARs, including the bladder, rectum, and sigmoid. Metrics such as Dice similarity coefficient (DSC), Hausdorff distance (HD95%), mean surface distance (MSD), and center-of-mass distance (CMD) were used for performance assessment. An ablation study compared the prompt-guided approach with a baseline model without prompt guidance. Statistical differences were tested with two-tailed paired <i>t</i>-tests, and <i>p</i>-values were adjusted using the Benjamini–Hochberg method to address the multiple comparisons correction and results with adjusted <i>p</i> < 0.05 were deemed significant. Cohen's d values were calculated to quantify effect sizes.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> RESULTS</h3>\u0000 \u0000 <p>The proposed framework achieved the highest segmentation for the bladder (DSC of 0.91 ± 0.07), followed by the HR-CTV (DSC of 0.80 ± 0.08) and the rectum (DSC of 0.78 ± 0.07), and a lower accuracy for sigmoid (DSC of 0.63 ± 0.15) due to its small size and irregular shape. Boundary precision was highest for the HR-CTV (HD95%: 6.32 ± 2.31 mm). The ablation study confirmed the contribution of prompt guidance, with statistically significant improvements in DSC and/or HD95% (<i>p</i> < 0.05) for all OARs. 
Pr","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 10","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Noise-augmented deep denoising: A method to boost CT image denoising networks","authors":"Gernot Kristof, Elias Eulig, Marc Kachelrieß","doi":"10.1002/mp.18121","DOIUrl":"10.1002/mp.18121","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Background</h3>\u0000 \u0000 <p>Denoising low dose computed tomography (CT) images can have great advantages for the aim of minimizing the radiation risk of the patients, as it can help lower the effective dose to the patient while providing constant image quality. In recent years, deep denoising methods became a popular way to accomplish this task. Conventional deep denoising algorithms, however, cannot handle the correlation between neighboring pixels or voxels very well, because the noise structure in CT is a resultant of the global attenuation properties of the patient and because the receptive field of most denoising approaches is rather small.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Purpose</h3>\u0000 \u0000 <p>The purpose of this study is to improve existing denoising networks, by providing them additional information about the image noise.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>We here propose to generate <span></span><math>\u0000 <semantics>\u0000 <mi>N</mi>\u0000 <annotation>$N$</annotation>\u0000 </semantics></math> additional noise realizations by simulation, reconstruct them, and use these noise images as additional input into existing denoising networks. This noise augmentation is intended to guide the denoising process. The additional noise realizations are not only required during training, but also during inference. The rationale behind this noise-augmented deep denoising (NADD) is that CT image noise is strongly patient-specific and it is non-local since it depends on the attenuation of X-ray beams. NADD is architecture-agnostic and can thus be used to improve any previously proposed method. We demonstrate NADD using existing denoising networks that we slightly modified in their input layer in order to take the CT image that is to be denoised plus additional noise images as input. To do so, we modified three popular denoising networks, the CNN10, the ResNet, and the WGAN-VGG and apply them to clinical cases with 90% dose reduction.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>In all cases tested, the denoising networks strongly benefit from the noise augmentation. Noise artifacts that are being misinterpreted by the original networks as being anatomical structures, are correctly removed by the NADD version of the same networks. The more noise images are provided, the better the performance.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusions</h3>\u0000 \u0000 <p>Providing additional simulated no","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 10","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://aapm.onlinelibrary.wiley.com/doi/epdf/10.1002/mp.18121","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization and fabrication of a novel 3D-printed variable density range modulation device for proton FLASH beams
Wenbo Gu, Khayrullo Shoniyozov, Kai Mei, Alexander Lin, Wei Zou, Lei Dong, Peter B. Noël, Boon-Keng Kevin Teo
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70013

Background: For proton FLASH therapy, range-modulating devices are inserted in the beam path to create a spread-out Bragg peak (SOBP) for ultrafast delivery with a single-energy pencil beam scanning technique. Current designs typically consist of uniform-density spikes, with range modulation achieved by changing the area and height of the spikes, which limits structural stability and modulation flexibility.

Purpose: We present a new class of 3D-printed range-modulating devices for particle therapy with spatially modulated density.

Methods: PixelPrint technology (Laboratory for Advanced Computed Tomography Imaging, University of Pennsylvania, PA) was used to 3D-print the variable-density range modulator by continuously varying the ratio of filament to air in each voxel. With a specific thickness and spatial density modulation, SOBPs of varying widths can be created. A calibration phantom was 3D printed and scanned by a dual-energy computed tomography (CT) scanner to characterize the physical and radiological properties of the PixelPrint technology. We developed an inverse optimization algorithm to generate the density map that produces an SOBP from a monoenergetic proton beam and verified it with MCsquare (http://www.openmcsquare.org/), an open-source Monte Carlo (MC) simulation platform. The range modulation characteristics were measured using a multi-layer ionization chamber (MLIC) under monoenergetic proton field irradiation.

Results: The proposed optimization framework generated density distributions for multiple SOBP widths. MC simulation verified the width and flatness of the created SOBPs. The CT scan of a 3-cm SOBP modulator showed good fidelity to the desired density distribution, except in the highest-density regions. MLIC measurements confirmed the accuracy of the produced SOBP for multiple proton beam energies.

Conclusion: A novel variable-density range-modulating device for proton therapy was successfully developed. These devices can be handled easily and have the potential to significantly speed up proton therapy treatment delivery.
Radiation therapy response prediction for head and neck cancer using multimodal imaging and multiview dynamic graph autoencoder feature selection
Amir Moslemi, Laurentius Oscar Osapoetra, Aryan Safakish, Lakshmanan Sannachi, David Alberico, Gregory J Czarnota
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70026

Background: External beam radiation therapy is a common treatment for head and neck (H&N) cancers. Radiomic features derived from biomedical images have shown promise as biomarkers for assessing tumor heterogeneity and predicting response to treatment. However, most studies employ only a single imaging modality to determine radiomic features or naively concatenate radiomic features from different modalities.

Purpose: The objective of this study is to assess the effectiveness of multiview feature selection (MVFS) in identifying the most discriminative radiomic features determined from pretreatment quantitative ultrasound spectroscopic (QUS) parametric maps, computed tomography (CT), and magnetic resonance imaging (MRI). These features were used to train predictive models for the outcome of radiation therapy in H&N cancer.

Methods: In total, 70, 70, and 350 radiomic features were extracted from pre-treatment CT images, MRI images, and seven QUS parametric maps, respectively. We proposed an MVFS technique named Adaptive Graph Autoencoder Multi-View Feature Selection (AGAMVFS), based on dynamic graph learning and an autoencoder. In AGAMVFS, adaptive and collaborative graphs are learned at multiple levels to discriminate among view-specific features. An autoencoder is then applied to the concatenated features to select the most discriminative ones. This approach fosters collaboration across the different views and achieves a consensus projection for feature selection. Leave-one-patient-out cross-validation was used to split the data into training and test sets, and the selected features were used to train two classifiers, a support vector machine (SVM) and k-nearest neighbors (KNN), to predict response to treatment for patients with H&N cancers. Fivefold cross-validation on the training set was used to tune the hyperparameters of the SVM and KNN classifiers. Classifier performance was evaluated using accuracy, F1-score, balanced accuracy, sensitivity, and specificity. Additionally, a two-sided t-test was applied to the selected features. We compared the proposed method with single-imaging-modality models and state-of-the-art feature selection techniques.

Results: We recruited 63 H&N cancer patients (59 male, 94%; 4 female, 6%) with bulky metastatic neck lymph node (LN) involvement. The …
{"title":"Deep self-cleansing for medical image segmentation with noisy labels","authors":"Jiahua Dong, Yue Zhang, Qiuli Wang, Ruofeng Tong, Shihong Ying, Shaolin Gong, Xuanpu Zhang, Lanfen Lin, Yen-Wei Chen, Shaohua Kevin Zhou","doi":"10.1002/mp.70007","DOIUrl":"10.1002/mp.70007","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Background</h3>\u0000 \u0000 <p>Medical image segmentation plays a pivotal role in medical imaging, significantly contributing to disease diagnosis and surgical planning. Traditional segmentation methods predominantly rely on supervised deep learning, where the accuracy of manually delineated labels is crucial for model performance. However, these labels often contain noise, such as missing annotations and imprecise boundaries, which can adversely affect the network's ability to accurately model target characteristics.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Purpose</h3>\u0000 \u0000 <p>This study aims to develop a robust segmentation framework capable of mitigating the impact of noisy labels during the training phase. The proposed framework is designed to preserve clean labels while cleansing noisy ones, thereby enhancing the overall segmentation accuracy.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>We introduce a deep self-cleansing segmentation framework that incorporates two key modules as follows: a Gaussian Mixture Model (GMM)-based label filtering module (LFM) and a label cleansing module (LCM). The GMM-based LFM is employed to differentiate between noisy and clean labels. Subsequently, the LCM generates pseudo low-noise labels for the identified noisy samples. These pseudo-labels, along with the preserved clean labels, are then used to supervise the network training process.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>The framework was evaluated on a clinical liver tumor dataset (231 CT scans) and a public cardiac diagnosis dataset (200 MRI scans). Compared to baseline methods, our approach significantly improves segmentation performance, achieving a +7.31% boost in the B-model and a +12.36% improvement in the L-model. These results demonstrate the framework's ability to effectively suppress the interference of noisy labels and enhance segmentation accuracy. The method's capability to distinguish and cleanse noisy labels ensures more precise modeling of target structures, improving the robustness of the segmentation process.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusions</h3>\u0000 \u0000 <p>The proposed deep self-cleansing segmentation framework offers a promising solution to the challenge of noisy labels in medical image segmentation. By integrating a GMM-based LFM and an LCM, the framework effectively preserves clean labels and generates pseudo low-noise labels, thereby improving the overall segmentation accuracy. 
The successful validation on both clinical and public datasets underscores the potential of this approach to enhance disease diagnosis and surgical planning ","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 10","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145110748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
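The GMM-based label filtering step can be sketched compactly: fit a two-component Gaussian mixture to per-sample training losses and treat the lower-loss component as "clean". The fragment below is an illustrative scikit-learn version of that idea; the subsequent cleansing (LCM) step is application-specific and omitted.

```python
# Minimal sketch of GMM-based label filtering: fit a two-component Gaussian
# mixture to per-sample training losses and treat the low-loss component as
# "clean". The pseudo-label cleansing step is omitted here.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_losses, clean_prob_threshold=0.5):
    losses = np.asarray(per_sample_losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))    # lower-mean = clean
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    return p_clean >= clean_prob_threshold                   # boolean clean mask
```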
Comprehensive characterization and validation of a fast-resolving (1000 Hz) plastic scintillator for ultra-high dose rate electron dosimetry
Lixiang Guo, Banghao Zhou, Yi-Chun Tsai, Kai Jiang, Viktor Iakovenko, Ken Kang-Hsin Wang
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70006

Background: The normal tissue sparing effect of ultra-high dose rate irradiation (≥40 Gy/s, UHDR), as compared to conventional dose rate (CONV) irradiation, has attracted significant research interest for FLASH radiotherapy (RT). Accurate, dose-rate-independent, fast-responding dosimeters capable of resolving the spatiotemporal characteristics of UHDR beams are urgently needed to facilitate FLASH research and support its clinical translation. Tissue-equivalent scintillators, with millimeter-level spatial resolution and millisecond-level temporal resolution, possess these required characteristics and show strong potential for UHDR dosimetry.

Purpose: We investigated the performance of the HYPERSCINT RP-FLASH scintillator system at sampling frequencies (f_s) up to 1000 Hz for UHDR electron beam dosimetry.

Methods: The scintillator was exposed to CONV and UHDR electron irradiation using a LINAC-based FLASH platform. Its spectral characteristics were delineated with a four-component calibration, followed by a signal-to-dose calibration using an 18 MeV CONV electron beam. Dose linearity and dosimetric accuracy in response to CONV and UHDR irradiation at 1 and 1000 Hz f_s were quantified against ion chamber and EBT-XD film measurements. The response of the scintillator system was investigated as a function of beam energy (6 and 18 MeV), field size (2 × 2 to 25 × 25 cm²), dose per pulse (DPP, 0.8–2.3 Gy/pulse), and pulse repetition frequency (PRF, 30–180 Hz). Relative signal sensitivity was quantified against accumulated dose to account for the scintillator's radiation degradation. Pulse-resolved dose measurements at 18 MeV UHDR, obtained using the scintillator at 1000 Hz f_s for a train of 10 pulses at 180 Hz PRF, were validated against a PMT-fiber optic scattered radiation detector.

Results: At 1 Hz f_s, the scintillator system demonstrated high accuracy, remaining within 0.5% of ion chamber measurements over the 0.1–35 Gy dose range under CONV irradiation. For UHDR irradiation, the scintillator showed <3% dose error compared to film measurements up to 40 Gy at 1000 Hz f_s. Its response was minimally dependent on energy, field size, and PRF. The scintillator under-responded by ∼4% over the 0.8–2.3 Gy/pulse range, although the dose difference relative to film remained within 2%. The radiat…
256-channel parallel ultrasound open platform: Enabling high-resolution imaging and stimulation research
Xuan Fu, Zihan Zhang, Wenfu Lin, Qi Luo, Lvming Zeng, Yunhua Wen, Xuanrong Ji, Zhongwen Cheng
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70009

Background: Ultrasound imaging and stimulation are essential tools in various medical applications, enabling non-invasive diagnostics and targeted therapies. Ultrasound systems that integrate both imaging and stimulation capabilities offer a versatile solution for fundamental research. A high-performance, programmable platform allows researchers to customize system parameters and explore advanced imaging and stimulation techniques, playing a crucial role in driving innovation in both basic research and clinical applications.

Purpose: This study introduces and validates a novel open ultrasound research platform designed to support both high-resolution imaging and effective ultrasound stimulation, addressing current research needs in the biomedical field.

Methods: The proposed platform features 256 parallel transmit/receive channels, a 100 MHz sampling rate, 14-bit analog-to-digital converter resolution, and 10 Gb/s optical data transfer. It incorporates plane wave imaging and full matrix capture for high-resolution, real-time ultrasound imaging. Additionally, the system can generate customized multi-cycle waveforms with pulse voltages up to 200 Vpp, enabling neuromodulation and therapeutic applications. It also offers a programmable development environment and compatibility with various phased array probes, providing flexibility for biomedical research. System performance was evaluated using a tissue-mimicking phantom and a 2 MHz transducer in phased array, plane wave imaging, and full matrix capture modes. Lateral resolution was evaluated by imaging 150 µm tungsten wires, while a 10 MHz transducer validated high-frequency imaging. A flexible transducer was tested for real-time imaging on curved surfaces, with recalibration for distortion correction. The system's biomedical monitoring capability was demonstrated through carotid artery imaging, while hydrophone-based acoustic field measurements showcased its applicability to low-intensity focused ultrasound therapies.

Results: Imaging experiments using a tissue-mimicking phantom demonstrate that the platform achieves an excellent lateral resolution of 100 µm. Flexible transducer imaging showed a notable improvement in image quality following recalibration, with over 100% enhancement. Real-time monitoring of the human common carotid artery demonstrated accurate dynamic imaging and quantification of heart and respiratory rates. For stimulation applications, hydrophone-based acoustic field…
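Plane-wave imaging, one of the modes the platform supports, reconstructs an image by delay-and-sum beamforming of the per-channel RF data. The following is a deliberately simple single-angle (0°) NumPy sketch with a loop-based beamformer; the geometry, sampling parameters, and array layout are illustrative and far simpler than the real 256-channel pipeline.

```python
# Minimal sketch of delay-and-sum beamforming for a single 0-degree plane-wave
# transmission. Geometry and parameters are illustrative; a practical
# implementation would vectorize this, apodize, and compound multiple angles.
import numpy as np

def das_plane_wave(rf, fs, c, pitch, z_grid, x_grid):
    """rf: (n_samples, n_elements) channel data from one 0-deg plane wave."""
    n_samples, n_elem = rf.shape
    elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch      # element positions (m)
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # Transmit delay: plane wave reaches depth z; receive delay: echo
            # travels back to each element.
            t = (z + np.sqrt(z**2 + (x - elem_x) ** 2)) / c       # seconds, per element
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = rf[idx[valid], np.where(valid)[0]].sum()
    return image
```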
Comparison of reduced field-of-view DWI with conventional DWI for machine learning-based assessment of lymphovascular invasion in rectal cancer
Weinuo Qu, Jing Wang, Jiali Li, Yaqi Shen, Yang Peng, Daoyu Hu, Zhen Li
Medical Physics 52(10), 2025. DOI: 10.1002/mp.70015

Background: Lymphovascular invasion (LVI) is an important prognostic factor in rectal cancer and influences treatment planning. MRI-based radiomic features provide phenotypic information on tumor biological behavior.

Purpose: We aimed to compare the performance of models derived from reduced field-of-view diffusion-weighted imaging (rDWI) for prediction of LVI with that of models derived from conventional DWI (fDWI) and high-resolution T2-weighted imaging (T2WI).

Methods: Eighty-six rectal cancer patients underwent rDWI, fDWI, and high-resolution T2WI at 3T. Whole-lesion ROIs were delineated on these sequences for radiomic feature extraction (60 and 26 patients in the training and test cohorts, respectively). A baseline logistic model was applied to all sequences to compare their diagnostic performance in predicting LVI. Machine learning models, including eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), and random forest (RF), were further applied to rDWI to assess LVI status. The performance of the models from these sequences and of visual interpretation by radiologists was evaluated and compared for LVI prediction.

Results: Radiomic models from the DWI sequences performed better than visual interpretation for diagnosing LVI (p = 0.002–0.036). Among the logistic models, radiomics derived from rDWI outperformed that from T2WI (z = 2.064, p = 0.039) in differentiating LVI. The AUC of the rDWI model was higher than that of the fDWI model, but the difference was not statistically significant (z = 1.006, p = 0.315). No significant difference in performance was detected between fDWI and T2WI (p > 0.05). The best performance, with an AUC of 0.957, was achieved by the RF model derived from rDWI in the training cohort, with a significant difference between the RF and logistic models for LVI prediction (z = 2.250, p = 0.032).

Conclusion: rDWI-derived radiomics performed better than T2WI- and fDWI-derived radiomics in differentiating LVI. Radiomic models based on rDWI are promising tools for facilitating clinical assessment of LVI status in rectal cancer.