Journal of Medical Imaging: Latest Articles

Impact of synthetic data on training a deep learning model for lesion detection and classification in contrast-enhanced mammography.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-04-28 DOI: 10.1117/1.JMI.12.S2.S22006
Astrid Van Camp, Henry C Woodruff, Lesley Cockmartin, Marc Lobbes, Michael Majer, Corinne Balleyguier, Nicholas W Marshall, Hilde Bosmans, Philippe Lambin
Purpose: Predictive models for contrast-enhanced mammography often perform better at detecting and classifying enhancing masses than (non-enhancing) microcalcification clusters. We aim to investigate whether incorporating synthetic data with simulated microcalcification clusters during training can enhance model performance.
Approach: Microcalcification clusters were simulated in low-energy images of lesion-free breasts from 782 patients, taking local texture features into account. Enhancement was simulated in the corresponding recombined images. A deep learning (DL) model for lesion detection and classification was trained with varying ratios of synthetic and real (850 patients) data. In addition, a handcrafted radiomics classifier was trained using delineations and class labels from real data, and predictions from both models were ensembled. Validation was performed on internal (212 patients) and external (279 patients) real datasets.
Results: The DL model trained exclusively with synthetic data detected over 60% of malignant lesions. Adding synthetic data to smaller real training sets improved detection sensitivity for malignant lesions but decreased precision. Performance plateaued at a detection sensitivity of 0.80. The ensembled DL and radiomics models performed worse than the standalone DL model, decreasing the area under the receiver operating characteristic curve from 0.75 to 0.60 on the external validation set, likely due to falsely detected suspicious regions of interest.
Conclusions: Synthetic data can enhance DL model performance, provided the model setup and data distribution are optimized. The ability to detect malignant lesions with no real data in the training set confirms the utility of synthetic data. Synthetic data can serve as a helpful tool, especially when real data are scarce, and are most effective when complementing real data.
Journal of Medical Imaging, vol. 12 Suppl 2, p. S22006. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12036226/pdf/
Citations: 0
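The entry above ensembles a DL detector with a radiomics classifier and reports the resulting change in AUC. As a minimal sketch of those two ingredients, the snippet below averages two models' malignancy probabilities and computes AUC via the Mann-Whitney pair-counting identity; the weight, labels, and probability values are invented for illustration and are not from the paper.

```python
import itertools

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in itertools.product(pos, neg))
    return wins / (len(pos) * len(neg))

def ensemble(p_dl, p_rad, w=0.5):
    """Weighted average of the two models' predicted probabilities."""
    return [w * a + (1 - w) * b for a, b in zip(p_dl, p_rad)]

# Toy illustration of the failure mode the authors describe: a radiomics
# model misled by false-positive regions can drag the ensemble below
# the standalone DL model.
labels = [1, 1, 1, 0, 0, 0]
p_dl   = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]   # standalone DL ranks perfectly
p_rad  = [0.3, 0.4, 0.2, 0.9, 0.8, 0.7]   # radiomics scores the FPs highly
print(auc(p_dl, labels))                   # 1.0
print(auc(ensemble(p_dl, p_rad), labels))  # lower than the standalone DL AUC
```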
Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-05-29 DOI: 10.1117/1.JMI.12.S2.S22010
Vincent Dong, Walter Mankowski, Telmo M Silva Filho, Anne Marie McCarthy, Despina Kontos, Andrew D A Maidment, Bruno Barufaldi
Purpose: Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability.
Approach: We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes.
Results: LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per-density-grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades.
Conclusions: Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. Our results motivate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
Journal of Medical Imaging, vol. 12 Suppl 2, p. S22010. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12120562/pdf/
Citations: 0
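RFE-SHAP, as described above, combines recursive feature elimination with SHAP-based importance. The sketch below shows only the RFE skeleton: features are repeatedly dropped in order of a pluggable importance score. For self-containment it uses univariate AUC distance from 0.5 as the importance (the actual method would use mean |SHAP value| from a fitted model), and the feature names and values are invented.

```python
def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney pair counting)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def rfe(features, labels, keep=2, importance=None):
    """Recursive feature elimination: drop the least important feature
    until `keep` remain. `importance` maps (values, labels) -> score;
    RFE-SHAP would plug in SHAP importances here instead."""
    importance = importance or (lambda v, y: abs(auc(v, y) - 0.5))
    remaining = dict(features)
    while len(remaining) > keep:
        worst = min(remaining, key=lambda k: importance(remaining[k], labels))
        del remaining[worst]
    return sorted(remaining)

# Hypothetical radiomic features over six cases (names are illustrative):
labels = [1, 1, 1, 0, 0, 0]
features = {
    "gray_level_uniformity": [5, 6, 7, 1, 2, 3],   # separates the classes
    "cluster_shade":         [9, 8, 7, 3, 2, 1],   # separates the classes
    "noise_feature":         [1, 9, 2, 8, 3, 7],   # uninformative
}
print(rfe(features, labels))   # the two informative features survive
```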
Assessing mammographic density change within individuals across screening rounds using deep learning-based software.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-08-13 DOI: 10.1117/1.JMI.12.S2.S22017
Jakob Olinder, Daniel Förnvik, Victor Dahlblom, Viktor Lu, Anna Åkesson, Kristin Johnson, Sophia Zackrisson
Purpose: The purposes are to evaluate the change in mammographic density within individuals across screening rounds using automatic density software, to evaluate whether a change in breast density is associated with a future breast cancer diagnosis, and to provide insight into breast density evolution.
Approach: Mammographic breast density was analyzed in women screened in Malmö, Sweden, between 2010 and 2015 who had undergone at least two consecutive screening rounds < 30 months apart. Volumetric and area-based densities were measured with deep learning-based software and fully automated software, respectively. The change in volumetric breast density percentage (VBD%) between two consecutive screening examinations was determined. Multiple linear regression was used to investigate the association between VBD% change in percentage points and future breast cancer, as well as the initial VBD%, adjusting for age group and the time between examinations. Examinations with potential positioning issues were removed in a sensitivity analysis.
Results: In the 26,056 included women, mean VBD% decreased from 10.7% [95% confidence interval (CI): 10.6 to 10.8] to 10.3% (95% CI: 10.2 to 10.3) between the two examinations (p < 0.001). The decline in VBD% was more pronounced in women with initially denser breasts (adjusted β = -0.10, p < 0.001) and less pronounced in women with a future breast cancer diagnosis (adjusted β = 0.16, p = 0.02).
Conclusions: The demonstrated density changes over time support the potential of using breast density change in risk assessment tools and provide insights for future risk-based screening.
Journal of Medical Imaging, vol. 12 Suppl 2, p. S22017. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350635/pdf/
Citations: 0
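The study above fits a multiple linear regression of VBD% change on initial density and future-cancer status (among other covariates). A minimal ordinary-least-squares sketch via the normal equations is below; the toy responses are constructed so the recovered coefficients match the paper's reported adjusted β values (-0.10 for initial density, 0.16 for future cancer), while the design matrix values themselves are invented.

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting. Each row of X
    already includes a leading 1 for the intercept."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for c in range(k):                       # forward elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, k))) / A[c][c]
    return beta

# Columns: intercept, initial VBD%, future-cancer indicator (toy data).
X = [[1, 8, 0], [1, 12, 0], [1, 20, 0], [1, 10, 1], [1, 15, 1], [1, 25, 1]]
# Responses built as 0.2 - 0.10*density + 0.16*cancer, echoing the
# direction and size of the paper's adjusted coefficients:
y = [-0.6, -1.0, -1.8, -0.64, -1.14, -2.14]
print([round(v, 6) for v in fit_ols(X, y)])  # [0.2, -0.1, 0.16]
```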
Fine-grained multiclass nuclei segmentation with molecular empowered all-in-SAM model.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-09-04 DOI: 10.1117/1.JMI.12.5.057501
Xueyuan Li, Can Cui, Ruining Deng, Yucheng Tang, Quan Liu, Tianyuan Yao, Shunxing Bao, Naweed Chowdhury, Haichun Yang, Yuankai Huo
Purpose: Recent developments in computational pathology have been driven by advances in vision foundation models (VFMs), particularly the Segment Anything Model (SAM). This model facilitates nuclei segmentation through two primary methods: prompt-based zero-shot segmentation and the use of cell-specific SAM models for direct segmentation. These approaches enable effective segmentation across a range of nuclei and cells. However, general VFMs often face challenges with fine-grained semantic segmentation, such as identifying specific nuclei subtypes or particular cells.
Approach: In this paper, we propose the molecular empowered all-in-SAM model to advance computational pathology by leveraging the capabilities of VFMs. The model incorporates a full-stack approach focusing on (1) annotation: engaging lay annotators through molecular empowered learning to reduce the need for detailed pixel-level annotations; (2) learning: adapting the SAM model to emphasize specific semantics, exploiting its strong generalizability via a SAM adapter; and (3) refinement: enhancing segmentation accuracy by integrating molecular-oriented corrective learning.
Results: Experimental results from both in-house and public datasets show that the all-in-SAM model significantly improves cell classification performance, even when faced with varying annotation quality.
Conclusions: Our approach not only reduces the workload for annotators but also extends the accessibility of precise biomedical image analysis to resource-limited settings, thereby advancing medical diagnostics and automating pathology image analysis.
Journal of Medical Imaging, vol. 12, no. 5, p. 057501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410749/pdf/
Citations: 0
Comprehensive mixed reality surgical navigation system for liver surgery.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-10-06 DOI: 10.1117/1.JMI.12.5.055001
Bowen Xiang, Jon S Heiselman, Michael I Miga
Purpose: Intraoperative liver deformation and the need to glance repeatedly between the operative field and a remote monitor undermine the precision and workflow of image-guided liver surgery. Existing mixed reality (MR) prototypes address only isolated aspects of this challenge and lack quantitative validation in deformable anatomy.
Approach: We introduce a fully self-contained MR navigation system for liver surgery that runs on an MR headset and bridges this clinical gap by (1) stabilizing holographic content with an external retro-reflective reference tool that defines a fixed world origin, (2) tracking instruments and surface points in real time with the headset's depth camera, and (3) compensating for soft-tissue deformation through a weighted ICP plus linearized iterative boundary reconstruction pipeline. A lightweight server-client architecture streams deformation-corrected 3D models to the headset and enables hands-free control via voice commands.
Results: Validation on a multistate liver-phantom protocol demonstrated that the reference tool reduced mean hologram drift from 4.0 ± 1.2 mm to 1.1 ± 0.3 mm and improved tracking accuracy from 3.6 ± 1.3 mm to 2.3 ± 0.8 mm. Across five simulated deformation states, nonrigid registration lowered surface target registration error from 7.4 ± 4.8 mm to 3.0 ± 2.7 mm (an average 57% error reduction), yielding sub-4 mm guidance accuracy.
Conclusions: By unifying stable MR visualization, tool tracking, and biomechanical deformation correction in a single headset, the proposed platform eliminates monitor-related context switching and restores spatial fidelity lost to liver motion. The device-agnostic framework is extendable to open approaches and potentially to laparoscopic workflows and other soft-tissue interventions, marking a significant step toward MR-enabled surgical navigation.
Journal of Medical Imaging, vol. 12, no. 5, p. 055001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499930/pdf/
Citations: 0
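The pipeline above builds on ICP, which alternates between matching points and solving a closed-form rigid alignment for the current matches. As a simplified sketch of that alignment step (2D rather than the paper's weighted 3D variant, with invented points), the 2D least-squares rotation follows directly from the cross/dot sums of the centered point pairs:

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping matched
    2D points src -> dst: the per-iteration core of ICP. In 2D, the
    optimal angle is atan2(sum of cross products, sum of dot products)
    of the centroid-centered pairs."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    s = [(x - csx, y - csy) for x, y in src]
    d = [(x - cdx, y - cdy) for x, y in dst]
    num = sum(sx * dy - sy * dx for (sx, sy), (dx, dy) in zip(s, d))
    den = sum(sx * dx + sy * dy for (sx, sy), (dx, dy) in zip(s, d))
    theta = math.atan2(num, den)
    c, si = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - si * csy)   # translation that maps the rotated
    ty = cdy - (si * csx + c * csy)   # source centroid onto the target's
    return theta, (tx, ty)

# Recover a known 30-degree rotation plus translation (5, -2):
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ang = math.radians(30)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 5,
        math.sin(ang) * x + math.cos(ang) * y - 2) for x, y in src]
theta, t = rigid_align_2d(src, dst)
print(round(math.degrees(theta), 6), [round(v, 6) for v in t])  # 30.0 [5.0, -2.0]
```

A full ICP loop would re-match nearest neighbors and repeat this solve until convergence; the paper additionally weights correspondences and corrects nonrigid deformation downstream.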
BigReg: an efficient registration pipeline for high-resolution X-ray and light-sheet fluorescence microscopy.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-10-06 DOI: 10.1117/1.JMI.12.5.054004
Siyuan Mei, Fuxin Fan, Mareike Thies, Mingxuan Gu, Fabian Wagner, Oliver Aust, Ina Erceg, Zeynab Mirzaei, Georgiana Neag, Yipeng Sun, Yixing Huang, Andreas Maier
Purpose: We aim to propose a reliable registration pipeline tailored for multimodal mouse bone imaging using X-ray microscopy (XRM) and light-sheet fluorescence microscopy (LSFM). These imaging modalities have emerged as pivotal tools in preclinical research, particularly for studying bone remodeling diseases such as osteoporosis. Although multimodal registration enables micrometer-level structural correspondence and facilitates functional analysis, conventional landmark-, feature-, or intensity-based approaches are often infeasible due to inconsistent signal characteristics and significant misalignment resulting from independent scanning, especially in real-world and reference-free scenarios.
Approach: To address these challenges, we introduce BigReg, an automatic two-stage registration pipeline optimized for high-resolution XRM and LSFM volumes. The first stage extracts surface features and applies two successive global-to-local point-cloud-based methods for coarse alignment. The second stage refines this alignment in the 3D Fourier domain using a modified cross-correlation technique, achieving precise volumetric registration.
Results: Evaluations using expert-annotated landmarks and augmented test data demonstrate that BigReg approaches the accuracy of landmark-based registration, with a landmark distance (LMD) of 8.36 ± 0.12 μm and a landmark fitness (LM fitness) of 85.71% ± 1.02%. Moreover, BigReg can provide an optimal initialization for mutual information-based methods that otherwise fail independently, further reducing LMD to 7.24 ± 0.11 μm and increasing LM fitness to 93.90% ± 0.77%.
Conclusions: To the best of our knowledge, BigReg is the first automated method to successfully register XRM and LSFM volumes without requiring manual intervention or prior alignment cues. Its ability to accurately align fine-scale structures, such as lacunae in XRM and osteocytes in LSFM, opens new avenues for quantitative, multimodal analysis of bone microarchitecture and disease pathology, particularly in studies of osteoporosis.
Journal of Medical Imaging, vol. 12, no. 5, p. 054004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499931/pdf/
Citations: 0
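BigReg's refinement stage locates a cross-correlation peak between volumes. The idea can be sketched in 1D with an invented signal: slide one signal over the other and keep the shift with the highest correlation (BigReg does the analogous search in the 3D Fourier domain, where correlation over all shifts is computed at once via FFTs; here it is brute force for clarity).

```python
def best_shift(ref, moving, max_shift):
    """Return the integer shift maximizing the cross-correlation of
    `moving` against `ref`, treating out-of-range samples as zero.
    A shift of s means moving[i - s] is compared against ref[i]."""
    def score(shift):
        return sum(ref[i] * moving[i - shift]
                   for i in range(len(ref))
                   if 0 <= i - shift < len(moving))
    return max(range(-max_shift, max_shift + 1), key=score)

# A bump centered at index 4 in the reference sits at index 8 in the
# moving signal, so the moving signal must shift left by 4 (shift -4):
ref    = [0, 0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
moving = [0, 0, 0, 0, 0, 0, 0, 1, 2, 1, 0]
print(best_shift(ref, moving, 5))   # -4
```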
TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-05-28 DOI: 10.1117/1.JMI.12.5.051805
Kazi Ramisa Rifa, Md Atik Ahamed, Jie Zhang, Abdullah Imran
Purpose: The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive, and existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores that do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets.
Approach: We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose CT perceptual IQA data to ensure task-specific adaptability.
Results: Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of roughly 30 CT image slices per second.
Conclusions: The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
Journal of Medical Imaging, vol. 12, no. 5, p. 051805. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116730/pdf/
Citations: 0
Full-head segmentation of MRI with abnormal brain anatomy: model and data release.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054001
Andrew M Birnbaum, Adam Buchwald, Peter Turkeltaub, Adam Jacks, George Carr, Shreya Kannan, Yu Huang, Abhisheck Datta, Lucas C Parra, Lukas A Hirsch
Purpose: Our goal was to develop a deep network for whole-head segmentation, including clinical magnetic resonance imaging (MRI) with abnormal anatomy, and to compile the first public benchmark dataset for this purpose. We collected 98 MRIs with volumetric segmentation labels for a diverse set of human subjects, including normal and abnormal anatomy in clinical cases of stroke and disorders of consciousness.
Approach: Training labels were generated by manually correcting initial automated segmentations for skin/scalp, skull, cerebrospinal fluid, gray matter, white matter, air cavity, and extracephalic air. We developed a "MultiAxial" network consisting of three 2D U-Nets that operate independently in the sagittal, axial, and coronal planes and are then combined to produce a single 3D segmentation.
Results: The MultiAxial network achieved a test-set Dice score of 0.88 ± 0.04 (median ± interquartile range) on whole-head segmentation, including gray and white matter, compared with 0.86 ± 0.04 for Multipriors and 0.79 ± 0.10 for SPM12, two standard tools currently available for this task. The MultiAxial network gains robustness by avoiding the need for coregistration with an atlas. It performed well in regions with abnormal anatomy and on images that had been de-identified, and it enables more accurate and robust current flow modeling when incorporated into ROAST, a widely used modeling toolbox for transcranial electric stimulation.
Conclusions: We are releasing a new state-of-the-art tool for whole-head MRI segmentation in abnormal anatomy, along with the largest volume of labeled clinical head MRIs, including labels for nonbrain structures. Together, the model and data may serve as a benchmark for future efforts.
Journal of Medical Imaging, vol. 12, no. 5, p. 054001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12442731/pdf/
Citations: 0
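The entry above combines three per-plane 2D U-Net outputs into one 3D segmentation and evaluates with Dice. A minimal fusion sketch: average the three planes' class probabilities per voxel, take the argmax, and score against a reference with Dice. The exact MultiAxial combination rule is not specified here, so averaging is an assumption, and the voxel data are invented.

```python
def fuse_planes(prob_maps):
    """Combine per-plane class probabilities (sagittal, axial, coronal)
    by voxel-wise averaging, then take the argmax class per voxel.
    Each map is {voxel_index: [p_class0, p_class1, ...]}."""
    fused = {}
    for v in prob_maps[0]:
        avg = [sum(m[v][c] for m in prob_maps) / len(prob_maps)
               for c in range(len(prob_maps[0][v]))]
        fused[v] = max(range(len(avg)), key=avg.__getitem__)
    return fused

def dice(a, b, cls):
    """Dice overlap for one class between two label maps."""
    A = {v for v, c in a.items() if c == cls}
    B = {v for v, c in b.items() if c == cls}
    return 2 * len(A & B) / (len(A) + len(B))

# Three hypothetical plane predictions over four voxels, two classes;
# voxel 1 disagrees across planes and is settled by the average:
sag = {0: [0.9, 0.1], 1: [0.4, 0.6], 2: [0.2, 0.8], 3: [0.7, 0.3]}
axi = {0: [0.8, 0.2], 1: [0.6, 0.4], 2: [0.1, 0.9], 3: [0.6, 0.4]}
cor = {0: [0.7, 0.3], 1: [0.3, 0.7], 2: [0.3, 0.7], 3: [0.8, 0.2]}
pred = fuse_planes([sag, axi, cor])
truth = {0: 0, 1: 1, 2: 1, 3: 0}
print(pred, dice(pred, truth, 1))
```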
Segmentation variability and radiomics stability for predicting triple-negative breast cancer subtype using magnetic resonance imaging.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-09-17 DOI: 10.1117/1.JMI.12.5.054501
Isabella Cama, Alejandro Guzmán, Cristina Campi, Michele Piana, Karim Lekadir, Sara Garbarino, Oliver Díaz
Purpose: Many studies caution against using radiomic features that are sensitive to contouring variability in predictive models for disease stratification. Consequently, metrics such as the intraclass correlation coefficient (ICC) are recommended to guide feature selection based on stability. However, the direct impact of segmentation variability on the performance of predictive models remains underexplored. We examine how segmentation variability affects both feature stability and predictive performance in radiomics-based classification of triple-negative breast cancer (TNBC) using breast magnetic resonance imaging.
Approach: We analyzed 244 images from the Duke dataset, introducing segmentation variability through controlled modifications of manual segmentations. For each segmentation mask, explainable radiomic features were selected using Shapley Additive exPlanations (SHAP) and used to train logistic regression models. Feature stability across segmentations was assessed via ICC, Pearson's correlation, and reliability scores quantifying the relationship between segmentation variability and feature robustness.
Results: Model performance in predicting TNBC does not differ significantly across segmentations. The most explicative and predictive features exhibit decreasing ICC as segmentation accuracy decreases, yet their predictive power remains intact because the low ICC is accompanied by high Pearson's correlation. No shared numerical relationship is found between feature stability and segmentation variability among the most predictive features.
Conclusions: Moderate segmentation variability has a limited impact on model performance. Although incorporating peritumoral information may reduce feature reproducibility, it does not compromise predictive utility. Notably, feature stability is not a strict prerequisite for predictive relevance, highlighting that exclusive reliance on ICC or stability metrics for feature selection may inadvertently discard informative features.
Journal of Medical Imaging, vol. 12, no. 5, p. 054501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443385/pdf/
Citations: 0
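The key observation above, low ICC alongside high Pearson's correlation, is easy to reproduce numerically: a constant offset between feature measurements leaves the ranking (and hence predictive power) untouched while depressing agreement-based ICC. The sketch below computes Pearson's r and a one-way ICC(1,1) for two measurements per subject, with invented feature values.

```python
def pearson(x, y):
    """Pearson correlation coefficient: invariant to shift and scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def icc_oneway(x, y):
    """ICC(1,1) for two measurements per subject, one-way random model:
    (MSB - MSW) / (MSB + MSW). Unlike Pearson's r, it penalizes
    systematic offsets between the two measurement conditions."""
    n = len(x)
    grand = (sum(x) + sum(y)) / (2 * n)
    msb = 2 * sum(((a + b) / 2 - grand) ** 2 for a, b in zip(x, y)) / (n - 1)
    msw = sum((a - b) ** 2 / 2 for a, b in zip(x, y)) / n
    return (msb - msw) / (msb + msw)

# A feature re-measured on perturbed segmentations: perfectly correlated
# but shifted by a constant, so Pearson stays 1 while ICC drops.
orig      = [1.0, 2.0, 3.0, 4.0, 5.0]
perturbed = [3.0, 4.0, 5.0, 6.0, 7.0]
print(pearson(orig, perturbed), icc_oneway(orig, perturbed))
```

Since a rank-preserving shift changes neither a logistic regression's ordering of cases nor its AUC, such a feature remains predictive despite its poor ICC, which is the paper's point about stability metrics discarding informative features.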
Gamification for emergency radiology education and image perception: stab the diagnosis.
IF 1.7
Journal of Medical Imaging Pub Date: 2025-09-01 Epub Date: 2025-09-24 DOI: 10.1117/1.JMI.12.5.051808
William F Auffermann, Nathan Barber, Ryan Stockard, Soham Banerjee
Purpose: Gamification can be a helpful adjunct to education and is increasingly used in radiology. We aim to determine whether using a gamified framework to teach medical trainees about emergency radiology can improve perceptual and interpretive skills and facilitate learning.
Approach: We obtained approval from the Institutional Review Board, and participation was voluntary. Participants received training at the RadSimPE radiology workstation simulator and were shown three sets of computed tomography images related to emergency radiology diagnoses. Participants were asked to state their certainty that an abnormality was not present, localize it if present, and give their confidence in localization. Between case sets 1 and 2, the experimental group was provided with gamified emergency radiology training using the Stab the Diagnosis program, whereas the control group was not. Following the session, participants completed an eight-question survey to assess their thoughts about the training.
Results: A total of 36 medical trainees participated. Both the experimental and control groups improved in localization accuracy, but the experimental group's localization confidence was significantly greater than the control group's (p = 0.0364). Survey results were generally positive and statistically significantly greater than the neutral value of 3, with p-values < 0.05 for all eight questions. For example, survey results indicated that participants felt the training was a helpful educational experience (p < 0.001) and that the session was more effective for learning than traditional educational techniques (p = 0.001).
Conclusions: Gamification may be a valuable adjunct to conventional methods in radiology education and may improve trainee confidence.
Journal of Medical Imaging, vol. 12, no. 5, p. 051808. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12458100/pdf/
Citations: 0