Journal of Medical Imaging: Latest Articles

Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-05-14 DOI: 10.1117/1.JMI.12.S2.S22009
Sabiq Muhtadi, Caterina M Gallippi
Purpose: We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches.

Approach: An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks.

Results: The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level.

Conclusions: Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.

Journal of Medical Imaging 12(Suppl 2): S22009. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12077846/pdf/
Citations: 0
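The Nakagami parametric images used above are derived from the ultrasound echo envelope; the shape parameter m is commonly estimated from envelope moments in a sliding window. The sketch below shows the standard inverse-normalized-variance estimator and a simple windowed parametric map; it illustrates the general technique, not the authors' exact pipeline (window size and estimator variant are assumptions).

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based estimate of the Nakagami shape parameter m from
    echo-envelope samples R:  m = E[R^2]^2 / Var(R^2).
    m = 1 corresponds to Rayleigh statistics (fully developed speckle);
    larger m indicates more ordered scattering."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

def nakagami_map(envelope_img, win=7):
    """Slide a win x win window over an envelope image and estimate m
    locally, producing a parametric map like those fed to the network."""
    out = np.zeros((envelope_img.shape[0] - win + 1,
                    envelope_img.shape[1] - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = nakagami_m(envelope_img[i:i + win, j:j + win])
    return out

# A Rayleigh-distributed envelope should give m close to 1.
rng = np.random.default_rng(0)
env = rng.rayleigh(scale=1.0, size=100_000)
print(nakagami_m(env))
```

For Rayleigh samples, R^2 is exponential, so E[R^2]^2 / Var(R^2) = 1 exactly in expectation, which makes this a quick sanity check for the estimator.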
Semi-supervised semantic segmentation of cell nuclei with diffusion model and collaborative learning.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.6.061403
Zhuchen Shao, Sourya Sengupta, Mark A Anastasio, Hua Li
Purpose: Automated segmentation and classification of cell nuclei in microscopic images is crucial for disease diagnosis and tissue microenvironment analysis. Given the difficulties in acquiring large labeled datasets for supervised learning, semi-supervised methods offer alternatives by utilizing unlabeled data alongside labeled data. Effective semi-supervised methods that address the challenges of extremely limited labeled data, or of diverse datasets with varying numbers and types of annotations, remain under-explored.

Approach: Unlike other semi-supervised learning methods that iteratively use labeled and unlabeled data for model training, we introduce a semi-supervised framework that combines a latent diffusion model (LDM) with a transformer-based decoder, allowing independent usage of unlabeled data to optimize its contribution to model training. The model is trained with a sequential strategy. The LDM is trained in an unsupervised manner on diverse datasets, independent of cell nuclei types, thereby expanding the training data and enhancing training performance. The pre-trained LDM then serves as a powerful feature extractor to support the transformer-based decoder's supervised training on limited labeled data and improve final segmentation performance. In addition, the paper explores a collaborative learning strategy to enhance segmentation performance on out-of-distribution (OOD) data.

Results: Extensive experiments conducted on four diverse datasets demonstrated that the proposed framework significantly outperformed other semi-supervised and supervised methods for both in-distribution and OOD cases. Through collaborative learning with supervised methods, diffusion model and transformer decoder-based segmentation (DTSeg) achieved consistent performance across varying cell types and different amounts of labeled data.

Conclusions: The proposed DTSeg framework addresses cell nuclei segmentation under limited labeled data by integrating unsupervised LDM training on diverse unlabeled datasets. Collaborative learning proved effective in enhancing the generalization capability of DTSeg, achieving superior results across diverse datasets and cases. Furthermore, the method supports multi-channel inputs and demonstrates strong generalization to both in-distribution and OOD scenarios.

Journal of Medical Imaging 12(6): 061403. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11924957/pdf/
Citations: 0
Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-03-11 DOI: 10.1117/1.JMI.12.S2.S22002
Grey Kuling, Jennifer D Brooks, Belinda Curpen, Ellen Warner, Anne L Martel
Purpose: Breast density (BD) and background parenchymal enhancement (BPE) are important imaging biomarkers for breast cancer (BC) risk. We aim to evaluate longitudinal changes in quantitative BD and BPE in high-risk women undergoing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), focusing on the effects of age and the transition into menopause.

Approach: A retrospective cohort study analyzed 834 high-risk women undergoing breast DCE-MRI for screening between 2005 and 2020. Quantitative BD and BPE were derived using deep-learning segmentation. Linear mixed-effects models assessed longitudinal changes and the effects of age, menopausal status, weeks since the last menstrual period (LMP-wks), body mass index (BMI), and hormone replacement therapy (HRT) on these imaging biomarkers.

Results: BD decreased with age across all menopausal stages, whereas BPE declined with age in postmenopausal women but remained stable in premenopausal women. HRT elevated BPE in postmenopausal women. Perimenopausal women exhibited decreases in both BD and BPE during the menopausal transition, though cross-sectional age at menopause had no significant effect on either measure. Fibroglandular tissue was positively associated with BPE in perimenopausal women.

Conclusions: These findings highlight the dynamic impact of menopause on BD and BPE and correlate well with the known relationship between risk and age at menopause. They advance the understanding of imaging biomarkers in high-risk populations and may contribute to improved risk assessment, leading to personalized chemoprevention and BC screening recommendations.

Journal of Medical Imaging 12(Suppl 2): S22002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894108/pdf/
Citations: 0
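Linear mixed-effects models of the kind used above handle repeated scans of the same woman via a per-patient random intercept. The sketch below fits such a model on simulated data with statsmodels; the column names (bpe, age, menopause, patient) and effect sizes are illustrative assumptions, not values from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a longitudinal cohort: 50 patients, 4 annual scans each.
# True model (assumed for illustration): BPE declines with age and
# drops after menopause, with a random per-patient baseline.
rng = np.random.default_rng(1)
n_pat, n_visits = 50, 4
patient = np.repeat(np.arange(n_pat), n_visits)
age = rng.uniform(35, 65, n_pat)[patient] + np.tile(np.arange(n_visits), n_pat)
menopause = (age > 51).astype(int)
bpe = (30 - 0.2 * age - 3 * menopause
       + rng.normal(0, 2, n_pat)[patient]        # random patient intercept
       + rng.normal(0, 2, n_pat * n_visits))     # visit-level noise

df = pd.DataFrame(dict(patient=patient, age=age, menopause=menopause, bpe=bpe))

# Random intercept per patient accounts for correlation between the
# repeated measurements of the same woman.
model = smf.mixedlm("bpe ~ age + menopause", df, groups=df["patient"])
result = model.fit()
print(result.params["age"])   # fixed-effect slope of age on BPE
```

The fitted fixed-effect coefficients play the role of the age and menopausal-status effects reported in the abstract, while the group variance captures between-patient heterogeneity.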
Sureness of classification of breast cancers as pure ductal carcinoma in situ or with invasive components on dynamic contrast-enhanced magnetic resonance imaging: application of likelihood assurance metrics for computer-aided diagnosis.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-06-18 DOI: 10.1117/1.JMI.12.S2.S22012
Heather M Whitney, Karen Drukker, Alexandra Edwards, Maryellen L Giger
Purpose: Breast cancer may persist within milk ducts (ductal carcinoma in situ, DCIS) or advance into surrounding breast tissue (invasive ductal carcinoma, IDC). Occasionally, invasiveness may be underestimated during biopsy, leading to adjustments in the treatment plan based on unexpected surgical findings. Artificial intelligence/computer-aided diagnosis (AI/CADx) techniques in medical imaging may have the potential to predict whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, serving as a valuable supplement to biopsy findings. To enhance the evaluation of AI/CADx performance, assessing variability on a lesion-by-lesion basis via likelihood assurance measures could add value.

Approach: We evaluated performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging, applying 0.632+ bootstrapping methods (2000 folds) to 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS). Lesion-based likelihood assurance was measured using a sureness metric based on the 95% confidence interval of the classifier output for each lesion.

Results: The median and 95% CI of the 0.632+-corrected area under the receiver operating characteristic curve for the task of classifying lesions as pure DCIS or mixed IDC/DCIS were 0.81 [0.75, 0.86]. The sureness metric varied across the dataset with a range of 0.0002 (low sureness) to 0.96 (high sureness), with combinations of high and low classifier output and high and low sureness for some lesions.

Conclusions: Sureness metrics can provide additional insights into the ability of CADx algorithms to pre-operatively predict whether a lesion is invasive.

Journal of Medical Imaging 12(Suppl 2): S22012. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12175085/pdf/
Citations: 0
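The sureness idea above derives a per-lesion stability score from the spread of the classifier output across bootstrap folds. A minimal sketch, assuming sureness is defined as one minus the width of the central 95% interval of the fold outputs (the paper bases it on the 95% CI; the exact functional form here is an assumption):

```python
import numpy as np

def sureness(fold_outputs, level=95.0):
    """Per-lesion sureness from classifier outputs (in [0, 1]) collected
    across bootstrap folds: 1 minus the width of the central `level`%
    interval.  A narrow interval means the output is stable across
    resampled training sets, i.e., high sureness."""
    half_tail = (100.0 - level) / 2.0
    lo, hi = np.percentile(fold_outputs, [half_tail, 100.0 - half_tail])
    return 1.0 - (hi - lo)

rng = np.random.default_rng(0)
# A lesion scored consistently across folds vs. one whose score
# depends heavily on which cases landed in the training resample.
stable   = np.clip(rng.normal(0.90, 0.01, 2000), 0.0, 1.0)
unstable = rng.uniform(0.0, 1.0, 2000)
print(sureness(stable), sureness(unstable))
```

Note that sureness is orthogonal to the score itself: a lesion can have a high malignancy output with low sureness, which is exactly the combination the abstract flags as informative for case triage.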
Enhancing breast cancer detection on screening mammogram using self-supervised learning and a hybrid deep model of Swin Transformer and convolutional neural networks.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-05-14 DOI: 10.1117/1.JMI.12.S2.S22007
Han Chen, Anne L Martel
Purpose: The scarcity of high-quality curated labeled medical training data remains one of the major limitations in applying artificial intelligence systems to breast cancer diagnosis. Deep models for mammogram analysis and mass (or micro-calcification) detection require training with a large volume of labeled images, which are often expensive and time-consuming to collect. To reduce this challenge, we propose a method that leverages self-supervised learning (SSL) and a deep hybrid model, named HybMNet, which combines local self-attention and fine-grained feature extraction to enhance breast cancer detection on screening mammograms.

Approach: Our method employs a two-stage learning process. (1) SSL pretraining: we utilize Efficient Self-Supervised Vision Transformers, an SSL technique, to pretrain a Swin Transformer (Swin-T) using a limited set of mammograms. The pretrained Swin-T then serves as the backbone for the downstream task. (2) Downstream training: the proposed HybMNet combines the Swin-T backbone with a convolutional neural network (CNN)-based network and a fusion strategy. The Swin-T employs local self-attention to identify informative patch regions from the high-resolution mammogram, whereas the CNN-based network extracts fine-grained local features from the selected patches. A fusion module then integrates global and local information from both networks to generate robust predictions. The HybMNet is trained end-to-end, with the loss function combining the outputs of the Swin-T and CNN modules to optimize feature extraction and classification performance.

Results: The proposed method was evaluated for its ability to detect breast cancer by distinguishing between benign (normal) and malignant mammograms. Leveraging SSL pretraining and the HybMNet model, it achieved an area under the ROC curve of 0.864 (95% CI: 0.852, 0.875) on the Chinese Mammogram Database (CMMD) dataset and 0.889 (95% CI: 0.875, 0.903) on the INbreast dataset, highlighting its effectiveness.

Conclusions: The quantitative results highlight the effectiveness of the proposed HybMNet and the SSL pretraining approach. In addition, visualizations of the selected region-of-interest patches show the model's potential for weakly supervised detection of microcalcifications, despite being trained using only image-level labels.

Journal of Medical Imaging 12(Suppl 2): S22007. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076021/pdf/
Citations: 0
Simulating dynamic tumor contrast enhancement in breast MRI using conditional generative adversarial networks.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-06-28 DOI: 10.1117/1.JMI.12.S2.S22014
Richard Osuala, Smriti Joshi, Apostolia Tsirikoglou, Lidia Garrucho, Walter H L Pinaya, Daniel M Lang, Julia A Schnabel, Oliver Diaz, Karim Lekadir
Purpose: Deep generative models and synthetic data generation have become essential for advancing computer-assisted diagnosis and treatment. We explore one emerging and particularly promising application of deep generative models: the generation of virtual contrast enhancement. This makes it possible to predict and simulate contrast enhancement in breast magnetic resonance imaging (MRI) without physical contrast agent injection, thereby unlocking lesion localization and categorization even in patient populations for whom the lengthy, costly, and invasive process of physical contrast agent injection is contraindicated.

Approach: We define a framework for desirable properties of synthetic data, which leads us to propose the scaled aggregate measure (SAMe), consisting of a balanced set of scaled complementary metrics for generative model training and convergence evaluation. We further adopt a conditional generative adversarial network to translate non-contrast-enhanced T1-weighted fat-saturated breast MRI slices to their dynamic contrast-enhanced (DCE) counterparts, thus learning to detect, localize, and adequately highlight breast cancer lesions. Next, we extend our approach to jointly generate multiple DCE-MRI time points, enabling the simulation of contrast enhancement across temporal DCE-MRI acquisitions. In addition, three-dimensional U-Net tumor segmentation models are implemented and trained on combinations of synthetic and real DCE-MRI data to investigate the effect of data augmentation with synthetic DCE-MRI volumes.

Results: Across four main sets of experiments, (i) the variation across single metrics demonstrated the value of SAMe, and (ii) the quality and potential of virtual contrast injection for tumor detection and localization were shown. (iii) Segmentation models augmented with synthetic DCE-MRI data were more robust in the presence of domain shifts between pre-contrast and DCE-MRI domains. (iv) The joint synthesis approach for multi-sequence DCE-MRI produced temporally coherent synthetic DCE-MRI sequences and indicated the generative model's capability of learning complex contrast enhancement patterns.

Conclusions: Virtual contrast injection can result in accurate synthetic DCE-MRI images, potentially enhancing breast cancer diagnosis and treatment protocols. We demonstrate that detecting, localizing, and segmenting tumors using synthetic DCE-MRI is feasible and promising, particularly for patients in whom contrast agent injection is risky or contraindicated. Jointly generating multiple subsequent DCE-MRI sequences can increase image quality and unlock clinical applications that assess tumor characteristics related to the response to contrast media injection as a pillar of personalized treatment planning.

Journal of Medical Imaging 12(Suppl 2): S22014. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205897/pdf/
Citations: 0
HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-03-12 DOI: 10.1117/1.JMI.12.6.061402
Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong
Purpose: Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends highly on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.

Approach: The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the "hidden class," and to remove them via iterative application of contrastive loss and label smoothing. Doing so yields only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).

Results: Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by ~17%, 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.

Conclusions: The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.

Journal of Medical Imaging 12(6): 061402. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11898109/pdf/
Citations: 0
Self-supervision enhances instance-based multiple instance learning methods in digital pathology: a benchmark study.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-06-03 DOI: 10.1117/1.JMI.12.6.061404
Ali Mammadov, Loïc Le Folgoc, Julien Adam, Anne Buronfosse, Gilles Hayem, Guillaume Hocquet, Pietro Gori
Purpose: Multiple instance learning (MIL) has emerged as the best solution for whole slide image (WSI) classification. It consists of dividing each slide into patches, which are treated as a bag of instances labeled with a global label. MIL includes two main approaches: instance-based and embedding-based. In the former, each patch is classified independently, and the patch scores are then aggregated to predict the bag label. In the latter, bag classification is performed after aggregating patch embeddings. Even though instance-based methods are naturally more interpretable, embedding-based MILs have usually been preferred in the past due to their robustness to poor feature extractors. Recently, the quality of feature embeddings has drastically increased using self-supervised learning (SSL). Nevertheless, many authors continue to endorse the superiority of embedding-based MIL.

Approach: We conduct 710 experiments across 4 datasets, comparing 10 MIL strategies, 6 self-supervised methods with 4 backbones, 4 foundation models, and various pathology-adapted techniques. Furthermore, we introduce 4 instance-based MIL methods never used before in the pathology domain.

Results: We show that with a good SSL feature extractor, simple instance-based MILs with very few parameters obtain similar or better performance than complex, state-of-the-art (SOTA) embedding-based MIL methods, setting new SOTA results on the BRACS and Camelyon16 datasets.

Conclusion: As simple instance-based MIL methods are naturally more interpretable and explainable to clinicians, our results suggest that more effort should be put into well-adapted SSL methods for WSI rather than into complex embedding-based MIL methods.

Journal of Medical Imaging 12(6): 061404. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134610/pdf/
Citations: 0
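The instance-based MIL pipeline described above scores each patch independently and pools the scores into a slide-level prediction. A minimal sketch of the pooling step with three common aggregation rules (the specific pooling variants benchmarked in the paper may differ; top-k mean is one standard choice, not necessarily theirs):

```python
import numpy as np

def aggregate(patch_scores, method="topk", k=10):
    """Instance-based MIL aggregation: each patch of a slide carries its
    own malignancy score; the bag (slide) score is a pooling of those.
    - max:  sensitive to a single high-scoring (possibly noisy) patch
    - mean: diluted when the lesion covers few of the patches
    - topk: mean of the k highest scores, a common compromise"""
    s = np.sort(np.asarray(patch_scores, dtype=float))[::-1]
    if method == "max":
        return s[0]
    if method == "mean":
        return s.mean()
    return s[:k].mean()

# A slide with a small tumor: 20 suspicious patches among 1000.
scores = np.concatenate([np.full(20, 0.9), np.full(980, 0.05)])
print(aggregate(scores, "mean"))   # diluted by normal tissue, ~0.07
print(aggregate(scores, "topk"))   # driven by the suspicious patches, ~0.9
```

Because the bag score is a direct function of per-patch scores, the patches that drove the prediction can be shown to the pathologist, which is the interpretability advantage the conclusion emphasizes.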
Asymmetric scatter kernel estimation neural network for digital breast tomosynthesis.
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.S2.S22008
Subong Hyun, Seoyoung Lee, Ilwong Choi, Choul Woo Shin, Seungryong Cho
Purpose: Various deep learning (DL) approaches have been developed for estimating scatter radiation in digital breast tomosynthesis (DBT). Existing DL methods generally employ an end-to-end training approach, overlooking the underlying physics of scatter formation. We propose a deep learning approach inspired by asymmetric scatter kernel superposition to estimate scatter in DBT.

Approach: We use the network to generate the scatter amplitude distribution as well as the scatter kernel width and asymmetric factor map. To account for variations in local breast thickness and shape in DBT projection data, we integrated the Euclidean distance map and projection angle information into the network design for estimating the asymmetric factor.

Results: Systematic experiments on numerical phantom data and physical experimental data demonstrated that the proposed approach outperforms UNet-based end-to-end scatter estimation and symmetric kernel-based approaches in terms of signal-to-noise ratio and structural similarity index measure of the resulting scatter-corrected images.

Conclusions: The proposed method achieves a significant advance in scatter estimation for DBT projections, allowing robust and reliable physics-informed scatter correction.

Journal of Medical Imaging 12(Suppl 2): S22008. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162176/pdf/
Citations: 0
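Scatter kernel superposition models the scatter field as the primary signal, weighted by a scatter amplitude, blurred by a spatially extended kernel. The sketch below shows only the classical symmetric, spatially invariant baseline that the paper improves upon; its novelty, per-pixel amplitude, width, and asymmetry maps predicted by a network, is not reproduced here, and the Gaussian kernel and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian scatter kernel (symmetric baseline)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def scatter_estimate(primary, amplitude=0.2, sigma=5.0, size=21):
    """Baseline kernel-superposition model: scatter = (amplitude-weighted
    primary projection) convolved with a scatter kernel.  The paper's
    network instead predicts spatially varying amplitude, kernel width,
    and an asymmetry factor per pixel; this fixed symmetric kernel is
    the classical starting point it generalizes."""
    k = gaussian_kernel(size, sigma)
    return fftconvolve(primary * amplitude, k, mode="same")

# Flat projection: away from the edges, scatter approaches
# amplitude * primary because the kernel is normalized.
proj = np.ones((64, 64))
scat = scatter_estimate(proj)
print(scat.shape, scat[32, 32])
```

Scatter correction then subtracts this estimate from the measured projection; the quality of the correction hinges on how well the kernel shape matches the true, generally asymmetric, scatter spread, which motivates the paper's learned asymmetric factor.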
Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?
IF 1.9
Journal of Medical Imaging Pub Date: 2025-11-01 Epub Date: 2025-06-19 DOI: 10.1117/1.JMI.12.6.061405
Oscar Ramos-Soto, Itzel Aranguren, Manuel Carrillo M, Diego Oliva, Sandra E Balderas-Mata
Purpose: We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms. We address the significant challenges preventing immediate clinical adoption of AI, specifically from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the steps necessary to ensure safe, effective, and ethically sound clinical implementation.

Approach: We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients.

Results: The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users.

Conclusions: Overcoming these barriers requires a collaborative, multidisciplinary approach to integrate AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.

Journal of Medical Imaging 12(6): 061405. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12177575/pdf/
Citations: 0