Medical Image Analysis: Latest Articles

CLASS-M: Adaptive stain separation-based contrastive learning with pseudo-labeling for histopathological image classification
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-08 · DOI: 10.1016/j.media.2025.103711
Bodong Zhang, Hamid Manoochehri, Man Minh Ho, Fahimeh Fooladgar, Yosep Chong, Beatrice S. Knudsen, Deepika Sirohi, Tolga Tasdizen
{"title":"CLASS-M: Adaptive stain separation-based contrastive learning with pseudo-labeling for histopathological image classification","authors":"Bodong Zhang ,&nbsp;Hamid Manoochehri ,&nbsp;Man Minh Ho ,&nbsp;Fahimeh Fooladgar ,&nbsp;Yosep Chong ,&nbsp;Beatrice S. Knudsen ,&nbsp;Deepika Sirohi ,&nbsp;Tolga Tasdizen","doi":"10.1016/j.media.2025.103711","DOIUrl":"10.1016/j.media.2025.103711","url":null,"abstract":"<div><div>Histopathological image classification is an important task in medical image analysis. Recent approaches generally rely on weakly supervised learning due to the ease of acquiring case-level labels from pathology reports. However, patch-level classification is preferable in applications where only a limited number of cases are available or when local prediction accuracy is critical. On the other hand, acquiring extensive datasets with localized labels for training is not feasible. In this paper, we propose a semi-supervised patch-level histopathological image classification model, named CLASS-M, that does not require extensively labeled datasets. CLASS-M is formed by two main parts: a contrastive learning module that uses separated Hematoxylin images and Eosin images generated through an adaptive stain separation process, and a module with pseudo-labels using MixUp. We compare our model with other state-of-the-art models on two clear cell renal cell carcinoma datasets. We demonstrate that our CLASS-M model has the best performance on both datasets. Our code is available at <span><span>github.com/BzhangURU/Paper_CLASS-M/tree/main</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103711"},"PeriodicalIF":10.7,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144622435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge-driven interpretative conditional diffusion model for contrast-free myocardial infarction enhancement synthesis
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-07 · DOI: 10.1016/j.media.2025.103701
Ronghui Qi, Min Tao, Chenchu Xu, Xiaohu Li, Siyuan Pan, Jie Chen, Shuo Li
{"title":"Knowledge-driven interpretative conditional diffusion model for contrast-free myocardial infarction enhancement synthesis","authors":"Ronghui Qi ,&nbsp;Min Tao ,&nbsp;Chenchu Xu ,&nbsp;Xiaohu Li ,&nbsp;Siyuan Pan ,&nbsp;Jie Chen ,&nbsp;Shuo Li","doi":"10.1016/j.media.2025.103701","DOIUrl":"10.1016/j.media.2025.103701","url":null,"abstract":"<div><div>Synthesis of myocardial infarction enhancement (MIE) images without contrast agents (CAs) has shown great potential to advance myocardial infarction (MI) diagnosis and treatment. It provides results comparable to late gadolinium enhancement (LGE) images, thereby reducing the risks associated with CAs and streamlining clinical workflows. The existing knowledge-and-data-driven approach has made progress in addressing the complex challenges of synthesizing MIE images (i.e., invisible myocardial scars and high inter-individual variability) but still has limitations in the interpretability of kinematic inference, morphological knowledge integration, and kinematic-morphological fusion, thereby reducing the transparency and reliability of the model and causing information loss during synthesis. In this paper, we proposed a knowledge-driven interpretative conditional diffusion model (K-ICDM), which learns kinematic and morphological information from non-enhanced cardiac MR images (CINE sequence and T1 sequence) guided by cardiac knowledge, enabling the synthesis of MIE images. Importantly, our K-ICDM introduces three key innovations that address these limitations, thereby providing interpretability and improving synthesis quality. (1) A novel cardiac causal intervention that generates counterfactual strain to intervene in the inference process from motion maps to abnormal myocardial information, thereby establishing an explicit relationship and providing the clear causal interpretability. (2) A knowledge-driven cognitive combination strategy that utilizes cardiac signal topology knowledge to analyze T1 signal variations, enabling the model to understand how to learn morphological features, thus providing interpretability for morphology capture. (3) An information-specific adaptive fusion strategy that integrates kinematic and morphological information into the conditioning input of the diffusion model based on their specific contributions and adaptively learns their interactions, thereby preserving more detailed information. Experiments on a broad MI dataset with 315 patients show that our K-ICDM achieves state-of-the-art performance in contrast-free MIE image synthesis, improving structural similarity index measure (SSIM) by at least 2.1% over recent methods. 
These results demonstrate that our method effectively overcomes the limitations of existing methods in capturing the complex relationship between myocardial motion and scar distribution and integrating of static and dynamic sequences, thus enabling the accurate synthesis of subtle scar boundaries.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103701"},"PeriodicalIF":10.7,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144587716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
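For orientation, a minimal sketch of one training step of a conditional denoising diffusion model, with the conditioning injected by channel concatenation. This is a generic DDPM step, not K-ICDM's causal intervention or adaptive fusion; `eps_model` is an assumed UNet-style denoiser taking the concatenated channels and the timestep.

```python
import torch
import torch.nn.functional as F

# Standard DDPM noise schedule (1000 steps, linear betas).
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def diffusion_step_loss(eps_model, x0, cond):
    """x0: target MIE images (B, 1, H, W); cond: fused kinematic and
    morphological conditioning maps (B, C, H, W)."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward process
    eps_hat = eps_model(torch.cat([x_t, cond], dim=1), t)   # conditioned denoiser
    return F.mse_loss(eps_hat, noise)
```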
Citations: 0
Vision-skeleton dual-modality framework for generalizable assessment of Parkinson's disease gait
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-07 · DOI: 10.1016/j.media.2025.103727
Weiping Liu, Xiaozhen Lin, Xinghong Chen, Yifang Liu, Zengxin Zhong, Rong Chen, Guannan Chen, Yu Lin
{"title":"Vision-skeleton dual-modality framework for generalizable assessment of Parkinson’s disease gait","authors":"Weiping Liu ,&nbsp;Xiaozhen Lin ,&nbsp;Xinghong Chen ,&nbsp;Yifang Liu ,&nbsp;Zengxin Zhong ,&nbsp;Rong Chen ,&nbsp;Guannan Chen ,&nbsp;Yu Lin","doi":"10.1016/j.media.2025.103727","DOIUrl":"10.1016/j.media.2025.103727","url":null,"abstract":"<div><div>Gait abnormalities in Parkinson’s disease (PD) can reflect the extent of dysfunction, and making their assessment crucial for the diagnosis and treatment of PD. Current video-based methods of PD gait assessment are limited to only focusing on skeleton motion information and are confined to evaluations from a single perspective. To overcome these limitations, we propose a novel vision-skeleton dual-modality framework, which integrates keypoints vision features with skeleton motion information to enable a more accurate and comprehensive assessment of PD gait. We firstly introduce the Keypoints Vision Transformer, a novel architecture designed to extract vision features of human keypoints. This model encompasses both the spatial locations and connectivity relationships of human keypoints. Subsequently, through the proposed temporal fusion encoder, we integrate the extracted skeleton motion with keypoints vision features to enhance the extraction of temporal motion features. In a video dataset of 241 PD participants recorded from the front, our proposed framework achieves an assessment accuracy of 78.05%, which demonstrates superior performance compared to other methods. To enhance the interpretability of our method, we also conduct a feature visualization analysis of the proposed dual-modality framework, which reveal the mechanisms of different body parts and dual-modality branch in PD gait assessment. Additionally, when applied to another video dataset recorded from a more general perspective, our method still achieves a commendable accuracy of 73.07%. This achievement demonstrates the robust generalization capability of the proposed model in PD gait assessment from cross-view, which offers a novel approach for realizing unrestricted PD gait assessment in home monitoring. The latest version of the code is available at <span><span>https://github.com/FJNU-LWP/PD-gait-VSDF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103727"},"PeriodicalIF":10.7,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144614076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical Vision Transformers for prostate biopsy grading: Towards bridging the generalization gap
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-07 · DOI: 10.1016/j.media.2025.103663
Clément Grisi, Kimmo Kartasalo, Martin Eklund, Lars Egevad, Jeroen van der Laak, Geert Litjens
{"title":"Hierarchical Vision Transformers for prostate biopsy grading: Towards bridging the generalization gap","authors":"Clément Grisi ,&nbsp;Kimmo Kartasalo ,&nbsp;Martin Eklund ,&nbsp;Lars Egevad ,&nbsp;Jeroen van der Laak ,&nbsp;Geert Litjens","doi":"10.1016/j.media.2025.103663","DOIUrl":"10.1016/j.media.2025.103663","url":null,"abstract":"<div><div>Practical deployment of Vision Transformers in computational pathology has largely been constrained by the sheer size of whole-slide images. Transformers faced a similar limitation when applied to long documents, and Hierarchical Transformers were introduced to circumvent it. This work explores the capabilities of Hierarchical Vision Transformers for prostate cancer grading in WSIs and presents a novel technique to combine attention scores smartly across hierarchical transformers. Our best-performing model matches state-of-the-art algorithms with a 0.916 quadratic kappa on the Prostate cANcer graDe Assessment (PANDA) test set. It exhibits superior generalization capacities when evaluated in more diverse clinical settings, achieving a quadratic kappa of 0.877, outperforming existing solutions. These results demonstrate our approach’s robustness and practical applicability, paving the way for its broader adoption in computational pathology and possibly other medical imaging tasks. Our code is publicly available at <span><span>https://github.com/computationalpathologygroup/hvit</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103663"},"PeriodicalIF":10.7,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144587577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DIOR-ViT: Differential ordinal learning Vision Transformer for cancer classification in pathology images
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-04 · DOI: 10.1016/j.media.2025.103708
Ju Cheon Lee, Keunho Byeon, Boram Song, Kyungeun Kim, Jin Tae Kwak
{"title":"DIOR-ViT: Differential ordinal learning Vision Transformer for cancer classification in pathology images","authors":"Ju Cheon Lee ,&nbsp;Keunho Byeon ,&nbsp;Boram Song ,&nbsp;Kyungeun Kim ,&nbsp;Jin Tae Kwak","doi":"10.1016/j.media.2025.103708","DOIUrl":"10.1016/j.media.2025.103708","url":null,"abstract":"<div><div>In computational pathology, cancer grading has been mainly studied as a categorical classification problem, which does not utilize the ordering nature of cancer grades such as the higher the grade is, the worse the cancer is. To incorporate the ordering relationship among cancer grades, we introduce a differential ordinal learning problem in which we define and learn the degree of difference in the categorical class labels between pairs of samples by using their differences in the feature space. To this end, we propose a transformer-based neural network that simultaneously conducts both categorical classification and differential ordinal classification for cancer grading. We also propose a tailored loss function for differential ordinal learning. Evaluating the proposed method on three different types of cancer datasets, we demonstrate that the adoption of differential ordinal learning can improve the accuracy and reliability of cancer grading, outperforming conventional cancer grading approaches. The proposed approach should be applicable to other diseases and problems as they involve ordinal relationship among class labels.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103708"},"PeriodicalIF":10.7,"publicationDate":"2025-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144587715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring Complexity-Calibrated morphological distribution for whole slide image classification and difficulty-grading
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-03 · DOI: 10.1016/j.media.2025.103707
Jiahui Yu, Xuna Wang, Weiming Fan, Yuping Guo, Junfen Fu, Yingke Xu
{"title":"Exploring Complexity-Calibrated morphological distribution for whole slide image classification and difficulty-grading","authors":"Jiahui Yu ,&nbsp;Xuna Wang ,&nbsp;Weiming Fan ,&nbsp;Yuping Guo ,&nbsp;Junfen Fu ,&nbsp;Yingke Xu","doi":"10.1016/j.media.2025.103707","DOIUrl":"10.1016/j.media.2025.103707","url":null,"abstract":"<div><div>Multiple Instance Learning (MIL) is essential for accurate pathological image classification under limited annotations. Global-local morphological modeling approaches have shown promise in whole slide image (WSI) analysis by aligning patches with spatial positions. However, these methods fail to differentiate samples by complexity during morphological distribution construction, treating all samples equally important for model training. This oversight disregards the impact of difficult-to-recognize samples, leading to a morphological fitting bottleneck that hinders the clinical application of deep learning across centers, subtypes, and imaging standards. To address this, we propose Complexity-Calibrated MIL (CoCaMIL) for WSI classification and difficulty grading. CoCaMIL emphasizes the synergistic effects between morphological distribution and key complexity factors, including blur, tumor size, coloring style, brightness, and stain. Specifically, we developed an image–text contrastive pretraining framework to jointly learn multiple complexity factors, enhancing morphological distribution fitting. Additionally, to reduce the tendency to focus on difficult samples overly, we introduce a complexity calibration method, which forms a distance-prioritized feature distribution by incorporating objective factors during training. CoCaMIL achieved top classification performance across three large benchmarks and established a reliable system for grading sample difficulty. To our knowledge, CoCaMIL is the first approach to construct WSI morphological representations based on the collaborative integration of complexity factors, offering a new perspective to broaden the clinical use of deep learning in digital pathology. The code is available at <span><span>https://github.com/sm8754/cocamil</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103707"},"PeriodicalIF":10.7,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144563211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Skeleton2Mask: Skeleton-supervised airway segmentation
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-02 · DOI: 10.1016/j.media.2025.103693
Mingyue Zhao, Han Li, Di Zhang, Jin Zhang, Xiuxiu Zhou, Li Fan, Xiaolan Qiu, Shiyuan Liu, S. Kevin Zhou
{"title":"Skeleton2Mask: Skeleton-supervised airway segmentation","authors":"Mingyue Zhao ,&nbsp;Han Li ,&nbsp;Di Zhang ,&nbsp;Jin Zhang ,&nbsp;Xiuxiu Zhou ,&nbsp;Li Fan ,&nbsp;Xiaolan Qiu ,&nbsp;Shiyuan Liu ,&nbsp;S. Kevin Zhou","doi":"10.1016/j.media.2025.103693","DOIUrl":"10.1016/j.media.2025.103693","url":null,"abstract":"<div><div>Airway segmentation has achieved considerable success. However, it still hinges on precise voxel-wise annotations, which are not only labor-intensive and time-consuming but also subject to challenges like missing branches, discontinuous branch labeling, and erroneous edge delineation. To tackle this, this paper introduces two novel contributions: a skeleton annotation (SKA) strategy for airway tree structures, and a sparse supervision learning approach — Skeleton2Mask, built upon SKA for dense airway prediction. The SKA strategy replaces traditional slice-by-slice, voxel-wise labeling with a branch-by-branch, control-point-based skeleton delineation. This approach not only enhances the preservation of topological integrity but also reduces annotation time by approximately 80%. Its effectiveness and reliability have been validated through <strong>clinical experiments</strong>, demonstrating its potential to streamline airway segmentation tasks. Nevertheless, the absolute sparsity of this annotation, along with the typical tree structure, can easily cause the failure of sparse supervision learning. To tackle this, we further propose Skeleton2Mask, a two-stage label propagation learning method, involving dual-stream buffer propagation and hierarchical geometry-aware learning, to ensure reliable and structure-friendly dense prediction. Experiments reveal that 1) Skeleton2Mask outperforms other sparsely supervised approaches on two public datasets by a large margin, achieving comparable results to full supervision with no more than 3% of airway annotations. 2) With the same annotation cost, our algorithm demonstrated significantly superior performance in both topological and voxel-wise metrics.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103693"},"PeriodicalIF":10.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144549703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unleashing the potential of open-set noisy samples against label noise for medical image classification
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-02 · DOI: 10.1016/j.media.2025.103702
Zehui Liao, Shishuai Hu, Yanning Zhang, Yong Xia
{"title":"Unleashing the potential of open-set noisy samples against label noise for medical image classification","authors":"Zehui Liao ,&nbsp;Shishuai Hu ,&nbsp;Yanning Zhang ,&nbsp;Yong Xia","doi":"10.1016/j.media.2025.103702","DOIUrl":"10.1016/j.media.2025.103702","url":null,"abstract":"<div><div>Addressing the coexistence of closed-set and open-set label noise in medical image classification remains a largely unexplored challenge. Unlike natural image classification, where noisy samples can often be clearly separated from clean ones, medical image classification is complicated by high inter-class similarity, which makes the identification of open-set noisy samples particularly difficult. Moreover, existing methods typically fail to fully exploit open-set noisy samples for label noise mitigation, either discarding them or assigning uniform soft labels, thus limiting their utility. To address these challenges, we propose the ENCOFA: the Extended Noise-robust Contrastive and Open-set Feature Augmentation framework for medical image classification. This framework introduces the Extended Noise-robust Supervised Contrastive Loss, which enhances feature discrimination across both in-distribution and out-of-distribution classes. By treating open-set noisy samples as an extended class and weighting contrastive pairs based on label reliability, this loss effectively improves the robustness to label noise. In addition, we develop the Open-set Feature Augmentation module, which enriches open-set samples at the feature level and dynamically assigns class labels, thereby leveraging model capacity while mitigating overfitting to noisy data. We evaluated the proposed framework on two synthetic noisy datasets and one real-world noisy dataset. The results demonstrate the superiority of ENCOFA over six state-of-the-art methods and highlight the effectiveness of explicitly leveraging open-set noisy samples in combating label noise. The code will be publicly available at <span><span>https://github.com/Merrical/ENCOFA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103702"},"PeriodicalIF":10.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144557406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A holistic approach for classifying dental conditions from textual reports and panoramic radiographs
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-02 · DOI: 10.1016/j.media.2025.103709
Bernardo Silva, Jefferson Fontinele, Carolina Letícia Zilli Vieira, João Manuel R.S. Tavares, Patricia Ramos Cury, Luciano Oliveira
{"title":"A holistic approach for classifying dental conditions from textual reports and panoramic radiographs","authors":"Bernardo Silva ,&nbsp;Jefferson Fontinele ,&nbsp;Carolina Letícia Zilli Vieira ,&nbsp;João Manuel R.S. Tavares ,&nbsp;Patricia Ramos Cury ,&nbsp;Luciano Oliveira","doi":"10.1016/j.media.2025.103709","DOIUrl":"10.1016/j.media.2025.103709","url":null,"abstract":"<div><div>Dental panoramic radiographs offer vast diagnostic opportunities, but the shortage of labeled data hampers the training of supervised deep-learning networks for the automatic analysis of these images. To address this issue, we introduce a holistic learning approach to classify dental conditions on panoramic radiographs, exploring tooth segmentation and textual reports, without a direct tooth-level annotated dataset. Large language models were used to identify the prevalent dental conditions in these reports, acting as an auto-labeling procedure. After an instance segmentation network segments the teeth, a linkage approach is in charge of matching each tooth with the corresponding condition found in the textual report. The proposed framework was validated using two of the most extensive datasets in the literature, specially gathered for this study, consisting of 8,795 panoramic radiographs and 8,029 paired reports and images. Encouragingly, the results consistently exceeded the baseline for the Matthews correlation coefficient. A comparative analysis against specialist and dental student ratings, supported by statistical evaluation, highlighted its effectiveness. Using specialist consensus as the ground truth, the system achieved precision comparable to final-year undergraduate students and was within 8.1 percentage points of specialist performance.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103709"},"PeriodicalIF":10.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144579827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Upper-body free-breathing Magnetic Resonance Fingerprinting applied to the quantification of water T1 and fat fraction
IF 10.7 · CAS Tier 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-07-01 · DOI: 10.1016/j.media.2025.103699
Constantin Slioussarenko, Marc Lapert, Pierre-Yves Baudin, Benjamin Marty
{"title":"Upper-body free-breathing Magnetic Resonance Fingerprinting applied to the quantification of water T1 and fat fraction","authors":"Constantin Slioussarenko ,&nbsp;Marc Lapert ,&nbsp;Pierre-Yves Baudin ,&nbsp;Benjamin Marty","doi":"10.1016/j.media.2025.103699","DOIUrl":"10.1016/j.media.2025.103699","url":null,"abstract":"<div><div>Over the past decade, Magnetic Resonance Fingerprinting (MRF) has emerged as an efficient paradigm for the rapid and simultaneous quantification of multiple MRI parameters, including fat fraction (FF), water T1 (<span><math><mrow><mi>T</mi><msub><mrow><mn>1</mn></mrow><mrow><mi>H</mi><mn>2</mn><mi>O</mi></mrow></msub></mrow></math></span>), water T2 (<span><math><mrow><mi>T</mi><msub><mrow><mn>2</mn></mrow><mrow><mi>H</mi><mn>2</mn><mi>O</mi></mrow></msub></mrow></math></span>), and fat T1 (<span><math><mrow><mi>T</mi><msub><mrow><mn>1</mn></mrow><mrow><mi>f</mi><mi>a</mi><mi>t</mi></mrow></msub></mrow></math></span>). These parameters serve as promising imaging biomarkers in various anatomical targets such as the heart, liver, and skeletal muscles. However, measuring these parameters in the upper body poses challenges due to physiological motion, particularly respiratory motion. In this work, we propose a novel approach, motion-corrected (MoCo) MRF T1-FF, which estimates the motion field using an optimized preliminary motion scan and uses it to correct the MRF acquisition data before dictionary search for reconstructing motion-corrected FF and <span><math><mrow><mi>T</mi><msub><mrow><mn>1</mn></mrow><mrow><mi>H</mi><mn>2</mn><mi>O</mi></mrow></msub></mrow></math></span> parametric maps of the upper-body region. We validated this framework using an <em>in vivo</em> dataset comprising 18 healthy volunteers (12 men, 6 women, mean age = 40 ± 14 years old) and a 3 subjects with different neuromuscular disorders. At the ROI level, in regions minimally affected by motion, no significant bias was observed between the uncorrected and MoCo reconstructions for FF (mean difference of -0.6%) and <span><math><mrow><mi>T</mi><msub><mrow><mn>1</mn></mrow><mrow><mi>H</mi><mn>2</mn><mi>O</mi></mrow></msub></mrow></math></span> (<span><math><mrow><mo>−</mo><mn>5</mn><mo>.</mo><mn>5</mn></mrow></math></span> ms) values. Moreover, MoCo MRF T1-FF significantly reduced the standard deviations of distributions assessed in these regions, indicating improved precision. Notably, in regions heavily affected by motion, such as respiratory muscles, liver, and kidneys, the MRF parametric maps exhibited a marked reduction in motion blurring and streaking artifacts after motion correction. Furthermore, the diaphragm was consistently discernible on parametric maps after motion correction. 
This approach lays the groundwork for the joint 3D quantification of FF and <span><math><mrow><mi>T</mi><msub><mrow><mn>1</mn></mrow><mrow><mi>H</mi><mn>2</mn><mi>O</mi></mrow></msub></mrow></math></span> in regions that are rarely studied, such as the respiratory muscles, particularly the intercostal muscles and diaphragm.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103699"},"PeriodicalIF":10.7,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144518412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
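A minimal sketch of the dictionary-search step common to MRF pipelines: a matched-filter search over simulated fingerprints, taking the maximum inner product after normalization. In MoCo MRF T1-FF the motion correction is applied to the acquisition data before this step; the dictionary simulation itself (Bloch or EPG modeling) is out of scope here, so a random toy dictionary stands in.

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, params):
    """signals: (V, T) measured fingerprints, one per voxel; dictionary:
    (D, T) simulated fingerprints; params: (D, 2) the (FF, T1_H2O) pair
    behind each dictionary atom. Returns the best-matching pair per voxel."""
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    best = np.argmax(np.abs(s @ d.T), axis=1)   # max normalized inner product
    return params[best]                          # (V, 2) parametric maps

dico = np.random.randn(5000, 175)                # toy dictionary, 175 frames
maps = mrf_dictionary_match(np.random.randn(1024, 175), dico,
                            np.random.rand(5000, 2))
```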
Citations: 0