Medical Image Analysis — Latest Articles

ViFT: Visual field transformer for visual field testing via deep reinforcement learning
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-17 | DOI: 10.1016/j.media.2025.103721
Shozo Saeki, Minoru Kawahara, Hirohisa Aman
Visual field testing (perimetry) quantifies a patient's visual field sensitivity to diagnose and follow up on visual impairments. The test requires patients to concentrate for a long time, but longer testing makes patients more exhausted and reduces testing accuracy. It is therefore helpful to develop a well-designed strategy that finishes the test more quickly while maintaining high accuracy. This paper proposes the visual field transformer (ViFT) for visual field testing with deep reinforcement learning. This study makes four contributions: (1) ViFT can fully control the visual field testing process. (2) ViFT learns the relationships among visual field locations without any pre-defined information. (3) The ViFT learning process can account for patient perception uncertainty. (4) ViFT achieves accuracy equal to or higher than other strategies in about half the test time. Our experiments demonstrate ViFT's efficiency on the 24-2 test pattern compared with other strategies.
(Medical image analysis, vol. 105, Article 103721)
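For context on what a learned testing strategy replaces: conventional perimetry adjusts the stimulus attenuation at each location with a staircase rule. The toy sketch below is our illustration of such a baseline, not the paper's RL agent; the deterministic `simulate_response` patient model is an assumption for demonstration.

```python
def simulate_response(stimulus_db, sensitivity_db):
    # Deterministic toy patient: the stimulus is seen iff its attenuation
    # (in dB) does not exceed the true sensitivity at this location.
    return stimulus_db <= sensitivity_db

def staircase_threshold(sensitivity_db, start_db=25.0, step_db=4.0, n_reversals=4):
    """Classic up-down staircase: dim the stimulus (raise dB) after 'seen',
    brighten it after 'not seen'; halve the step at each response reversal
    and stop after n_reversals. Returns (estimate, presentations)."""
    level, step = float(start_db), float(step_db)
    last_seen, reversals, presentations = None, 0, 0
    while reversals < n_reversals:
        seen = simulate_response(level, sensitivity_db)
        presentations += 1
        if last_seen is not None and seen != last_seen:
            reversals += 1
            step /= 2.0
        level = level + step if seen else level - step
        last_seen = seen
    return level, presentations
```

The presentation count of such fixed rules is what adaptive, fully learned strategies like ViFT aim to cut while keeping the estimate accurate.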
Citations: 0
Learning homeomorphic image registration via conformal-invariant hyperelastic regularisation
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-15 | DOI: 10.1016/j.media.2025.103712
Jing Zou , Noémie Debroux , Lihao Liu , Jing Qin , Carola-Bibiane Schönlieb , Angelica I. Aviles-Rivero
Deformable image registration is a fundamental task in medical image analysis and plays a crucial role in a wide range of clinical applications. Recently, deep learning-based approaches have been widely studied for deformable medical image registration and have achieved promising results. However, existing deep learning registration techniques do not theoretically guarantee topology-preserving transformations. This is a key property for preserving anatomical structures and achieving plausible transformations that can be used in real clinical settings. We propose a novel framework for deformable image registration. Firstly, we introduce a novel regulariser based on conformal-invariant properties in a nonlinear elasticity setting. Our regulariser enforces the deformation field to be smooth, invertible and orientation-preserving. More importantly, we strictly guarantee topology preservation, yielding a clinically meaningful registration. Secondly, we boost the performance of our regulariser through coordinate MLPs, where the to-be-registered images can be viewed as continuously differentiable entities. We demonstrate, through numerical and visual experiments, that our framework outperforms current techniques for image registration.
(Medical image analysis, vol. 105, Article 103712)
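Orientation preservation is commonly verified by checking that the Jacobian determinant of the transformation φ(x) = x + u(x) stays positive everywhere. A minimal numpy sketch of that check (our illustration, not the paper's regulariser):

```python
import numpy as np

def jacobian_determinant(disp):
    """disp: (H, W, 2) displacement field u over a 2D grid, components
    ordered as (axis-0, axis-1). The transformation is phi(x) = x + u(x);
    returns det(J_phi) per voxel via finite differences (np.gradient)."""
    d0_dy, d0_dx = np.gradient(disp[..., 0])  # derivatives of u_0
    d1_dy, d1_dx = np.gradient(disp[..., 1])  # derivatives of u_1
    # J = I + grad(u); a positive determinant means the map is locally
    # invertible and orientation-preserving at that voxel.
    return (1.0 + d0_dy) * (1.0 + d1_dx) - d0_dx * d1_dy
```

Counting voxels with non-positive determinant is a standard diagnostic for folding in learned deformation fields.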
Citations: 0
ROP lesion segmentation via sequence coding and block balancing
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-15 | DOI: 10.1016/j.media.2025.103723
Xiping Jia , Jianying Qiu , Dong Nie , Tian Liu
Retinopathy of prematurity (ROP) is a potentially blinding retinal disease that often affects low-birth-weight premature infants. Lesion detection and recognition are crucial for ROP diagnosis and clinical treatment. However, this task poses challenges for both ophthalmologists and computer-based systems due to the small size and subtle nature of many ROP lesions. To address these challenges, we present a Sequence encoding and Block balancing-based Segmentation Network (SeBSNet), which incorporates domain knowledge coding, sequence coding learning (SCL), and block-weighted balancing (BWB) techniques into the segmentation of ROP lesions. The experimental results demonstrate that SeBSNet outperforms existing state-of-the-art methods in the segmentation of ROP lesions, with average ROC_AUC, PR_AUC, and Dice scores of 98.84%, 71.90%, and 66.88%, respectively. Furthermore, integrating the proposed techniques into ROP classification networks as an enhancing module leads to considerable improvements in classification performance.
(Medical image analysis, vol. 105, Article 103723)
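The Dice score reported above is a standard overlap metric between a predicted and a reference lesion mask; a minimal sketch of how it is computed (an illustration of the metric, not of SeBSNet itself):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (arrays of 0/1 or bool):
    2|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```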
Citations: 0
DCrownFormer+: Morphology-aware mesh generation and refinement transformer for dental crown prosthesis from 3D scan data of preparation and antagonist teeth
IF 11.8 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-14 | DOI: 10.1016/j.media.2025.103717
Su Yang , Ji Yong Han , Sang-Heon Lim , Sujeong Kim , Jungro Lee , Keun-Suh Kim , Jun-Min Kim , Won-Jin Yi
Dental prostheses are important artificial replacements that restore the function and appearance of teeth. However, designing a patient-specific dental prosthesis is still labor-intensive and depends on dental professionals' knowledge of oral anatomy and their experience. The procedure is also time-consuming because the initial tooth template for designing dental crowns is not personalized. In this paper, we propose a novel end-to-end morphology-aware mesh generation and refinement transformer, DCrownFormer+, to directly and efficiently generate high-fidelity, realistic meshes for dental crowns from the mesh inputs of 3D scans of preparation and antagonist teeth. DCrownFormer+ captures local and global geometric features from mesh inputs using a geometric feature attention descriptor and the transformer encoder. We leverage a morphology-aware cross-attention module with a curvature-penalty Chamfer distance loss (CPL) to generate the points and normals of a dental crown from geometric features at the transformer decoder. A coarse indicator grid is then estimated directly from the generated points and normals using differentiable Poisson surface reconstruction. To further improve the fine details of the occlusal surfaces, we propose a learning-based refinement method, an implicit grid refinement network with a gradient-penalty mesh reconstruction loss (GPL), which generates high-fidelity, realistic dental crown meshes by refining the details of the coarse indicator grid. Our experimental results demonstrate that DCrownFormer+ is superior to other methods in improving the shape completeness, surface smoothness, and morphological details of occlusal surfaces, such as dental grooves and cusps. We further validate the effectiveness of key components and the significant benefits of CPL and GPL through ablation studies.
(Medical image analysis, vol. 105, Article 103717)
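The curvature-penalty loss (CPL) builds on the standard symmetric Chamfer distance between point sets; a plain, penalty-free version as a sketch (the curvature term of the paper is not reproduced here):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, D) and b (M, D):
    mean nearest-neighbour distance in each direction, summed. O(N*M) memory,
    fine for small illustrative point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```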
Citations: 0
Attention-enhanced Dual-stream Registration Network via Mixed Attention Transformer and Gated Adaptive Fusion
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-14 | DOI: 10.1016/j.media.2025.103713
Yuan Chang, Zheng Li
Deformable registration requires extracting salient features within each image and finding feature pairs with potential matches between the moving and fixed images, thereby estimating the deformation field used to align the images to be registered. With the development of deep learning, various deformable registration networks built on advanced architectures such as CNNs or Transformers have been proposed, showing excellent registration performance. However, existing works fail to effectively achieve both feature extraction within images and feature matching between images simultaneously. In this paper, we propose a novel Attention-enhanced Dual-stream Registration Network (ADRNet) for deformable brain MRI registration. First, we use parallel CNN modules to extract shallow features from the moving and fixed images separately. Then, we propose a Mixed Attention Transformer (MAT) module with self-attention, cross-attention, and local attention to model self-correlation and cross-correlation and find features for matching. Finally, we improve skip connections, a key component of U-shaped networks overlooked by existing methods: we propose a Gated Adaptive Fusion (GAF) module with a gate mechanism that uses decoding features to control the encoding features transmitted through skip connections, better integrating encoder-decoder features and obtaining matching features with more accurate one-to-one correspondence. Extensive and comprehensive experiments on three public brain MRI datasets demonstrate that our method achieves state-of-the-art registration performance.
(Medical image analysis, vol. 105, Article 103713)
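The self- and cross-attention in the MAT module are variants of scaled dot-product attention; a minimal numpy sketch (our illustration of the generic mechanism, not the exact MAT layer). For cross-attention, q comes from one image's features and k, v from the other's; for self-attention all three come from the same features:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention. Shapes: q (n, d), k (m, d), v (m, dv);
    returns (n, dv), each output row a convex combination of rows of v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```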
Citations: 0
Efficient one-shot federated learning on medical data using knowledge distillation with image synthesis and client model adaptation
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-14 | DOI: 10.1016/j.media.2025.103714
Myeongkyun Kang , Philip Chikontwe , Soopil Kim , Kyong Hwan Jin , Ehsan Adeli , Kilian M. Pohl , Sang Hyun Park
One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Though previous methods using knowledge distillation (KD) with synthetic images have shown promising results in transferring clients' knowledge to the global model in one-shot FL, overfitting and extensive computation still persist. To tackle these issues, we propose a novel one-shot FL framework that generates pseudo intermediate samples using mixup, incorporating synthesized images with diverse types of structure noise. This approach (i) enhances the diversity of training samples, preventing overfitting and providing informative visual clues for effective training, and (ii) allows for the reuse of synthesized images, reducing computational resources and improving overall training efficiency. To mitigate the domain disparity introduced by noise, we design noise-adapted client models by updating batch normalization statistics on noise to enhance KD. With these in place, the training process iteratively updates the global model through KD with both the original and noise-adapted client models using the pseudo-generated images. Extensive evaluations on five small-sized and three regular-sized medical image classification datasets demonstrate the superiority of our approach over previous methods.
(Medical image analysis, vol. 105, Article 103714)
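The pseudo intermediate samples are built with mixup; a minimal sketch of the standard mixup operation (Zhang et al., 2018). Passing a noise image as the second input is our reading of how it would be applied here, not a detail confirmed by the abstract:

```python
import numpy as np

def mixup(x1, x2, y1, y2, lam):
    """Standard mixup: convex combination of two inputs and their soft
    (e.g. one-hot) labels with coefficient lam in [0, 1]."""
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```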
Citations: 0
Uncertainty quantification for White Matter Hyperintensity segmentation detects silent failures and improves automated Fazekas quantification
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-12 | DOI: 10.1016/j.media.2025.103697
Ben Philps , Maria del C. Valdés Hernández , Chen Qin , Una Clancy , Eleni Sakka , Susana Muñoz Maniega , Mark E. Bastin , Angela C.C. Jochems , Joanna M. Wardlaw , Miguel O. Bernabeu , Alzheimer’s Disease Neuroimaging Initiative (ADNI)
White matter hyperintensities (WMH) are key neuroradiological markers of small vessel disease visible in brain MRI. Assessment of WMH is important in research and clinics. However, WMH are challenging to segment due to their high variability in shape, location, and size, their poorly defined borders, and intensity profiles similar to other pathologies (e.g. stroke lesions) and artefacts (e.g. head motion). In this work, we assess the utility and semantic properties of the most effective techniques for uncertainty quantification (UQ) in WMH segmentation across multiple test-time data distributions. We find that UQ techniques reduce 'silent failure' by identifying, in UQ maps, small WMH clusters in the deep white matter that the model leaves unsegmented. A combination of stochastic segmentation networks with deep ensembles also yields the highest Dice and lowest absolute volume difference percentage (AVD) score, and can highlight areas where there is ambiguity between WMH and stroke lesions. We further demonstrate the downstream utility of UQ, proposing a novel method for classifying the clinical Fazekas score using spatial features extracted from voxelwise WMH probability and UQ maps. Incorporating WMH uncertainty information improves Fazekas classification performance and calibration. Our model with (UQ and spatial WMH features)/(spatial WMH features)/(WMH volume only) achieves a balanced accuracy of 0.74/0.67/0.62 and a root Brier score (lower is better) of 0.65/0.72/0.74 for deep WMH, and a balanced accuracy of 0.74/0.73/0.71 and a root Brier score of 0.64/0.66/0.68 in the periventricular region. We further demonstrate that stochastic UQ techniques with high sample diversity can improve the detection of poor-quality segmentations.
(Medical image analysis, vol. 105, Article 103697)
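A common voxelwise UQ map for a binary task is the predictive entropy of the ensemble-mean foreground probability; the sketch below illustrates that generic measure (one of several used in the UQ literature, not necessarily the paper's exact choice):

```python
import numpy as np

def predictive_entropy(member_probs):
    """member_probs: (n_members, ...) foreground probabilities from ensemble
    members or stochastic forward passes. Returns the voxelwise binary
    entropy of the mean prediction; high values flag ambiguous voxels,
    e.g. small deep-white-matter clusters the hard segmentation missed."""
    p = np.clip(member_probs.mean(axis=0), 1e-7, 1.0 - 1e-7)
    return -(p * np.log(p) + (1.0 - p) * np.log1p(-p))
```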
Citations: 0
PathFL: Multi-alignment Federated Learning for pathology image segmentation
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-12 | DOI: 10.1016/j.media.2025.103670
Yuan Zhang , Feng Chen , Yaolei Qi , Guanyu Yang , Huazhu Fu
Pathology image segmentation across multiple centers encounters significant challenges due to diverse sources of heterogeneity, including imaging modalities, organs, and scanning equipment, whose variability introduces representation bias and impedes the development of generalizable segmentation models. In this paper, we propose PathFL, a novel multi-alignment federated learning framework for pathology image segmentation that addresses these challenges through three-level alignment strategies at the image, feature, and model-aggregation levels. Firstly, at the image level, a collaborative style enhancement module aligns and diversifies local data by facilitating style information exchange across clients. Secondly, at the feature level, an adaptive feature alignment module ensures implicit alignment in the representation space by infusing local features with global insights, promoting consistency across heterogeneous client features. Finally, at the model aggregation level, a stratified similarity aggregation strategy hierarchically aligns and aggregates models on the server, using layer-specific similarity to account for client discrepancies and enhance global generalization. Comprehensive evaluations on four sets of heterogeneous pathology image datasets, encompassing cross-source, cross-modality, cross-organ, and cross-scanner variations, validate the effectiveness of PathFL in achieving better performance and robustness against data heterogeneity. The code is available at https://github.com/yuanzhang7/PathFL.
(Medical image analysis, vol. 105, Article 103670)
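The stratified aggregation weights client models layer by layer using similarity. The paper's exact scheme is not reproduced here; the sketch below is a generic similarity-weighted aggregation of a single layer, with the "similarity to the client mean" criterion being our assumption:

```python
import numpy as np

def aggregate_layer(client_layers):
    """Similarity-weighted aggregation of one layer's weights across clients:
    clients whose layer is closer (cosine similarity) to the client mean get
    more weight. Generic sketch; assumes similarities are not all zero."""
    stacked = np.stack([w.ravel() for w in client_layers])
    mean = stacked.mean(axis=0)
    sims = stacked @ mean / (np.linalg.norm(stacked, axis=1)
                             * np.linalg.norm(mean) + 1e-12)
    alphas = np.clip(sims, 0.0, None)   # ignore anti-correlated outliers
    alphas /= alphas.sum()
    return sum(a * w for a, w in zip(alphas, client_layers))
```

Plain FedAvg is the special case where every layer of every client gets the same weight.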
Citations: 0
Recursive variational autoencoders for 3D blood vessel generative modeling
IF 10.7 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-11 | DOI: 10.1016/j.media.2025.103703
Paula Feldman , Miguel Fainstein , Viviana Siless , Claudio Delrieux , Emmanuel Iarussi
Anatomical trees play an important role in clinical diagnosis and treatment planning. Yet accurately representing these structures poses significant challenges owing to their intricate and varied topology and geometry. Most existing methods to synthesize vasculature are rule-based, and despite providing some degree of control and variation in the structures produced, they fail to capture the diversity and complexity of actual anatomical data. We developed a Recursive variational Neural Network (RvNN) that fully exploits the hierarchical organization of the vessel and learns a low-dimensional manifold encoding branch connectivity along with geometry features describing the target surface. After training, the RvNN latent space can be sampled to generate new vessel geometries. By leveraging the power of generative neural networks, we generate 3D models of blood vessels that are both accurate and diverse, which is crucial for medical and surgical training, hemodynamic simulations, and many other purposes. The results closely resemble real data, achieving high similarity in vessel radii, length, and tortuosity across various datasets, including those with aneurysms. To the best of our knowledge, this work is the first to utilize this technique for synthesizing blood vessels.
(Medical image analysis, vol. 105, Article 103703)
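Tortuosity, one of the statistics the generated vessels are compared on, is typically defined as the centerline arc length divided by the straight-line distance between its endpoints; a minimal sketch of that definition (our illustration of the standard metric):

```python
import numpy as np

def tortuosity(points):
    """Tortuosity of a polyline centerline, points (K, D): arc length over
    the chord between its endpoints. Equals 1.0 for a straight segment and
    grows for winding vessels."""
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord
```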
Citations: 0
Learning to segment anatomy and lesions from disparately labeled sources in brain MRI
IF 11.8 | Q1 Medicine
Medical image analysis | Pub Date: 2025-07-11 | DOI: 10.1016/j.media.2025.103705
Meva Himmetoglu , I. Frank Ciernik , Ender Konukoglu , Alzheimer’s Disease Neuroimaging Initiative
Segmenting healthy tissue structures alongside lesions in brain magnetic resonance images (MRI) remains a challenge for today's algorithms due to lesion-caused disruption of the anatomy and the lack of jointly labeled training datasets, in which both healthy tissues and lesions are labeled on the same images. In this paper, we propose a method that is robust to lesion-caused disruptions and can be trained from disparately labeled training sets, i.e., without requiring jointly labeled samples, to automatically segment both. In contrast to prior work, we decouple healthy-tissue and lesion segmentation into two paths to leverage multi-sequence acquisitions and merge information with an attention mechanism. During inference, an image-specific adaptation reduces adverse influences of lesion regions on healthy tissue predictions. During training, this adaptation is taken into account through meta-learning, and co-training is used to learn from disparately labeled training images. Our model shows improved performance on several anatomical structures and lesions on a publicly available brain glioblastoma dataset compared with state-of-the-art segmentation methods.
(Medical image analysis, vol. 105, Article 103705)
Citations: 0