Medical Image Analysis: Latest Articles

Template-based semantic-guided orthodontic teeth alignment previewer
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-16 · DOI: 10.1016/j.media.2025.103802
Qianwen Ji, Yizhou Chen, Xiaojun Chen
Abstract: Intuitive visualization of orthodontic prediction results is of great significance in helping patients decide on orthodontic treatment and maintain an optimistic attitude during it. To address this, we propose a semantically guided orthodontic simulation framework that predicts orthodontic outcomes using only a frontal photograph. Our method comprises four key steps. First, we perform semantic segmentation of the oral cavity and teeth, enabling the extraction of category-specific tooth contours from frontal images with misaligned teeth. Second, these extracted contours are employed to adapt predefined teeth templates and reconstruct 3D models of the teeth. Third, using the reconstructed tooth positions, sizes, and postures, we fit the dental arch curve to guide tooth movement, producing a 3D model of the teeth after simulated orthodontic adjustment. Finally, we apply a semantically guided diffusion model for structural control and generate orthodontic prediction images consistent with the style of the input images by applying texture transformation. Notably, our tooth semantic segmentation model attains an average intersection over union of 0.834 for 24 tooth classes, excluding the second and third molars. The average Chamfer distance between our reconstructed teeth models and their ground-truth counterparts is 1.272 mm² on test cases. The teeth alignment predicted by our approach exhibits a high degree of consistency with actual post-orthodontic results in frontal images. This comprehensive qualitative and quantitative evaluation indicates the practicality and effectiveness of our framework in orthodontics and facial beautification.
Citations: 0
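The third step of the pipeline, fitting a dental arch curve to the reconstructed tooth positions, is easy to make concrete. The sketch below is illustrative only, not the authors' code: it assumes tooth centroids in the occlusal plane are already available and uses a plain fourth-degree polynomial, one common parametric choice for arch forms; the `centroids` values are invented.

```python
import numpy as np

# Illustrative arch-curve fit (assumption: a 4th-degree polynomial is an
# acceptable stand-in for whatever arch parameterization the paper uses).
def fit_arch_curve(centroids: np.ndarray, degree: int = 4) -> np.poly1d:
    """centroids: (N, 2) array of tooth centroids in the occlusal plane."""
    coeffs = np.polyfit(centroids[:, 0], centroids[:, 1], degree)
    return np.poly1d(coeffs)

# Hypothetical centroids for seven (misaligned) teeth on one arch.
centroids = np.array([[-25.0, 8.0], [-18.0, 15.0], [-9.0, 21.0], [0.0, 23.0],
                      [9.0, 21.0], [18.0, 15.0], [25.0, 8.0]])
arch = fit_arch_curve(centroids)
# Target (aligned) positions: project each centroid onto the fitted curve.
targets = np.stack([centroids[:, 0], arch(centroids[:, 0])], axis=1)
```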
Model-unrolled fast MRI with weakly supervised lesion enhancement
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-15 · DOI: 10.1016/j.media.2025.103806
Fangmao Ju, Yuzhu He, Fan Wang, Xianjun Li, Chen Niu, Chunfeng Lian, Jianhua Ma
Abstract: The utility of Magnetic Resonance Imaging (MRI) in anomaly detection and disease diagnosis is well recognized. However, current imaging protocols are often hindered by long scanning durations and a misalignment between the scanning process and the specific requirements of subsequent clinical assessments. While recent studies have actively explored accelerated MRI techniques, the majority have concentrated on improving overall image quality across all voxel locations, overlooking the specific abnormalities that hold clinical significance. To address this discrepancy, we propose a model-unrolled deep-learning method, guided by weakly supervised lesion attention, for accelerated MRI oriented by downstream clinical needs. In particular, we construct a lesion-focused MRI reconstruction model that incorporates customized learnable regularizations, which can be learned efficiently using only image-level labels to improve potential lesion reconstruction while preserving overall image quality. We then design a dedicated iterative algorithm to solve this task-driven reconstruction model, which is further unfolded as a cascaded deep network for lesion-focused fast imaging. Comprehensive experiments on two public datasets, fastMRI and the Stanford Knee MRI Multi-Task Evaluation (SKM-TEA), demonstrate that our approach, referred to as Lesion-Focused MRI (LF-MRI), surpasses existing accelerated MRI methods by relatively large margins. Remarkably, LF-MRI leads to substantial improvements in areas showing pathology. The source code and pretrained models will be publicly available at https://github.com/ladderlab-xjtu/LF-MRI.
Citations: 0
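The "model-unrolled" part of the abstract refers to a standard construction: an iterative reconstruction algorithm whose update steps become the layers of a network. A minimal sketch follows, with a generic CNN prior standing in for the paper's lesion-attention regularization; the step sizes, iteration count, and network shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    # K unrolled gradient steps: x <- x - eta_k * (A^H(A x - y) + R_k(x)),
    # where A is the undersampled Fourier operator. The learned prior R_k is
    # a plain CNN here, not the paper's lesion-attention regularization.
    def __init__(self, n_iters: int = 5):
        super().__init__()
        self.eta = nn.Parameter(torch.full((n_iters,), 0.1))
        self.reg = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))
            for _ in range(n_iters))

    def forward(self, y: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # y: undersampled k-space, complex (B, H, W); mask: sampling pattern.
        x = torch.fft.ifft2(y)  # zero-filled initialization
        for k, reg in enumerate(self.reg):
            grad_dc = torch.fft.ifft2(mask * (torch.fft.fft2(x) - y))
            x_ri = torch.view_as_real(x).permute(0, 3, 1, 2)   # (B, 2, H, W)
            prior = reg(x_ri).permute(0, 2, 3, 1).contiguous()
            x = x - self.eta[k] * (grad_dc + torch.view_as_complex(prior))
        return x.abs()  # magnitude image
```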
Correlation Routing Network for Explainable Lesion Classification in Multi-Parametric Liver MRI
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-15 · DOI: 10.1016/j.media.2025.103790
Fakai Wang, Zhehan Shen, Huimin Lin, Fuhua Yan
Abstract: Liver tumor diagnosis is a key task in abdominal imaging, and there is extensive research on automatically classifying focal liver lesions (FLL). Most of it targets CT and ultrasound; fewer studies utilize MRI despite its unique diagnostic advantages. The obstacles lie in dataset curation, technical complexity, and clinical explainability in liver MRI. In this paper, we propose the Correlation Routing Network (CRN), which takes in 10 MRI sequences and predicts lesion types (HCC, cholangioma, metastasis, hemangioma, FNH, cyst) as well as imaging features, achieving both high accuracy and explainability. The CRN model consists of encoding branches, correlation routing/relay modules, and a self-attention module. The independent encoding paradigm facilitates information disentangling, the correlation routing scheme supports redirection and decoupling, and the self-attention enforces global feature sharing and prediction consistency. The model predicts detailed lesion imaging features, promoting explainable classification and clinical accountability. We also identify the signal relations and derive quantitative explainability. Our liver lesion classification model achieves a malignant-benign accuracy of 97.2%, a six-class accuracy of 88%, and an average imaging-feature accuracy of 84.9%, outperforming popular CNN- and transformer-based models. We hope to spark insights for multimodal lesion classification and model explainability.
Citations: 0
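As a rough picture of the architecture family the abstract describes (one encoder per MRI sequence, attention-based sharing, dual prediction heads), consider this hedged sketch. The correlation routing/relay modules, which are the paper's actual contribution, are not reproduced, and the number of imaging features (12) is an invented placeholder.

```python
import torch
import torch.nn as nn

class MultiSequenceClassifier(nn.Module):
    # Sketch only: independent per-sequence encoders, self-attention fusion,
    # and two heads (lesion type + imaging features). Routing omitted.
    def __init__(self, n_seq=10, dim=128, n_classes=6, n_imaging_feats=12):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                          nn.Conv2d(16, dim, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_seq))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cls_head = nn.Linear(dim, n_classes)         # 6 lesion types
        self.feat_head = nn.Linear(dim, n_imaging_feats)  # explainable features

    def forward(self, seqs):  # seqs: list of 10 tensors, each (B, 1, H, W)
        tokens = torch.stack([enc(s) for enc, s in zip(self.encoders, seqs)],
                             dim=1)                        # (B, n_seq, dim)
        fused, _ = self.attn(tokens, tokens, tokens)       # share across sequences
        pooled = fused.mean(dim=1)
        return self.cls_head(pooled), self.feat_head(pooled)
```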
Hi-End-MAE: Hierarchical encoder-driven masked autoencoders are stronger vision learners for medical image segmentation
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-12 · DOI: 10.1016/j.media.2025.103770
Fenghe Tang, Qingsong Yao, Wenxin Ma, Chenxu Wu, Zihang Jiang, S. Kevin Zhou
Abstract: Medical image segmentation remains a formidable challenge due to label scarcity. Pre-training Vision Transformers (ViT) through masked image modeling (MIM) on large-scale unlabeled medical datasets presents a promising solution, providing both computational efficiency and model generalization for various downstream tasks. However, current ViT-based MIM pre-training frameworks predominantly emphasize local aggregation representations in output layers and fail to exploit the rich representations across different ViT layers that better capture the fine-grained semantic information needed for more precise medical downstream tasks. To fill this gap, we present Hierarchical Encoder-driven MAE (Hi-End-MAE), a simple yet effective ViT-based pre-training solution centered on two key innovations: (1) encoder-driven reconstruction, which encourages the encoder to learn more informative features to guide the reconstruction of masked patches; and (2) hierarchical dense decoding, which implements a hierarchical decoding structure to capture rich representations across different layers. We pre-train Hi-End-MAE on a large-scale dataset of 10K CT scans and evaluate its performance across nine public medical image segmentation benchmarks. Extensive experiments demonstrate that Hi-End-MAE achieves superior transfer learning capabilities across various downstream tasks, revealing the potential of ViT in medical imaging applications. The code is available at: https://github.com/FengheTan9/Hi-End-MAE.
Citations: 0
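Hi-End-MAE's two innovations sit on top of vanilla MAE machinery. The piece every MIM pipeline shares, random patch masking before the encoder, looks roughly like this (a standard sketch, not code from the paper):

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Standard MAE-style random patch masking. tokens: (B, N, D) patch
    embeddings; only the visible subset is passed to the encoder, and the
    decoder is asked to reconstruct the masked patches."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)   # per-patch random score
    shuffle = noise.argsort(dim=1)                   # random permutation
    keep_ids = shuffle[:, :n_keep]
    visible = tokens.gather(1, keep_ids.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=tokens.device)
    mask.scatter_(1, keep_ids, 0.0)                  # 1 = masked, 0 = visible
    return visible, mask, keep_ids
```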
A novel gradient inversion attack framework to investigate privacy vulnerabilities during retinal image-based federated learning
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-12 · DOI: 10.1016/j.media.2025.103807
Christopher Nielsen, Matthias Wilms, Nils D. Forkert
Abstract: Machine learning models trained on retinal images have shown great potential in diagnosing various diseases. However, effectively training these models, especially in resource-limited regions, is often impeded by a lack of diverse data. Federated learning (FL) offers a solution to this problem by utilizing distributed data across a network of clients to enhance training dataset volume and diversity. Nonetheless, significant privacy concerns have been raised for this approach, notably due to gradient inversion attacks that could expose private patient data used during FL training. It is therefore crucial to assess the vulnerability of FL models to such attacks, because privacy breaches may discourage data sharing, potentially impacting the models' generalizability and clinical relevance. To tackle this issue, we introduce a novel framework to evaluate the vulnerability of federated deep learning models trained on retinal images to gradient inversion attacks. Importantly, we demonstrate how publicly available data can be used to enhance the quality of reconstructed images through an innovative image-to-image translation technique. The effectiveness of the proposed method was measured by evaluating the similarity between real fundus images and the corresponding reconstructed images using three different convolutional neural network architectures: ResNet-18, VGG-16, and DenseNet-121. Experimental results for the task of retinal age prediction demonstrate that, across all models, over 92% of the participants in the training set could be identified from their reconstructed retinal vessel structure alone. Furthermore, even with differential privacy countermeasures in place, substantial information can still be extracted from the reconstructed images. This work therefore underscores the urgent need for improved defensive strategies to safeguard patient privacy during federated learning.
Citations: 0
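The attack class under study is well established: reconstruct a client's input by optimizing a dummy image until its gradients match the shared ones (Zhu et al., "Deep Leakage from Gradients", NeurIPS 2019). A minimal loop is sketched below, assuming a classification head for simplicity; the paper's contribution, refining reconstructions with an image-to-image model trained on public data, would be a separate stage applied after a step like this.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, true_grads, x_shape, labels,
                       steps: int = 300, lr: float = 0.1):
    """Classic gradient-inversion loop: optimize a dummy input so that its
    gradients match the gradients observed during federated training.
    true_grads: the client's shared gradients, one tensor per parameter."""
    dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), labels)
        # create_graph=True lets us backprop through the gradients themselves
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```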
Switch-UMamba: Dynamic scanning vision Mamba UNet for medical image segmentation
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-10 · DOI: 10.1016/j.media.2025.103792
Ziyao Zhang, Qiankun Ma, Tong Zhang, Jie Chen, Hairong Zheng, Wen Gao
Abstract: Recently, State Space Models (SSMs), particularly the Mamba-based framework, have demonstrated exceptional performance in medical image segmentation, attributed to their capacity to capture long-range dependencies efficiently with linear computational complexity. Nonetheless, current Mamba-based models encounter challenges in preserving the spatial context of 2D visual features, a consequence of their reliance on static 1D selective scanning patterns. In this study, we present Switch-UMamba, a hybrid UNet framework that integrates the local feature extraction power of Convolutional Neural Networks (CNNs) with the ability of SSMs to capture long-range dependencies. Switch-UMamba builds on the Switch Visual State Space (VSS) module to leverage the Mixture-of-Scans (MoS) approach, a new scanning mechanism that amalgamates diverse scanning policies by treating each scan head as an expert within the Mixture-of-Experts (MoE) framework. MoS employs a router to dynamically allocate appropriate scanning policies and corresponding scan heads to each sample. This sparsely activated dynamic scanning approach not only ensures rich and comprehensive acquisition of spatial information but also curtails computational expense. Comprehensive experimental evaluation on several medical image segmentation benchmarks indicates that Switch-UMamba achieves state-of-the-art performance without using any pretrained weights. It is also worth highlighting that our approach outperforms other Mamba-based models with fewer parameters.
Citations: 0
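The Mixture-of-Scans idea can be made concrete with a top-1 router over a few fixed scan orders. In the hedged sketch below, a GRU stands in for the Mamba/SSM scan head so the code stays self-contained; the four scan orders and the pooled routing signal are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MixtureOfScans(nn.Module):
    # Each 1D scan order of a 2D feature map is an "expert"; a router picks
    # one per sample (MoE-style top-1, no load balancing in this sketch).
    def __init__(self, dim: int, n_scans: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, n_scans)
        self.heads = nn.ModuleList(nn.GRU(dim, dim, batch_first=True)
                                   for _ in range(n_scans))

    @staticmethod
    def to_seq(x: torch.Tensor, i: int):
        # Four fixed scan orders over a (B, H, W, D) feature map.
        if i == 1:
            x = x.flip(2)                    # reversed row-major
        elif i == 2:
            x = x.transpose(1, 2)            # column-major
        elif i == 3:
            x = x.transpose(1, 2).flip(2)    # reversed column-major
        return x.reshape(x.shape[0], -1, x.shape[-1]), x.shape

    @staticmethod
    def from_seq(seq: torch.Tensor, shape, i: int):
        x = seq.reshape(shape)               # undo the chosen scan order
        if i == 1:
            x = x.flip(2)
        elif i == 2:
            x = x.transpose(1, 2)
        elif i == 3:
            x = x.flip(2).transpose(1, 2)
        return x

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, H, W, D)
        choice = self.router(x.mean(dim=(1, 2))).argmax(dim=-1)  # top-1 expert
        out = torch.empty_like(x)
        for i, head in enumerate(self.heads):
            sel = choice == i
            if sel.any():                    # run only the selected expert
                seq, shape = self.to_seq(x[sel], i)
                h, _ = head(seq)
                out[sel] = self.from_seq(h, shape, i)
        return out
```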
SurfGNN: A robust surface-based prediction model with interpretability for coactivation maps of spatial and cortical features
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-08 · DOI: 10.1016/j.media.2025.103793
Zhuoshuo Li, Jiong Zhang, Youbing Zeng, Jiaying Lin, Dan Zhang, Jianjia Zhang, Duan Xu, Hosung Kim, Bingguang Liu, Mengting Liu
Abstract: Current brain surface-based prediction models often overlook the variability of regional attributes at the cortical feature level. While graph neural networks (GNNs) excel at capturing regional differences, they encounter challenges when dealing with complex, high-density graph structures. In this work, we consider the cortical surface mesh as a sparse graph and propose an interpretable prediction model, the Surface Graph Neural Network (SurfGNN). SurfGNN employs topology-sampling learning (TSL) and region-specific learning (RSL) structures to manage individual cortical features at both lower and higher scales of the surface mesh, effectively tackling the challenges posed by overly abundant mesh nodes and addressing the heterogeneity of cortical regions. Building on this, a novel score-weighted fusion (SWF) method merges the nodal representations associated with each cortical feature for prediction. We apply our model to a neonatal brain age prediction task using a dataset of harmonized MR images from 481 subjects (503 scans). SurfGNN outperforms all existing state-of-the-art methods, demonstrating an improvement of at least 9.0% and achieving a mean absolute error (MAE) of 0.827 ± 0.056 postmenstrual weeks. Furthermore, it generates feature-level activation maps, indicating its capability to identify robust regional variations in the different morphometric contributions to prediction. The code will be available at https://github.com/ZhuoshL/SurfGNN.
Citations: 0
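The score-weighted fusion (SWF) step admits a compact reading: one embedding per cortical feature (thickness, curvature, and so on), a learned scalar score per embedding, and a softmax-weighted sum feeding the prediction head, with the scores doubling as the feature-level explanation. A sketch under those assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class ScoreWeightedFusion(nn.Module):
    # Fuse per-cortical-feature graph embeddings with learned scalar scores;
    # the score vector is returned so it can serve as an explanation.
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # shared scoring function
        self.head = nn.Linear(dim, 1)    # e.g. brain-age regression

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (B, F, D), one graph-level embedding per cortical feature
        w = torch.softmax(self.score(embeddings).squeeze(-1), dim=1)   # (B, F)
        fused = (w.unsqueeze(-1) * embeddings).sum(dim=1)              # (B, D)
        return self.head(fused).squeeze(-1), w  # prediction + feature weights
```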
Detecting low-amplitude biomarker activations via decomposition of complex-valued fMRI data with collaborative phase and magnitude sparsity
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-07 · DOI: 10.1016/j.media.2025.103803
Jia-Yang Song, Qiu-Hua Lin, Chi Zhou, Yi-Ran Wang, Yu-Ping Wang, Vince D. Calhoun
Abstract: Sparse decomposition of complex-valued functional magnetic resonance imaging (fMRI) data is promising for finding qualified biomarkers of brain disorders such as schizophrenia, by simultaneously using the intrinsic spatial sparsity and the full functional information of the brain. However, previous methods may miss disease-related low-amplitude activations, since it is challenging to determine whether a low-amplitude voxel is signal or noise during an iterative update process based solely on magnitude or phase sparsity. To this end, we propose a novel sparse decomposition model with collaborative phase and magnitude sparsity constraints at the voxel level. Specifically, we impose a sparsity constraint on the product of the magnitude and phase of a voxel above a pre-defined phase threshold. Low-amplitude activations with larger phase changes can thus survive the update process, despite temporarily violating the small-phase-change characteristic of signal voxels. Moreover, we eliminate phase ambiguity during iterations by proving that no additional phase change is introduced by the update rules and by initializing the dictionary matrix atoms using the observed time series with fixed phase angles. We evaluate the proposed method on complex-valued simulated data and experimental resting-state fMRI data from schizophrenia patients and healthy controls. Compared with three state-of-the-art algorithms, the proposed method retains more low-amplitude activations in biomarker regions such as the anterior cingulate cortex and yields phase maps sensitive to disease-related spatial changes. This provides a new tool for estimating informative fMRI biomarkers of mental disorders.
Citations: 0
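One plausible formalization of the collaborative constraint, written out from the abstract's description (the notation is ours, not the paper's): with complex-valued data Y factorized into a dictionary D of time courses and spatial maps X, and each voxel written as x_v = |x_v| e^{jφ_v},

```latex
% Hedged reading of the collaborative phase-magnitude sparsity objective.
% \phi_0 is the pre-defined phase threshold from the abstract.
\min_{D,\,X}\ \lVert Y - DX \rVert_F^2
  \;+\; \lambda \sum_{v} \lvert x_v \rvert \, \lvert \phi_v \rvert \,
        \mathbb{1}\!\left[\lvert \phi_v \rvert > \phi_0\right]
```

Under this reading, voxels whose phase stays below the threshold φ₀ escape the penalty entirely, and for large-phase voxels the magnitude-phase product (rather than a hard phase cutoff) decides their fate, so a genuine low-amplitude activation with a larger phase change incurs only a small penalty and can survive the iterations.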
Trimming-then-augmentation: Towards robust depth and odometry estimation for endoscopic images
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-03 · DOI: 10.1016/j.media.2025.103736
Junyang Wu, Yun Gu, Guang-Zhong Yang
Abstract: Depth and odometry estimation for endoscopic imaging is an essential task for robot-assisted endoluminal intervention. Because sufficient in vivo ground-truth data are difficult to obtain, unsupervised learning is preferred in practical settings. Existing methods, however, are hampered by imaging artifacts and the paucity of unique anatomical markers, coupled with tissue motion and specular reflections, leading to poor accuracy and generalizability. In this work, a trimming-then-augmentation framework is proposed. It uses a "mask-then-recover" training strategy to first mask out artifact regions and then reconstruct the depth and pose information based on the global perception of a convolutional network. Subsequently, an augmentation module provides stable correspondence between endoscopic image pairs. A task-specific loss function guides the augmentation module to adaptively establish stable feature pairs, improving the overall accuracy of the subsequent 3D structural reconstruction. Detailed validation shows that the proposed method significantly improves the accuracy of existing state-of-the-art unsupervised methods, demonstrating its effectiveness and resilience to image artifacts, as well as its stability when applied in in vivo settings.
Citations: 0
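The "mask-then-recover" strategy hinges on two small ingredients that can be sketched directly: an artifact mask (specular highlights are the obvious target in endoscopy) and a photometric loss restricted to valid pixels. The threshold heuristic below is a crude stand-in for the paper's learned trimming, purely for illustration.

```python
import torch

def specular_mask(img: torch.Tensor, thresh: float = 0.95) -> torch.Tensor:
    """Crude stand-in for the trimming step: flag near-saturated pixels
    (typical of specular highlights) as artifacts.
    img: (B, 3, H, W) in [0, 1]; returns (B, 1, H, W), 1 = valid pixel."""
    return (img.max(dim=1, keepdim=True).values < thresh).float()

def masked_photometric_loss(pred: torch.Tensor, target: torch.Tensor,
                            valid: torch.Tensor) -> torch.Tensor:
    """Photometric loss computed only over valid pixels, so the network is
    never penalized for failing to match masked-out artifact regions."""
    err = (pred - target).abs().mean(dim=1, keepdim=True)
    return (err * valid).sum() / valid.sum().clamp(min=1.0)
```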
Reinforced physiology-informed learning for image completion from partial-frame dynamic PET imaging
IF 11.8 · Region 1 (Medicine)
Medical Image Analysis · Pub Date: 2025-09-03 · DOI: 10.1016/j.media.2025.103767
Hengjia Ran, Jianan Cui, Xuhui Feng, Yubo Ye, Yufei Jin, Yunmei Chen, Bo Zhao, Rui Hu, Min Guo, Xinhui Su, Huafeng Liu
Abstract: Dynamic positron emission tomography (PET) imaging using ¹⁸F-FDG typically requires over an hour to acquire a complete time series of images. Reducing dynamic PET scan time is therefore crucial for minimizing errors caused by patient movement and for increasing the throughput of the imaging equipment. However, shortening the scanning time leads to the loss of images in some frames, affecting the accuracy of PET parameter estimation. In this paper, we propose a method that combines physiology-informed learning with time-implicit neural representations for kinetic modeling and missing-frame dynamic PET image completion. Based on the two-tissue compartment model, three types of constraint terms are constructed for network training: data terms, boundary terms, and reinforced physiology residual terms. The method works effectively without specific training datasets, making it feasible even with limited data. Three commonly used scanning schemes were defined to verify the feasibility of the proposed method, and performance was evaluated on simulation data and real rat data. The best-performing scheme was selected for detailed analysis of PET images and parameter maps on datasets of four human organs obtained from a Biograph Vision Quadra. Our method outperforms traditional nonlinear least squares (NLLS) fitting in both reconstruction quality and computational efficiency. Metrics calculated for different organs, such as the brain (SSIM > 0.98) and the thorax (PSNR > 40), show that the proposed network achieves promising performance.
Citations: 0
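The physiological prior named in the abstract, the two-tissue compartment model, is standard and worth writing out, since the data, boundary, and residual constraint terms are all built around it. Here C_p is the plasma input function, C_1 and C_2 the free and specifically bound tracer concentrations, K_1, k_2, k_3, k_4 the rate constants, and V_b the fractional blood volume with whole-blood activity C_b:

```latex
\frac{dC_1(t)}{dt} = K_1\,C_p(t) - (k_2 + k_3)\,C_1(t) + k_4\,C_2(t),
\qquad
\frac{dC_2(t)}{dt} = k_3\,C_1(t) - k_4\,C_2(t),
% and the measured PET activity combines tissue and blood signal:
C_{\mathrm{PET}}(t) = (1 - V_b)\,\bigl(C_1(t) + C_2(t)\bigr) + V_b\,C_b(t).
```

A physiology-informed residual term then penalizes how far the network's predicted time-activity curves are from satisfying these ODEs, which is what lets the model fill in frames that were never scanned.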