Medical Image Analysis: Latest Articles

Segment Like A Doctor: Learning reliable clinical thinking and experience for pancreas and pancreatic cancer segmentation
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-13, DOI: 10.1016/j.media.2025.103539
Liwen Zou, Yingying Cao, Ziwei Nie, Liang Mao, Yudong Qiu, Zhongqiu Wang, Zhenghua Cai, Xiaoping Yang
Abstract: Pancreatic cancer is a lethal invasive tumor with one of the worst prognoses. Accurate and reliable segmentation of the pancreas and pancreatic cancer on computed tomography (CT) images is vital for clinical diagnosis and treatment. Although deep learning-based techniques have been tentatively applied to this task, current pancreatic cancer segmentation performance falls far short of clinical needs due to the tiny size, irregular shape, and extremely uncertain boundary of the cancer. Moreover, most existing studies rely on black-box models that only learn the annotation distribution rather than the logical thinking and diagnostic experience of senior medical experts, which is more credible and interpretable. To alleviate these issues, we propose a novel Segment-Like-A-Doctor (SLAD) framework to learn reliable clinical thinking and experience for pancreas and pancreatic cancer segmentation on CT images. Specifically, SLAD simulates the essential logical thinking and experience of doctors across the progressive diagnostic stages of pancreatic cancer: the organ, lesion, and boundary stages. First, in the organ stage, an Anatomy-aware Masked AutoEncoder (AMAE) models doctors' overall cognition of the anatomical distribution of abdominal organs on CT images via self-supervised pretraining. Second, in the lesion stage, a Causality-driven Graph Reasoning Module (CGRM) learns doctors' global judgment for lesion detection by exploring topological feature differences between the causal lesion and the non-causal organ. Finally, in the boundary stage, a Diffusion-based Discrepancy Calibration Module (DDCM) fits doctors' refined understanding of the uncertain cancer boundary by inferring the ambiguous segmentation discrepancy from the trustworthy lesion core. Experimental results on three independent datasets demonstrate that our approach boosts pancreatic cancer segmentation accuracy by 4%–9% compared with state-of-the-art methods. A tumor-vascular involvement analysis further verifies the superiority of our method in clinical applications. Source code: https://github.com/ZouLiwen-1999/SLAD
Volume 102, Article 103539.
Citations: 0
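The organ-stage pretraining above builds on masked-autoencoder self-supervision: random patches are hidden and the network must reconstruct them. A minimal sketch of that masking step, in numpy (this is the generic technique only, not the paper's anatomy-aware variant; `random_patch_mask` and its parameters are hypothetical):

```python
import numpy as np

def random_patch_mask(image, patch=4, mask_ratio=0.75, rng=None):
    """Split a 2-D image into non-overlapping patches and zero out a
    random subset; the hidden patches become the reconstruction target."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    n_masked = int(round(n * mask_ratio))
    masked_ids = rng.choice(n, size=n_masked, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[masked_ids] = True
    visible = image.copy()
    for idx in np.flatnonzero(mask):
        r, c = divmod(idx, gw)  # grid position of this patch
        visible[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return visible, mask

img = np.arange(64, dtype=float).reshape(8, 8)
# 8x8 image, 4x4 patches -> 4 patches, of which 3 are masked at 75%
visible, mask = random_patch_mask(img, patch=4, mask_ratio=0.75, rng=0)
```

During pretraining, the encoder would see only `visible` while the loss is computed on the masked regions.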
Predicting infant brain connectivity with federated multi-trajectory GNNs using scarce data
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-13, DOI: 10.1016/j.media.2025.103541
Michalis Pistos, Gang Li, Weili Lin, Dinggang Shen, Islem Rekik
Abstract: Understanding the convoluted evolution of infant brain networks during the first postnatal year is pivotal for identifying the dynamics of early brain connectivity development. Leveraging valuable insights into the brain's anatomy, existing deep learning frameworks have focused on forecasting the brain evolution trajectory from a single baseline observation. While yielding remarkable results, they suffer from three major limitations. First, they cannot generalize to multi-trajectory prediction tasks, where each graph trajectory corresponds to a particular imaging modality or connectivity type (e.g., T1-w MRI). Second, existing models require extensive training datasets to achieve satisfactory performance, and such datasets are often challenging to obtain. Third, they do not efficiently utilize incomplete time-series data. To address these limitations, we introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network. Through federation, we aggregate local learnings among diverse hospitals with limited datasets, enhancing the performance of each hospital's local generative model while preserving data privacy. The three key innovations of FedGmTE-Net++ are: (i) the first federated learning framework specifically designed for brain multi-trajectory evolution prediction in data-scarce environments; (ii) an auxiliary regularizer in the local objective function that exploits all longitudinal brain connectivity within the evolution trajectory to maximize data utilization; (iii) a two-step imputation process comprising a preliminary K-Nearest Neighbours-based precompletion followed by an imputation refinement step that employs regressors to improve similarity scores and refine imputations. Comprehensive experimental results show that FedGmTE-Net++ outperforms benchmark methods in brain multi-trajectory prediction from a single baseline graph. Source code: https://github.com/basiralab/FedGmTE-Net-plus
Volume 102, Article 103541.
Citations: 0
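The federated aggregation that such frameworks build on is commonly instantiated as FedAvg: a dataset-size-weighted average of client parameters, so no raw patient data leaves a hospital. A minimal sketch, assuming each hospital contributes a parameter dict (`fed_avg` and the toy weights are illustrative, not the paper's code):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client parameter dicts into a global model,
    weighting each client by its local dataset size (FedAvg)."""
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total)
                   for w, n in zip(client_weights, client_sizes))
            for k in keys}

# two hypothetical hospitals with different dataset sizes
w1 = {"layer": np.array([1.0, 1.0])}
w2 = {"layer": np.array([3.0, 3.0])}
global_w = fed_avg([w1, w2], client_sizes=[1, 3])
# weighted mean: 0.25 * 1 + 0.75 * 3 = 2.5
```

Each round, hospitals train locally, send updated weights to the server, and receive `global_w` back; only parameters are exchanged, which is what preserves privacy.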
UniSAL: Unified Semi-supervised Active Learning for histopathological image classification
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-12, DOI: 10.1016/j.media.2025.103542
Lanfeng Zhong, Kun Qian, Xin Liao, Zongyao Huang, Yang Liu, Shaoting Zhang, Guotai Wang
Abstract: Histopathological image classification using deep learning is crucial for accurate and efficient cancer diagnosis. However, annotating large numbers of histopathological images for training is costly and time-consuming, leading to a scarcity of labeled data for training deep neural networks. To reduce human effort and improve annotation efficiency, we propose a Unified Semi-supervised Active Learning framework (UniSAL) that effectively selects informative and representative samples for annotation. First, unlike most existing active learning methods that train only on labeled samples in each round, we propose dual-view high-confidence pseudo training, which uses both labeled and unlabeled images to train the model that selects query samples: two networks operating on differently augmented versions of an input image provide diverse pseudo labels for each other, and pseudo label-guided class-wise contrastive learning yields better feature representations for effective sample selection. Second, based on the model trained at each round, we design a novel uncertainty- and representativeness-based sample selection strategy. It contains a Disagreement-aware Uncertainty Selector (DUS) that picks informative uncertain samples with inconsistent predictions between the two networks, and a Compact Selector (CS) that removes redundancy among the selected samples. We extensively evaluate our method on three public pathological image classification datasets, i.e., the CRC5000, Chaoyang and CRC100K datasets. The results demonstrate that UniSAL significantly surpasses several state-of-the-art active learning methods and reduces the annotation cost to around 10% while achieving performance comparable to full annotation. Code: https://github.com/HiLab-git/UniSAL
Volume 102, Article 103542.
Citations: 0
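The DUS idea of querying samples on which the two networks disagree can be sketched as follows; the L1 disagreement score and the function name are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_disagreement(probs_a, probs_b, budget):
    """Rank unlabeled samples by the prediction disagreement between two
    networks (here: summed absolute difference of their class-probability
    outputs) and return the indices of the most uncertain ones."""
    disagreement = np.abs(probs_a - probs_b).sum(axis=1)
    return np.argsort(disagreement)[::-1][:budget]

# class probabilities from the two co-trained networks on 3 samples
probs_a = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
probs_b = np.array([[0.8, 0.2], [0.9, 0.1], [0.3, 0.7]])
query = select_disagreement(probs_a, probs_b, budget=1)
# sample 1 has the largest disagreement: |0.5-0.9| + |0.5-0.1| = 0.8
```

A redundancy-removal step (as the CS does) would then prune near-duplicate queries before sending them to the annotator.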
MonoPCC: Photometric-invariant cycle constraint for monocular depth estimation of endoscopic images
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-10, DOI: 10.1016/j.media.2025.103534
Zhiwei Wang, Ying Zhou, Shiquan He, Ting Li, Fan Huang, Qiang Ding, Xinxia Feng, Mei Liu, Qiang Li
Abstract: The photometric constraint is indispensable for self-supervised monocular depth estimation. It involves warping a source image onto a target view using the estimated depth and pose, then minimizing the difference between the warped and target images. However, the endoscope's built-in light causes significant brightness fluctuations that make the photometric constraint unreliable. Previous efforts mitigate this only by relying on extra models to calibrate image brightness. In this paper, we propose MonoPCC, which addresses brightness inconsistency at its root by reshaping the photometric constraint into a cycle form. Instead of only warping the source image, MonoPCC constructs a closed loop consisting of two opposite forward-backward warping paths: from target to source and then back to target. The target image is thus compared against an image cycle-warped from itself, which naturally makes the constraint invariant to brightness changes. Moreover, MonoPCC transplants the source image's phase-frequency into the intermediate warped image to avoid structure loss, and stabilizes training via an exponential moving average (EMA) strategy to avoid frequent changes in the forward warping. Comprehensive experiments on five datasets demonstrate that MonoPCC is highly robust to brightness inconsistency and exceeds other state-of-the-art methods, reducing the absolute relative error by 7.27%, 9.38%, 9.90% and 3.17% on four endoscopic datasets, respectively; superior results on an outdoor dataset verify MonoPCC's competitiveness in natural scenes. Code: https://github.com/adam99goat/MonoPCC
Volume 102, Article 103534.
Citations: 0
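The EMA stabilisation mentioned above keeps a slowly moving copy of the network parameters that changes much less between iterations than the live weights. A generic sketch of such an update (the class name and decay value are hypothetical, not MonoPCC's implementation):

```python
import numpy as np

class EmaParams:
    """Exponential moving average of network parameters, as commonly
    used to maintain a slowly-updated 'teacher' copy during training."""

    def __init__(self, params, decay=0.99):
        self.decay = decay
        # shadow copy that will trail the live parameters
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params):
        for k, v in params.items():
            self.shadow[k] = self.decay * self.shadow[k] + (1 - self.decay) * v
        return self.shadow

ema = EmaParams({"w": np.array([0.0])}, decay=0.9)
out = ema.update({"w": np.array([1.0])})
# 0.9 * 0 + 0.1 * 1 = 0.1: the shadow moves only a fraction of the way
```

Using the shadow weights for the forward warping path keeps that path nearly stationary while the live network continues to learn.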
SpinFlowSim: A blood flow simulation framework for histology-informed diffusion MRI microvasculature mapping in cancer
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-07, DOI: 10.1016/j.media.2025.103531
Anna Kira Voronova, Athanasios Grigoriou, Kinga Bernatowicz, Sara Simonetti, Garazi Serna, Núria Roson, Manuel Escobar, Maria Vieito, Paolo Nuciforo, Rodrigo Toledo, Elena Garralda, Els Fieremans, Dmitry S. Novikov, Marco Palombo, Raquel Perez-Lopez, Francesco Grussu
Abstract: Diffusion Magnetic Resonance Imaging (dMRI) sensitises the MRI signal to spin motion. This includes Brownian diffusion, but also flow across intricate networks of capillaries. This effect, the intra-voxel incoherent motion (IVIM), enables microvasculature characterisation with dMRI through metrics such as the vascular signal fraction f_V or the vascular Apparent Diffusion Coefficient (ADC) D*. While sensitive to perfusion, the IVIM metrics are protocol-dependent, and their interpretation can change depending on the flow regime spins experience during the dMRI measurements (e.g., diffusive vs ballistic), which is in general not known for a given voxel. These facts hamper their practical clinical utility, and innovative vascular dMRI models are needed to enable the in vivo calculation of biologically meaningful markers of capillary flow. These could have relevant applications in cancer, such as assessing the response to anti-angiogenic therapies targeting tumour vessels. This paper tackles this need by introducing SpinFlowSim, an open-source simulator of dMRI signals arising from blood flow within pipe networks. SpinFlowSim, tailored for the laminar flow patterns within capillaries, enables the synthesis of highly realistic microvascular dMRI signals, given networks reconstructed from histology. We showcase the simulator by generating synthetic signals for 15 networks reconstructed from liver biopsies, containing cancerous and non-cancerous tissue. The signals exhibit complex, non-mono-exponential behaviours, consistent with in vivo signal patterns and pointing to the co-existence of different flow regimes within the same network, as well as diffusion-time dependence. We also demonstrate the potential utility of SpinFlowSim by devising a strategy for microvascular property mapping informed by the synthetic signals, focussing on the quantification of blood velocity distribution moments and of an apparent network branching index. These were estimated in silico and in vivo, in healthy volunteers scanned at 1.5T and 3T and in 13 cancer patients scanned at 1.5T. In conclusion, realistic flow simulations, such as those enabled by SpinFlowSim, may play a key role in the development of next-generation dMRI methods for microvascular mapping, with immediate applications in oncology.
Volume 102, Article 103531.
Citations: 0
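The IVIM effect described above is classically modelled as a bi-exponential decay in the diffusion weighting b: a fast-decaying perfusion pool with fraction f_V and pseudo-diffusion coefficient D*, plus a tissue pool with diffusion coefficient D. A minimal forward model of that standard formulation (not SpinFlowSim's network-based simulation; parameter values are illustrative):

```python
import numpy as np

def ivim_signal(b, f_v, d_star, d, s0=1.0):
    """Two-compartment IVIM model:
    S(b) = S0 * [ f_v * exp(-b * D*) + (1 - f_v) * exp(-b * D) ]."""
    return s0 * (f_v * np.exp(-b * d_star) + (1.0 - f_v) * np.exp(-b * d))

b = np.array([0.0, 50.0, 800.0])  # b-values in s/mm^2
# typical-order values: 10% vascular fraction, D* >> D
sig = ivim_signal(b, f_v=0.1, d_star=0.05, d=0.001)
# at b=0 the signal equals s0; at high b the perfusion pool has decayed away
```

The protocol dependence criticised in the abstract is visible here: the apparent decay rate you measure depends on which b-values you sample.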
Local salient location-aware anomaly mask synthesis for pulmonary disease anomaly detection and lesion localization in CT images
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-07, DOI: 10.1016/j.media.2025.103523
Huaying Hao, Yitian Zhao, Shaoyi Leng, Yuanyuan Gu, Yuhui Ma, Feiming Wang, Qi Dai, Jianjun Zheng, Yue Liu, Jingfeng Zhang
Abstract: Automated pulmonary anomaly detection from computed tomography (CT) examinations is important for early warning of pulmonary diseases and can support clinical diagnosis and decision-making. Training most existing pulmonary disease detection and lesion segmentation models requires expert annotations, which is time-consuming and labour-intensive, and such models struggle to generalize to atypical diseases. In contrast, unsupervised anomaly detection removes the demand for dataset annotation and generalizes better than supervised methods to rare pathologies. However, due to the large distribution differences of CT scans within a volume and the high similarity between lesions and normal tissue, existing anomaly detection methods struggle to accurately localize small lesions, leading to a low anomaly detection rate. To alleviate these challenges, we propose a local salient location-aware anomaly mask generation and reconstruction framework for pulmonary disease anomaly detection and lesion localization. The framework consists of four components: (1) a Vector Quantized Variational AutoEncoder (VQ-VAE)-based reconstruction network that generates a codebook storing high-dimensional features; (2) an unsupervised feature-statistics-based anomaly feature synthesizer that produces features matching the realistic anomaly distribution by filtering salient features and interacting with the codebook; (3) a transformer-based feature classification network that identifies synthetic anomaly features; (4) a residual neighbourhood aggregation feature classification loss that mitigates network overfitting by penalizing the classification loss of recoverable corrupted features. Our approach rests on two intuitions. First, generating synthetic anomalies in feature space is more effective because lesions take different morphologies in image space and may share little in common. Second, regions with salient features or high reconstruction errors in CT images tend to resemble lesions and are more amenable to synthesizing abnormal features. The performance of the proposed method is validated on one public COVID-19 dataset and one in-house dataset containing 63,610 CT images covering five lung diseases. Experimental results show that, compared to feature-based, synthesis-based and reconstruction-based methods, the proposed method adapts to CT images with four pneumonia types (COVID-19, bacterial, fungal, and mycoplasma) and one non-pneumonia disease (cancer), and achieves state-of-the-art performance in image-level anomaly detection and lesion localization.
Volume 102, Article 103523.
Citations: 0
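The VQ-VAE codebook at the centre of component (1) maps each encoder feature to its nearest stored code vector. A generic sketch of that quantisation step (function name and toy values are illustrative, not the paper's code):

```python
import numpy as np

def vq_quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry, i.e. the
    quantisation step of a VQ-VAE: features (N, D), codebook (K, D)."""
    # squared Euclidean distance between every feature and every code
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
feats = np.array([[0.1, -0.1], [0.9, 1.2]])
quantised, idx = vq_quantize(feats, codebook)
# each feature snaps to its closest code: indices 0 and 1
```

A codebook trained on normal anatomy gives the synthesizer in component (2) a well-defined "normal" feature vocabulary against which anomalous features can be synthesized.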
Bridging multi-level gaps: Bidirectional reciprocal cycle framework for text-guided label-efficient segmentation in echocardiography
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-07, DOI: 10.1016/j.media.2025.103536
Zhenxuan Zhang, Heye Zhang, Tieyong Zeng, Guang Yang, Zhenquan Shi, Zhifan Gao
Abstract: Text-guided visual understanding is a potential solution for downstream task learning in echocardiography. It can reduce reliance on large labeled datasets and facilitate the learning of clinical tasks, because text embeds highly condensed clinical information into predictions for visual tasks. Contrastive language-image pretraining (CLIP)-based methods extract image-text features through a contrastive pre-training process over sequences of matched text and images, then adapt the pre-trained network parameters to improve downstream task performance with text guidance. However, these methods still face a multi-level gap between image and text, stemming mainly from the spatial, contextual, and domain levels, which makes medical image-text pairs and dense prediction tasks difficult to handle. We therefore propose a bidirectional reciprocal cycle (BRC) framework to bridge these multi-level gaps. First, BRC constructs pyramid reciprocal alignments of embedded global and local image-text feature representations, matching complex medical expertise with the corresponding phenomena. Second, BRC enforces the forward inference to be consistent with the reverse mapping (i.e., text → feature must be consistent with feature → text, or feature → image), enforcing perception of the contextual relationship between input data and features. Third, BRC adapts to the specific downstream segmentation task by embedding complex text information to directly guide the task through a cross-modal attention mechanism. Compared with 22 existing methods, BRC achieves state-of-the-art segmentation performance (DSC = 95.2%). Extensive experiments on 11,048 patients show that our method significantly improves accuracy and reduces reliance on labeled data (DSC increased from 81.5% to 86.6% with text assistance at a 1% labeled proportion).
Volume 102, Article 103536.
Citations: 0
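Cross-modal attention of the kind mentioned above is, in its generic form, scaled dot-product attention with text tokens as queries and image features as keys and values. A minimal sketch of that standard formulation (not the paper's exact module; all names and values are illustrative):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: text embeddings as queries attend
    over image features (keys/values), pulling visual evidence that
    matches the textual description."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

text_q = np.array([[1.0, 0.0]])                # one text token
img_k = np.array([[1.0, 0.0], [0.0, 1.0]])     # two image features
img_v = np.array([[10.0], [20.0]])
out = cross_attention(text_q, img_k, img_v)
# the query aligns with the first image feature, so the output is
# pulled toward its value (below the unweighted mean of 15)
```

In a segmentation decoder, the attended output conditions the per-pixel predictions on the clinical text.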
FedBM: Stealing knowledge from pre-trained language models for heterogeneous federated learning
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-07, DOI: 10.1016/j.media.2025.103524
Meilu Zhu, Qiushi Yang, Zhifan Gao, Yixuan Yuan, Jun Liu
Abstract: Federated learning (FL) has shown great potential in medical image computing, since it provides a decentralized learning paradigm that allows multiple clients to train a model collaboratively without privacy leakage. However, current studies show that data heterogeneity induces local learning bias in the classifiers and feature extractors of client models during local training, degrading the performance of the federated system. To address these issues, we propose a novel framework called Federated Bias eliMinating (FedBM) to remove local learning bias in heterogeneous federated learning. It consists of two modules: Linguistic Knowledge-based Classifier Construction (LKCC) and Concept-guided Global Distribution Estimation (CGDE). Specifically, LKCC exploits class concepts, prompts and pre-trained language models (PLMs) to obtain concept embeddings, which are used to estimate the latent concept distribution of each class in the linguistic space. Based on a theoretical derivation, we rely on these distributions to pre-construct a high-quality classifier for the clients to achieve classification optimization; the classifier is frozen to avoid classifier bias during local training. CGDE samples probabilistic concept embeddings from the latent concept distributions to learn a conditional generator that captures the input space of the global model. Three regularization terms are introduced to improve the quality and utility of the generator. The generator is shared by all clients and produces pseudo data to calibrate updates of the local feature extractors. Extensive comparison experiments and ablation studies on public datasets demonstrate the superior performance of FedBM over state-of-the-art methods and confirm the effectiveness of each module. Code: https://github.com/CUHK-AIM-Group/FedBM
Volume 102, Article 103524.
Citations: 0
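A classifier pre-constructed from linguistic embeddings and then frozen can be pictured, in its simplest form, as cosine-similarity matching of image features against fixed class embeddings; because nothing in it is trained locally, it cannot drift with client-specific data. This is a generic sketch of that idea only (the embeddings and function are illustrative, not FedBM's LKCC derivation):

```python
import numpy as np

def cosine_classifier(features, class_embeddings):
    """Classify features by cosine similarity to fixed (frozen) class
    embeddings derived from text; no locally trainable parameters."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    return (f @ c.T).argmax(axis=1)

# hypothetical text-derived embeddings for two classes
class_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
feats = np.array([[2.0, 0.1], [0.2, 3.0]])
pred = cosine_classifier(feats, class_emb)
# each feature is assigned to the class whose embedding it aligns with
```

The local feature extractor is then trained to produce features that align with the frozen class directions, rather than to reshape the classifier itself.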
Coordinate-based neural representation enabling zero-shot learning for fast 3D multiparametric quantitative MRI
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-06, DOI: 10.1016/j.media.2025.103530
Guoyan Lao, Ruimin Feng, Haikun Qi, Zhenfeng Lv, Qiangqiang Liu, Chunlei Liu, Yuyao Zhang, Hongjiang Wei
Abstract: Quantitative magnetic resonance imaging (qMRI) offers tissue-specific physical parameters with significant potential for neuroscience research and clinical practice. However, lengthy scan times for 3D multiparametric qMRI acquisition limit its clinical utility. Here we propose SUMMIT, an innovative imaging methodology comprising a data acquisition scheme and an unsupervised reconstruction for simultaneous multiparametric qMRI. SUMMIT first encodes multiple important quantitative properties into highly undersampled k-space. It then leverages an implicit neural representation, incorporated with a dedicated physics model, to reconstruct the desired multiparametric maps without external training datasets. SUMMIT delivers co-registered T1, T2, T2*, and subvoxel quantitative susceptibility mapping. Extensive simulations, phantom experiments, and in vivo brain imaging demonstrate SUMMIT's high accuracy. Notably, SUMMIT uniquely reveals microstructural alterations in patients with white-matter hyperintense lesions with high sensitivity and specificity. The proposed unsupervised approach to qMRI reconstruction also introduces a novel zero-shot learning paradigm for multiparametric imaging applicable to various medical imaging modalities.
Volume 102, Article 103530.
Citations: 0
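A dedicated physics model like the one SUMMIT embeds ties the reconstructed parameter maps to the measured signal, so no external training data is needed. For the T2* component, the standard mono-exponential gradient-echo model applies; a minimal sketch with a log-linear fit on noiseless samples (the functions and values are illustrative, not SUMMIT's reconstruction):

```python
import numpy as np

def t2star_decay(te, s0, t2star):
    """Mono-exponential gradient-echo signal model: S(TE) = S0 * exp(-TE/T2*).
    A physics model of this kind lets an unsupervised reconstruction
    explain the measured signal directly from parameter maps."""
    return s0 * np.exp(-te / t2star)

def fit_t2star(te, signal):
    """Recover S0 and T2* from noiseless samples via a log-linear fit:
    log S = log S0 - TE / T2*."""
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope

te = np.array([5.0, 15.0, 30.0, 45.0])  # echo times in ms
sig = t2star_decay(te, s0=100.0, t2star=25.0)
s0_hat, t2_hat = fit_t2star(te, sig)
# the fit recovers the ground-truth S0=100 and T2*=25 exactly
```

In SUMMIT-style reconstruction, the network predicts the parameter maps and the physics model forward-simulates the signal, with the loss measured against the undersampled k-space data.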
AI-based association analysis for medical imaging using latent-space geometric confounder correction
IF 10.7 · CAS Tier 1, Medicine
Medical Image Analysis, Pub Date: 2025-03-06, DOI: 10.1016/j.media.2025.103529
Xianjing Liu, Bo Li, Meike W. Vernooij, Eppo B. Wolvius, Gennady V. Roshchupkin, Esther E. Bron
Abstract: This study addresses the challenges of confounding effects and interpretability in artificial-intelligence-based medical image analysis. Whereas existing literature often resolves confounding by removing confounder-related information from latent representations, this strategy risks degrading image reconstruction quality in generative models, limiting their applicability for feature visualization. To tackle this, we propose a different strategy that retains confounder-related information in the latent representations while finding an alternative confounder-free representation of the image data. Our approach views the latent space of an autoencoder as a vector space in which imaging-related variables, such as the learning target (t) and confounder (c), each have a vector capturing their variability. The confounding problem is addressed by searching for a confounder-free vector that is orthogonal to the confounder-related vector yet maximally collinear with the target-related vector. To achieve this, we introduce a novel correlation-based loss that not only performs vector searching in the latent space but also encourages the encoder to generate latent representations linearly correlated with the variables. We then interpret the confounder-free representation by sampling and reconstructing images along the confounder-free vector. The efficacy and flexibility of the proposed method are demonstrated across three applications, accommodating multiple confounders and diverse image modalities. Results affirm the method's effectiveness in reducing confounder influences, preventing wrong or misleading associations, and offering a unique visual interpretation for in-depth investigation by clinical and epidemiological researchers. The code is released in the following GitLab repository: https://gitlab.com/radiology/compopbio/ai_based_association_analysis
Volume 102, Article 103529.
Citations: 0
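The orthogonality requirement on the confounder-free vector is, geometrically, a projection step: remove from the target direction its component along the confounder direction. The paper's actual search uses a correlation-based loss; the Gram-Schmidt sketch below only illustrates the geometry (names and vectors are hypothetical):

```python
import numpy as np

def confounder_free_direction(target_vec, confounder_vec):
    """Project the confounder component out of the target direction.
    The residual is orthogonal to the confounder axis while remaining
    as collinear with the target axis as possible (Gram-Schmidt step)."""
    c = confounder_vec / np.linalg.norm(confounder_vec)
    residual = target_vec - (target_vec @ c) * c
    return residual / np.linalg.norm(residual)

t = np.array([1.0, 1.0])  # latent direction of the learning target
c = np.array([1.0, 0.0])  # latent direction of the confounder
v = confounder_free_direction(t, c)
# v is a unit vector orthogonal to the confounder direction
```

Sampling latent codes along `v` and decoding them then visualises target-related image variation with the confounder's influence geometrically removed.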