Medical image analysis — Latest Articles

Multi-degradation-adaptation network for fundus image enhancement with degradation representation learning
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-14 · DOI: 10.1016/j.media.2024.103273
Fundus image quality serves as a crucial asset for medical diagnosis and applications. However, such images often suffer degradation during acquisition, and multiple types of degradation can occur in a single image. Although recent deep-learning-based methods have shown promising results in image enhancement, they tend to focus on restoring one type of degradation and generalise poorly to multiple modes of degradation. We propose an adaptive image enhancement network that can simultaneously handle a mixture of different degradations. The main contribution of this work is our Multi-Degradation-Adaptive module, which dynamically generates filters for different types of degradation. Moreover, we explore degradation representation learning and propose a degradation representation network and a Multi-Degradation-Adaptive discriminator for our accompanying image enhancement network. Experimental results demonstrate that our method outperforms several existing state-of-the-art methods in fundus image enhancement. Code will be available at https://github.com/RuoyuGuo/MDA-Net.
Citations: 0
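The "dynamically generates filters" idea above can be sketched as conditioning a convolution kernel on a per-image degradation representation. The following is a minimal, hypothetical illustration (the random linear head stands in for a learned filter-generating network; it is not the paper's implementation):

```python
import numpy as np

def generate_filter(degradation_code, k=3):
    # Hypothetical: map a degradation representation vector to a k x k
    # convolution kernel via a fixed linear layer (stand-in for a learned
    # filter-generating head); normalize so the response scale is stable.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((k * k, degradation_code.size))
    kernel = (W @ degradation_code).reshape(k, k)
    return kernel / (np.abs(kernel).sum() + 1e-8)

def apply_dynamic_filter(image, kernel):
    """Valid-mode 2-D correlation with the sample-specific kernel."""
    k = kernel.shape[0]
    H, W = image.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

code = np.array([0.2, -1.0, 0.5, 0.1])   # per-image degradation representation
img = np.random.default_rng(1).random((8, 8))
enhanced = apply_dynamic_filter(img, generate_filter(code))
print(enhanced.shape)  # (6, 6)
```

The key design point is that the kernel is a function of the input's degradation code, so images with different degradation mixtures are filtered differently by the same module.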
Dual domain distribution disruption with semantics preservation: Unsupervised domain adaptation for medical image segmentation
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-14 · DOI: 10.1016/j.media.2024.103275
Recent unsupervised domain adaptation (UDA) methods in medical image segmentation commonly utilize Generative Adversarial Networks (GANs) for domain translation. However, due to the inherent instability of GANs, the translated images often deviate from the ideal distribution, leading to visual inconsistency and incorrect style and causing the segmentation model to lock into a fixed, incorrect pattern. To address this problem, we propose a novel UDA framework, Dual Domain Distribution Disruption with Semantics Preservation (DDSP). Departing from the GAN-based idea of generating images that conform to the target-domain distribution, we make the model domain-agnostic and focus on anatomical structural information, using semantic information as a constraint to guide the model to adapt to images with disrupted distributions in both the source and target domains. Furthermore, we introduce inter-channel similarity feature alignment based on domain-invariant structural prior information, which helps the shared pixel-wise classifier achieve robust performance on target-domain features by aligning source- and target-domain features across channels. Our method significantly outperforms existing state-of-the-art UDA methods on three public datasets (heart, brain, and prostate). The code is available at https://github.com/MIXAILAB/DDSPSeg.
Citations: 0
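The inter-channel similarity alignment described above can be sketched as comparing the channel-by-channel similarity matrices of source and target feature maps. A minimal sketch, assuming cosine similarity and a mean-squared alignment objective (the paper's exact loss may differ):

```python
import numpy as np

def channel_similarity(feat):
    """Cosine similarity between channel maps of a (C, H, W) feature tensor."""
    C = feat.shape[0]
    flat = feat.reshape(C, -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    return flat @ flat.T  # (C, C) inter-channel similarity matrix

def alignment_loss(feat_src, feat_tgt):
    # Hypothetical stand-in for the alignment objective: penalize the
    # difference between source and target inter-channel similarities.
    diff = channel_similarity(feat_src) - channel_similarity(feat_tgt)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
f_src = rng.standard_normal((4, 8, 8))
loss_same = alignment_loss(f_src, f_src)                      # identical features -> 0
loss_diff = alignment_loss(f_src, rng.standard_normal((4, 8, 8)))
print(loss_same, loss_diff)
```

Because only channel relationships (not raw feature values) are aligned, the objective is invariant to many domain-specific appearance shifts, which is the intuition behind using a structural prior.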
A graph-theoretic approach for the analysis of lesion changes and lesions detection review in longitudinal oncological imaging
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-14 · DOI: 10.1016/j.media.2024.103268
Radiological follow-up of oncology patients requires detecting lesions and quantitatively analyzing lesion changes in longitudinal imaging studies, which is time-consuming and requires expertise.

We present a new method and workflow for analyzing and reviewing lesions and volumetric lesion changes in longitudinal scans of a patient. The generic graph-based method consists of lesion matching, classification of changes in individual lesions, and detection of patterns of lesion changes computed from the properties of the graph and its connected components. The workflow guides clinicians in detecting missed and wrongly identified lesions in manual and computed lesion annotations using the analysis of lesion changes, and serves as a heuristic method for automatically revising ground-truth lesion annotations in longitudinal scans.

The methods were evaluated on longitudinal studies of patients with three or more examinations of metastatic lesions in the lung (19 patients, 83 CT scans, 1178 lesions), the liver (18 patients, 77 CECT scans, 800 lesions) and the brain (30 patients, 102 T1W-Gad MRI scans, 317 lesions) with ground-truth lesion annotations. Lesion matching yielded a precision of 0.92–1.0 and a recall of 0.91–0.99. Classification of changes in individual lesions yielded an accuracy of 0.87–0.97; classification of patterns of lesion changes yielded an accuracy of 0.80–0.94. The lesion detection review workflow applied to manual and computed lesion annotations yielded 120 and 55 missed lesions and 20 and 164 wrongly identified lesions, respectively, across all longitudinal studies.

The automatic analysis of lesion changes and review of lesion detection in longitudinal studies of oncological patients helps detect missed and wrongly identified lesions, and may improve the accuracy of radiological interpretation and disease status evaluation.
Citations: 0
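The graph-based pipeline above (match lesions across timepoints, then read change patterns off connected components) can be sketched in a few lines. This is a minimal illustration under assumed rules — centre-distance matching and a toy component classifier — not the paper's actual criteria:

```python
import numpy as np

def match_lesions(centers_t1, centers_t2, max_dist=5.0):
    """Edges between lesions in consecutive scans whose centres are close."""
    edges = []
    for i, a in enumerate(centers_t1):
        for j, b in enumerate(centers_t2):
            if np.linalg.norm(np.array(a) - np.array(b)) <= max_dist:
                edges.append((("t1", i), ("t2", j)))
    return edges

def connected_components(nodes, edges):
    """Union-find over the lesion graph."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), []).append(n)
    return list(comps.values())

def classify(component):
    """Toy change pattern from the component's timepoint membership."""
    t1 = sum(1 for t, _ in component if t == "t1")
    t2 = sum(1 for t, _ in component if t == "t2")
    if t1 == 0: return "new"
    if t2 == 0: return "disappeared"
    if t1 == 1 and t2 == 1: return "matched"
    return "merged/split"

t1 = [(0, 0), (20, 20), (40, 40)]       # lesion centres, scan 1
t2 = [(1, 1), (41, 41), (60, 60)]       # lesion centres, scan 2
nodes = [("t1", i) for i in range(len(t1))] + [("t2", j) for j in range(len(t2))]
labels = sorted(classify(c) for c in connected_components(nodes, match_lesions(t1, t2)))
print(labels)  # ['disappeared', 'matched', 'matched', 'new']
```

A component containing only a later-timepoint lesion flags a possibly new (or previously missed) lesion, which is exactly how the review workflow surfaces annotation errors.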
Federated brain tumor segmentation: An extensive benchmark
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-14 · DOI: 10.1016/j.media.2024.103270
Recently, federated learning has raised increasing interest in the medical image analysis field due to its ability to aggregate multi-center data with privacy-preserving properties. A large number of federated training schemes have been published, which we categorize into global (one final model), personalized (one model per institution) or hybrid (one model per cluster of institutions) methods. However, their applicability to the recently published Federated Brain Tumor Segmentation 2022 dataset has not yet been explored. We propose an extensive benchmark of federated learning algorithms from all three classes on this task. While standard FedAvg already performs very well, we show that some methods from each category can bring a slight performance improvement and potentially limit the final model(s)' bias toward the predominant data distribution of the federation. Moreover, we provide a deeper understanding of the behavior of federated learning on this task through alternative ways of distributing the pooled dataset among institutions, namely an independent and identically distributed (IID) setup and a limited-data setup. Our code is available at https://github.com/MatthisManthe/Benchmark_FeTS2022.
Citations: 0
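The FedAvg baseline mentioned above is simply a dataset-size-weighted average of client parameters after each round of local training. A minimal sketch of the aggregation step:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size.

    client_weights: list of dicts mapping parameter name -> ndarray.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    agg = {}
    for name in client_weights[0]:
        agg[name] = sum(w[name] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
    return agg

# Two toy clients; the second holds 3x more data and so dominates the average.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_model = fedavg(clients, client_sizes=[1, 3])
print(global_model["w"])  # [2.5 3.5]
```

The benchmark's observation that FedAvg biases toward the predominant distribution follows directly from this weighting: large institutions contribute proportionally more to every aggregated parameter.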
Improving lesion volume measurements on digital mammograms
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-11 · DOI: 10.1016/j.media.2024.103269
Lesion volume is an important predictor of prognosis in breast cancer. However, it is currently impossible to compute lesion volumes accurately from digital mammography data, the most popular and readily available imaging modality for breast cancer. We take a step towards more accurate lesion volume measurement on digital mammograms by developing a model that estimates lesion volumes on processed mammograms. Processed mammograms are the images routinely used by radiologists in clinical practice and in breast cancer screening, and are available in medical centers; they are obtained from raw mammograms — the X-ray data coming directly from the scanner — by applying vendor-specific non-linear transformations. At the core of our volume estimation method is a physics-based algorithm for measuring lesion volumes on raw mammograms. We extend this algorithm to processed mammograms via a deep-learning image-to-image translation model that produces synthetic raw mammograms from processed mammograms in a multi-vendor setting. We assess the reliability and validity of our method on a dataset of 1778 mammograms with an annotated mass. Firstly, we investigate the correlation between lesion volumes computed from mediolateral oblique and craniocaudal views, yielding a Pearson correlation of 0.93 [95% confidence interval (CI) 0.92–0.93]. Secondly, we compare the resulting lesion volumes from true and synthetic raw data, yielding a Pearson correlation of 0.998 [95% CI 0.998–0.998]. Finally, for a subset of 100 mammograms with a malignant mass and a concurrent MRI examination, we analyze the agreement between lesion volume on mammography and MRI, yielding an intraclass correlation coefficient of 0.81 [95% CI 0.73–0.87] for consistency and 0.78 [95% CI 0.66–0.86] for absolute agreement. In conclusion, we developed an algorithm to measure mammographic lesion volume that reached excellent reliability and good validity when using MRI as ground truth. The algorithm may play a role in lesion characterization and breast cancer prognostication on mammograms.
Citations: 0
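The Pearson correlations with confidence intervals reported above are conventionally computed via the Fisher z-transform. A minimal sketch of that computation (the hard-coded 1.96 critical value assumes a 95% interval and a standard-normal approximation):

```python
import numpy as np

def pearson_ci(x, y):
    """Pearson correlation with a 95% Fisher-z confidence interval."""
    r = float(np.corrcoef(x, y)[0, 1])
    n = len(x)
    z = np.arctanh(r)                 # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)         # approximate standard error of z
    zcrit = 1.959964                  # ~97.5th percentile of N(0, 1)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, (float(lo), float(hi))

# Toy stand-in for paired lesion-volume measurements from two views.
rng = np.random.default_rng(0)
x = rng.random(200)
y = x + 0.1 * rng.standard_normal(200)
r, (lo, hi) = pearson_ci(x, y)
print(round(r, 2), lo < r < hi)
```

Note the transform's effect visible in the paper's numbers: near r = 1 (the 0.998 result) the interval collapses, because the z-scale stretches the correlation axis near its bounds.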
A spatio-temporal graph convolutional network for ultrasound echocardiographic landmark detection
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-10 · DOI: 10.1016/j.media.2024.103272
Landmark detection is a crucial task in medical image analysis, with applications across various fields. However, current methods struggle to accurately locate landmarks in medical images with blurred tissue boundaries caused by low image quality. In echocardiography in particular, sparse annotations make it challenging to predict landmarks with positional stability and temporal consistency. In this paper, we propose a spatio-temporal graph convolutional network tailored for echocardiography landmark detection. We sample landmark labels from the left ventricular endocardium and pre-calculate their correlations to establish structural priors. Our approach involves a graph convolutional neural network that learns the interrelationships among landmarks, significantly enhancing landmark accuracy within ambiguous tissue contexts. Additionally, we integrate gated recurrent units to capture the temporal consistency of landmarks across consecutive images, augmenting the model's resilience against unlabeled data. Through validation on three echocardiography datasets, our method demonstrates superior accuracy when contrasted with alternative landmark detection models.
Citations: 0
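The spatial half of such a model is a graph convolution over the landmark graph: each landmark's features are aggregated with those of its neighbours on the endocardial contour. A minimal sketch of one symmetrically normalized graph-convolution layer (the chain adjacency and identity weight matrix are illustrative choices, not the paper's architecture):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step with self-loops and symmetric
    normalization: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# 4 landmarks chained along the endocardial contour.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)  # toy (x, y) feature per landmark
W = np.eye(2)
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2)
```

Stacking such layers lets an ambiguous landmark borrow evidence from its well-localized neighbours; the temporal half (the gated recurrent units) then smooths each landmark's trajectory across frames.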
A bidirectional framework for fracture simulation and deformation-based restoration prediction in pelvic fracture surgical planning
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-10 · DOI: 10.1016/j.media.2024.103267
Pelvic fracture is a severe trauma with life-threatening implications. Surgical reduction is essential for restoring the anatomical structure and functional integrity of the pelvis, and it requires accurate preoperative planning. However, the complexity of pelvic fractures and limited data availability necessitate labor-intensive manual corrections in the clinical setting. We describe in this paper a novel bidirectional framework for automatic pelvic fracture surgical planning based on fracture simulation and structure restoration. Our fracture simulation method accounts for patient-specific pelvic structure, bone density information, and the randomness of fractures, enabling the generation of various types of fracture cases from healthy pelvises. Based on these features and on adversarial learning, we develop a novel structure restoration network to predict the deformation mapping in CT images before and after a fracture, for precise structural reconstruction of any fracture. Furthermore, a self-supervised strategy based on pelvic anatomical symmetry priors is developed to optimize the details of the restored pelvic structure. Finally, the restored pelvis is used as a template to generate a surgical reduction plan in which the fragments are repositioned in an efficient jigsaw-puzzle registration manner. Extensive experiments on simulated and clinical datasets, including scans with metal artifacts, show that our method achieves good accuracy and robustness: a mean SSIM of 90.7% for restorations, with translational errors of 2.88 mm and rotational errors of 3.18° for reductions on real datasets. Our method takes 52.9 s to complete the surgical planning in the phantom study, a significant acceleration compared with standard clinical workflows. Our method may facilitate effective, patient-specific surgical planning for pelvic fractures in clinical settings.
Citations: 0
From vision to text: A comprehensive review of natural image captioning in medical diagnosis and radiology report generation
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-08 · DOI: 10.1016/j.media.2024.103264
Natural Image Captioning (NIC) is an interdisciplinary research area at the intersection of Computer Vision (CV) and Natural Language Processing (NLP). Several works have been presented on the subject, ranging from early template-based approaches to more recent deep-learning-based methods. This paper surveys the area of NIC, focusing especially on its applications to Medical Image Captioning (MIC) and Diagnostic Captioning (DC) in the field of radiology. A review of the state of the art is conducted, summarizing key research works in NIC and DC to provide a wide overview of the subject, including existing NIC and MIC models, datasets, evaluation metrics, and previous reviews in the specialized literature. The reviewed work is thoroughly analyzed and discussed, highlighting the limitations of existing approaches and their potential implications for real clinical practice. Future research lines are likewise outlined on the basis of the detected limitations.
Citations: 0
DCCAT: Dual-Coordinate Cross-Attention Transformer for thrombus segmentation on coronary OCT
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-05 · DOI: 10.1016/j.media.2024.103265
Acute coronary syndromes (ACS) are one of the leading causes of mortality worldwide, with atherosclerotic plaque rupture and subsequent thrombus formation as the main underlying substrate. Thrombus burden evaluation is important for tailoring treatment and predicting prognosis. Coronary optical coherence tomography (OCT) enables in-vivo visualization of thrombus that cannot be achieved by other imaging modalities. However, automatic quantification of thrombus on OCT has not been implemented; the main challenges are the variation in thrombus location, size, and irregularity, together with the small dataset size. In this paper, we propose a novel dual-coordinate cross-attention transformer network, termed DCCAT, to overcome these challenges and achieve the first automatic segmentation of thrombus on OCT. Imaging features from both Cartesian and polar coordinates are encoded and fused based on long-range correspondence via a multi-head cross-attention mechanism. The dual-coordinate cross-attention block is hierarchically stacked amid convolutional layers at multiple levels, allowing comprehensive feature enhancement. The model was developed on 5,649 OCT frames from 339 patients and tested on independent external OCT data comprising 548 frames from 52 patients. DCCAT achieved a Dice similarity score (DSC) of 0.706 in segmenting thrombus, significantly higher than CNN-based (0.656) and Transformer-based (0.584) models. We show that the additional polar input not only contributes discriminative features from another coordinate system but also improves model robustness to geometric transformation. Experimental results show that DCCAT achieves competitive performance with only 10% of the total data, highlighting its data efficiency. The proposed dual-coordinate cross-attention design can be easily integrated into other Transformer models to boost performance.
Citations: 0
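The dual-coordinate input above relies on resampling each OCT frame from the Cartesian grid onto a (radius, angle) grid centred on the catheter. A minimal sketch of that polar resampling, assuming nearest-neighbour lookup and the image centre as the origin (real pipelines typically use the catheter centre and interpolation):

```python
import numpy as np

def to_polar(img, n_r=32, n_theta=64):
    """Resample a square Cartesian image onto a (radius, angle) grid
    using nearest-neighbour lookup around the image centre."""
    H, W = img.shape
    cy, cx = (H - 1) / 2, (W - 1) / 2
    rmax = min(cy, cx)
    rs = np.linspace(0, rmax, n_r)
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_r, n_theta))
    for i, r in enumerate(rs):
        for j, t in enumerate(ts):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out

img = np.zeros((33, 33))
img[16, 16] = 1.0                   # bright spot at the centre
polar = to_polar(img)
print(polar.shape)                  # (32, 64): radius x angle
```

In the polar view, the roughly annular vessel wall becomes a near-horizontal band, so features that look irregular in Cartesian coordinates become translation-like, which is what the cross-attention fusion exploits.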
Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images
IF 10.7 · Medicine, CAS Q1
Medical image analysis · Pub Date: 2024-07-04 · DOI: 10.1016/j.media.2024.103258
Wenqiang Zu, Shenghao Xie, Qing Zhao, Guoqi Li, Lei Ma
Foundation models pre-trained on large-scale data have achieved widespread success in various natural-imaging downstream tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt foundation models to new domains by updating only a small portion of parameters, reducing computational overhead. However, the effectiveness of these PEFT methods, especially in cross-domain few-shot scenarios such as medical image analysis, has not been fully explored. In this work, we study the performance of PEFT when adapting foundation models to medical image classification tasks. Furthermore, to address the limitations of mainstream prompt tuning methods — in how prompts are introduced and in their approximation capabilities on Transformer architectures — we propose the Embedded Prompt Tuning (EPT) method, which embeds prompt tokens into the expanded channels. We also find that there are anomalies in the feature-space distribution of foundation models during pre-training, and that prompt tuning can help mitigate this negative impact. To explain this phenomenon, we introduce a novel perspective for understanding prompt tuning: prompt tuning is a distribution calibrator. We support this view by analysing the patch-wise scaling and feature-separation operations contained in EPT. Our experiments show that EPT outperforms several state-of-the-art fine-tuning methods by a significant margin on few-shot medical image classification tasks and completes fine-tuning in a highly competitive time, indicating that EPT is an effective PEFT method. The source code is available at github.com/zuwenqiang/EPT.
Citations: 0
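The contrast between classic prompt tuning and the "embedded into expanded channels" idea above can be illustrated with token shapes. This is a hypothetical shape-level sketch only — EPT's actual mechanism is more involved than simple concatenation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim, n_prompts = 16, 32, 4

patch_tokens = rng.standard_normal((n_tokens, dim))   # ViT patch embeddings

# Classic prompt tuning: learnable tokens are prepended along the
# sequence axis, leaving the channel dimension untouched.
seq_prompts = rng.standard_normal((n_prompts, dim))
classic = np.concatenate([seq_prompts, patch_tokens], axis=0)

# Channel-embedded variant (illustrative): learnable prompt features are
# broadcast to every token and concatenated along the channel axis,
# expanding each token's feature dimension instead of the sequence.
chan_prompts = rng.standard_normal((n_prompts,))
embedded = np.concatenate(
    [patch_tokens, np.tile(chan_prompts, (n_tokens, 1))], axis=1)

print(classic.shape, embedded.shape)  # (20, 32) (16, 36)
```

Sequence-prepended prompts interact with patches only through attention, whereas channel-embedded prompts enter every token's feature vector directly, which is the lever the paper's "distribution calibrator" analysis builds on.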