Medical image analysis: Latest Articles

Editorial for Special Issue on Foundation Models for Medical Image Analysis
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2025-02-01 (Epub: 2024-11-06) DOI: 10.1016/j.media.2024.103389
Xiaosong Wang, Dequan Wang, Xiaoxiao Li, Jens Rittscher, Dimitris Metaxas, Shaoting Zhang
Citations: 0
Few-shot medical image segmentation with high-fidelity prototypes
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2025-02-01 (Epub: 2024-11-30) DOI: 10.1016/j.media.2024.103412
Song Tang, Shaxu Yan, Xiaozhi Qi, Jianxin Gao, Mao Ye, Jianwei Zhang, Xiatian Zhu
Few-shot Semantic Segmentation (FSS) aims to adapt a pretrained model to new classes with as few as a single labeled training sample per class. Although prototype-based approaches have achieved substantial success, existing models are limited to imaging scenarios with clearly distinct objects and relatively simple backgrounds (e.g., natural images), which makes them suboptimal for medical imaging, where neither condition holds. To address this problem, we propose a novel Detail Self-refined Prototype Network (DSPNet) that constructs high-fidelity prototypes representing the object foreground and the background more comprehensively. Specifically, to construct global semantics while maintaining the captured detail semantics, we learn the foreground prototypes by modeling the multimodal structures with clustering and then fusing them in a channel-wise manner. Considering that the background often has no apparent semantic relation across the spatial dimensions, we integrate channel-specific structural information under sparse channel-aware regulation. Extensive experiments on three challenging medical image benchmarks show the superiority of DSPNet over previous state-of-the-art methods. The code and data are available at https://github.com/tntek/DSPNet.
Citations: 0
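The foreground-prototype construction described in the abstract above (clustering foreground features into multiple prototypes, then fusing them channel-wise) can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' DSPNet code: the k-means routine, the softmax fusion weights, and all names are assumptions made for demonstration.

```python
import numpy as np

def clustered_prototypes(features, mask, n_clusters=3, n_iter=10, seed=0):
    """Toy k-means over foreground feature vectors to obtain multiple
    prototypes, then fuse them channel-wise (illustrative only)."""
    rng = np.random.default_rng(seed)
    fg = features[mask.astype(bool)]          # (N_fg, C) foreground vectors
    # initialize cluster centers from random foreground vectors
    centers = fg[rng.choice(len(fg), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = ((fg[:, None, :] - centers[None]) ** 2).sum(-1)   # (N_fg, K)
        assign = d.argmin(1)
        for k in range(n_clusters):
            pts = fg[assign == k]
            if len(pts):
                centers[k] = pts.mean(0)
    # channel-wise fusion: softmax weight per channel over the K prototypes
    w = np.exp(centers) / np.exp(centers).sum(0, keepdims=True)  # (K, C)
    return (w * centers).sum(0)                                  # (C,)

feats = np.random.default_rng(1).normal(size=(8, 8, 4))  # H x W x C features
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
proto = clustered_prototypes(feats.reshape(-1, 4), mask.reshape(-1))
print(proto.shape)  # (4,)
```

The resulting vector plays the role of a single fused foreground prototype; the real network refines this with learned detail semantics.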
Corrigendum to "Detection and analysis of cerebral aneurysms based on X-ray rotational angiography - the CADA 2020 challenge" [Medical Image Analysis, April 2022, Volume 77, 102333]
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2025-02-01 (Epub: 2024-10-10) DOI: 10.1016/j.media.2024.103363
Matthias Ivantsits, Leonid Goubergrits, Jan-Martin Kuhnigk, Markus Huellebrand, Jan Bruening, Tabea Kossen, Boris Pfahringer, Jens Schaller, Andreas Spuler, Titus Kuehne, Yizhuan Jia, Xuesong Li, Suprosanna Shit, Bjoern Menze, Ziyu Su, Jun Ma, Ziwei Nie, Kartik Jain, Yanfei Liu, Yi Lin, Anja Hennemuth
Citations: 0
The Developing Human Connectome Project: A fast deep learning-based pipeline for neonatal cortical surface reconstruction
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2025-02-01 (Epub: 2024-11-26) DOI: 10.1016/j.media.2024.103394
Qiang Ma, Kaili Liang, Liu Li, Saga Masui, Yourong Guo, Chiara Nosarti, Emma C Robinson, Bernhard Kainz, Daniel Rueckert
The Developing Human Connectome Project (dHCP) aims to explore developmental patterns of the human brain during the perinatal period. An automated processing pipeline has been developed to extract high-quality cortical surfaces from structural brain magnetic resonance (MR) images for the dHCP neonatal dataset. However, the current implementation of the pipeline requires more than 6.5 h to process a single MRI scan, making it expensive for large-scale neuroimaging studies. In this paper, we propose a fast deep learning (DL) based pipeline for dHCP neonatal cortical surface reconstruction, incorporating DL-based brain extraction, cortical surface reconstruction and spherical projection, as well as GPU-accelerated cortical surface inflation and cortical feature estimation. We introduce a multiscale deformation network to learn diffeomorphic cortical surface reconstruction end-to-end from T2-weighted brain MRI. A fast unsupervised spherical mapping approach is integrated to minimize metric distortions between cortical surfaces and projected spheres. The entire workflow of our DL-based dHCP pipeline completes within only 24 s on a modern GPU, nearly 1000 times faster than the original dHCP pipeline. Qualitative assessment demonstrates that for 82.5% of the test samples, the cortical surfaces reconstructed by our DL-based pipeline are of superior (54.2%) or equal (28.3%) quality compared to the original dHCP pipeline.
Citations: 0
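The multiscale deformation network in the abstract above learns a diffeomorphic mapping of a surface; the elementary operation underneath such approaches is numerical integration of a velocity field over mesh vertices. Below is a minimal NumPy sketch using Euler integration with a hand-written toy velocity field; nothing here is from the dHCP pipeline, and the step count and field are arbitrary assumptions.

```python
import numpy as np

def integrate_velocity(verts, velocity_fn, n_steps=16):
    """Euler integration of a velocity field over mesh vertices, the basic
    operation behind learned diffeomorphic surface deformation (sketch;
    velocity_fn stands in for a trained deformation network)."""
    h = 1.0 / n_steps
    v = verts.copy()
    for _ in range(n_steps):
        v = v + h * velocity_fn(v)
    return v

# toy example: points on a unit sphere under a uniform outward flow
sphere = np.random.default_rng(0).normal(size=(100, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
out = integrate_velocity(sphere, lambda p: 0.5 * p)
radii = np.linalg.norm(out, axis=1)
print(radii.mean())  # ~1.64, approximating exp(0.5) as n_steps grows
```

Taking many small steps (or scaling-and-squaring) keeps the map invertible, which is what "diffeomorphic" buys over a single large displacement.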
AutoFOX: An automated cross-modal 3D fusion framework of coronary X-ray angiography and OCT
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-15 DOI: 10.1016/j.media.2024.103432
Chunming Li, Yuchuan Qiao, Wei Yu, Yingguang Li, Yankai Chen, Zehao Fan, Runguo Wei, Botao Yang, Zhiqing Wang, Xuesong Lu, Lianglong Chen, Carlos Collet, Miao Chu, Shengxian Tu
Coronary artery disease (CAD) is the leading cause of death globally. The 3D fusion of coronary X-ray angiography (XA) and optical coherence tomography (OCT) provides complementary information on coronary anatomy and plaque morphology, significantly improving CAD diagnosis and prognosis by enabling precise hemodynamic and computational physiology assessments. The challenges of fusion lie in the potential misalignment caused by the foreshortening effect in XA and the non-uniform acquisition of the OCT pullback; moreover, reconstruction of major bifurcations is technically demanding. This paper proposes an automated 3D fusion framework, AutoFOX, built around the deep-learning model TransCAN for 3D vessel alignment. The 3D vessel contours are processed as sequential data, whose features are extracted and integrated with bifurcation information to enhance alignment in a multi-task fashion. TransCAN shows the highest alignment accuracy among all methods, with a mean alignment error of 0.99 ± 0.81 mm along the vascular sequence and only 0.82 ± 0.69 mm at key anatomical positions. AutoFOX uniquely employs an advanced side-branch lumen reconstruction algorithm to enhance the assessment of bifurcation lesions. A multi-center dataset is utilized for independent external validation, using paired 3D coronary computed tomography angiography (CTA) as the reference standard, and novel morphological metrics are proposed to evaluate fusion accuracy. Our experiments show that the fusion model generated by AutoFOX exhibits high morphological consistency with CTA. AutoFOX enables automatic and comprehensive assessment of CAD, especially accurate assessment of bifurcation stenosis, which is of clinical value in guiding and optimizing procedures.
Citations: 0
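The alignment problem TransCAN addresses (matching a non-uniformly acquired OCT pullback to an angiographic centerline) is classically approached with dynamic time warping over per-frame features such as lumen area. The NumPy sketch below shows only that classic baseline; it is not the AutoFOX method, and the choice of a 1-D lumen-area feature is an assumption.

```python
import numpy as np

def dtw_align(a, b):
    """Dynamic time warping distance between two 1-D feature sequences,
    e.g. lumen areas along an OCT pullback vs. an angiographic centerline.
    A classic baseline for the alignment task (sketch, not AutoFOX)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # each step may advance either sequence or both (warping)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
b = np.array([1.0, 1.0, 2.0, 3.0, 2.0, 1.0])  # same profile, stretched in time
dist = dtw_align(a, b)
print(dist)  # 0.0 -- identical shapes up to time warping
```

A learned aligner like TransCAN replaces the hand-crafted cost with features extracted from the full 3D contours and bifurcation landmarks.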
Organ-level instance segmentation enables continuous time-space-spectrum analysis of pre-clinical abdominal photoacoustic tomography images
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-12 DOI: 10.1016/j.media.2024.103402
Zhichao Liang, Shuangyang Zhang, Zongxin Mo, Xiaoming Zhang, Anqi Wei, Wufan Chen, Li Qi
Photoacoustic tomography (PAT), as a novel biomedical imaging technique, is able to capture temporal, spatial and spectral tomographic information from organisms. Organ-level multi-parametric analysis of continuous PAT images is of interest, since it enables the quantification of organ-specific morphological and functional parameters in small animals. Accurate organ delineation is imperative for organ-level image analysis, yet the low contrast and blurred organ boundaries in PAT images pose a challenge for precise segmentation. Fortunately, shared structural information among continuous images in the time-space-spectrum domain may be used to enhance segmentation. In this paper, we introduce a structure fusion enhanced graph convolutional network (SFE-GCN), which aims at automatically segmenting the major organs, including the body, liver, kidneys, spleen, vessels and spine, in abdominal PAT images of mice. SFE-GCN enhances the structural features of organs by fusing information in continuous image sequences captured across the time, space and spectrum domains. As validated on large-scale datasets across different imaging scenarios, our method not only preserves fine structural details but also ensures anatomically aligned organ contours. Most importantly, this study explores the application of SFE-GCN in multi-dimensional organ image analysis, including organ-based dynamic morphological analysis, organ-wise light fluence correction and segmentation-enhanced spectral un-mixing. Code will be released at https://github.com/lzc-smu/SFEGCN.git.
Citations: 0
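SFE-GCN builds on graph convolutions; for readers unfamiliar with the building block, here is the standard symmetric-normalized graph convolution layer (after Kipf and Welling) in NumPy. This is the generic layer only and says nothing about the paper's structure-fusion design; the toy graph and weights are assumptions.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: add self-loops, symmetrically normalize the
    adjacency, then apply a linear map and ReLU. The generic building block
    behind GCN-based segmentation models (sketch)."""
    A_hat = A + np.eye(len(A))                 # adjacency with self-loops
    d = A_hat.sum(1)                           # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

# toy graph: 4 nodes in a ring, 3 input features -> 2 output features
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
X = np.arange(12.0).reshape(4, 3)
W = np.ones((3, 2)) * 0.1
out = gcn_layer(X, A, W)
print(out.shape)  # (4, 2)
```

Stacking such layers lets each node (e.g., an organ-contour point) aggregate structural information from its neighbors.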
Personalized predictions of Glioblastoma infiltration: Mathematical models, Physics-Informed Neural Networks and multimodal scans
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-12 DOI: 10.1016/j.media.2024.103423
Ray Zirui Zhang, Ivan Ezhov, Michal Balcerak, Andy Zhu, Benedikt Wiestler, Bjoern Menze, John S Lowengrub
Predicting the infiltration of Glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. This work proposes a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction-diffusion partial differential equation (PDE) model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters, and a fine-tuning step to determine the patient-specific parameters. Additionally, the diffuse-domain method is employed to handle the complex brain geometry within the PINN framework. The method is validated on both synthetic and patient datasets, showing promise for personalized GBM treatment through parametric inference within clinically relevant timeframes.
Citations: 0
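Reaction-diffusion models of GBM growth are typically of Fisher-KPP type, u_t = D∇²u + ρu(1−u), and a PINN embeds the residual of this equation in its loss. The sketch below evaluates such a residual with finite differences in 1D and verifies it vanishes on a field generated by the same dynamics. It is a toy under assumed parameters, not the authors' autodiff PINN.

```python
import numpy as np

def fisher_kpp_residual(u, dx, dt, D, rho):
    """Residual of the Fisher-KPP model u_t = D u_xx + rho u (1 - u),
    the PDE term a PINN would penalize (finite-difference sketch)."""
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    reaction = rho * u[:-1, 1:-1] * (1 - u[:-1, 1:-1])
    return u_t - D * u_xx - reaction

# generate a trajectory by explicit Euler stepping of the same PDE
dx, dt, D, rho = 0.1, 0.001, 0.05, 1.0
x = np.arange(0, 1, dx)
u = [np.exp(-((x - 0.5) ** 2) / 0.02)]       # Gaussian "tumor seed"
for _ in range(50):
    prev = u[-1]
    lap = np.zeros_like(prev)
    lap[1:-1] = (prev[2:] - 2 * prev[1:-1] + prev[:-2]) / dx**2
    u.append(prev + dt * (D * lap + rho * prev * (1 - prev)))
u = np.array(u)

res = fisher_kpp_residual(u, dx, dt, D, rho)
print(np.abs(res).max())  # near machine precision: the field satisfies the PDE
```

In a PINN, this residual is evaluated by automatic differentiation at collocation points and minimized jointly with the data-fit term, so D and ρ become trainable patient-specific parameters.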
Contrastive machine learning reveals species-shared and species-specific brain functional architecture
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-12 DOI: 10.1016/j.media.2024.103431
Li Yang, Guannan Cao, Songyao Zhang, Weihan Zhang, Yusong Sun, Jingchao Zhou, Tianyang Zhong, Yixuan Yuan, Tao Liu, Tianming Liu, Lei Guo, Yongchun Yu, Xi Jiang, Gang Li, Junwei Han, Tuo Zhang
A deep comparative analysis of the brain functional connectome across primate species has the potential to yield valuable insights for both scientific and clinical applications. However, interspecies commonalities and differences are inherently entangled with each other and with other, irrelevant factors. Here we develop a novel contrastive machine learning method, called the shared-unique variation autoencoder (SU-VAE), to disentangle the species-shared and species-specific functional connectome variation between macaque and human brains on large-scale resting-state fMRI datasets. The method was validated by confirming that human-specific features are differentially related to cognitive scores, while features shared with macaque better capture sensorimotor ones. Projection of the disentangled connectomes onto the cortex revealed a gradient that reflects species divergence. In contrast to macaque, the introduction of human-specific connectomes to the shared ones enhanced network efficiency. We identified genes enriched for 'axon guidance' that could be related to the human-specific connectomes. Code for the model and analysis is available at https://github.com/BBBBrain/SU-VAE.
Citations: 0
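A linear analogue of disentangling group-specific from shared variation is contrastive PCA: find directions with high variance in a target group but low variance in a background group, via the eigenvectors of C_target − αC_background. The NumPy sketch below is only that linear analogue; SU-VAE itself is a nonlinear autoencoder, and the synthetic data and names here are assumptions.

```python
import numpy as np

def contrastive_directions(target, background, alpha=1.0, k=2):
    """Contrastive PCA: top-k eigenvectors of C_target - alpha * C_background,
    i.e. axes rich in target-specific variance (linear sketch, not SU-VAE)."""
    ct = np.cov(target, rowvar=False)
    cb = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(ct - alpha * cb)     # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]       # top-k contrastive axes

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 5))                   # "background" variation
# target group = shared variation plus extra variance on dimension 0 only
target = shared + np.outer(rng.normal(size=500), [1, 0, 0, 0, 0]) * 2
dirs = contrastive_directions(target, shared)
print(np.abs(dirs[:, 0]).argmax())  # 0: the target-specific axis is recovered
```

The VAE version plays the same game with two latent blocks (shared vs. unique) instead of linear axes, which allows nonlinear connectome structure to be separated.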
Improving cross-domain generalizability of medical image segmentation using uncertainty and shape-aware continual test-time domain adaptation
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-10 DOI: 10.1016/j.media.2024.103422
Jiayi Zhu, Bart Bolsterlee, Yang Song, Erik Meijering
Continual test-time adaptation (CTTA) aims to continuously adapt a source-trained model to a target domain with minimal performance loss, while assuming no access to the source data. Typically, source models are trained with empirical risk minimization (ERM) and are assumed to perform reasonably on the target domain to allow for further adaptation. However, ERM-trained models often fail to perform adequately on a severely drifted target domain, resulting in unsatisfactory adaptation results. To tackle this issue, we propose a generalizable CTTA framework. First, we incorporate domain-invariant shape modeling into the model and train it using domain-generalization (DG) techniques, promoting target-domain adaptability regardless of the severity of the domain shift. Then, an uncertainty and shape-aware mean teacher network performs adaptation with uncertainty-weighted pseudo-labels and shape information. As part of this process, a novel uncertainty-ranked cross-task regularization scheme is proposed to impose consistency between segmentation maps and their corresponding shape representations, both produced by the student model, at the patch and global levels to enhance performance further. Lastly, small portions of the model's weights are stochastically reset to the initial domain-generalized state at each adaptation step, preventing the model from 'diving too deep' into any specific test samples. The proposed method demonstrates strong continual adaptability and outperforms its peers on five cross-domain segmentation tasks, showcasing its effectiveness and generalizability.
Citations: 0
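Two ingredients named in the abstract above, the mean-teacher update and uncertainty-weighted pseudo-labels, have standard minimal forms: an exponential moving average of student weights into the teacher, and entropy-based down-weighting of uncertain predictions. A NumPy sketch of both follows; the weighting formula is a generic choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    """Exponential moving average of student weights into the teacher,
    the core update of a mean-teacher adaptation loop (sketch)."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}

def uncertainty_weights(probs, eps=1e-8):
    """Weight pseudo-labels by normalized inverse predictive entropy so that
    uncertain pixels contribute less to the adaptation loss."""
    entropy = -(probs * np.log(probs + eps)).sum(-1)
    return 1.0 - entropy / np.log(probs.shape[-1])  # 1 = confident, 0 = uniform

# one confident pixel and one maximally uncertain pixel, 3 classes
probs = np.array([[0.98, 0.01, 0.01],
                  [1 / 3, 1 / 3, 1 / 3]])
w = uncertainty_weights(probs)
print(w.round(2))  # confident pixel ~0.9, uniform pixel ~0
```

In the full framework these weights multiply the per-pixel pseudo-label loss, while `ema_update` keeps the teacher a slowly moving average of the adapting student.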
MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis
IF 10.7, Tier 1, Medicine
Medical image analysis Pub Date: 2024-12-09 DOI: 10.1016/j.media.2024.103421
Trinh Thi Le Vuong, Jin Tae Kwak
There is no doubt that advanced artificial intelligence models and high-quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue for any specific task, for several reasons including privacy and ethical concerns around patient data. In this work, we propose to exploit knowledge distillation, i.e., utilizing an existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student-teacher framework to learn a target model from a pre-trained teacher model without direct access to the source data, and distill relevant knowledge via momentum contrastive learning with a multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across scenarios in which the teacher model was trained on the same, a related, or an unrelated classification task relative to the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology.
Citations: 0
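Momentum contrastive learning optimizes an InfoNCE objective: a student (query) embedding should match its teacher (key) embedding against a queue of negatives. Below is a NumPy sketch of the standard loss only; the distillation specifics and multi-head attention of MoMA are not reproduced, and the dimensions and temperature are assumptions.

```python
import numpy as np

def info_nce(query, positive, queue, tau=0.07):
    """InfoNCE loss for one query against its positive key and a queue of
    negatives -- the standard momentum-contrastive objective (sketch)."""
    q = query / np.linalg.norm(query)
    k = positive / np.linalg.norm(positive)
    neg = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    logits = np.concatenate([[q @ k], neg @ q]) / tau  # positive first
    logits -= logits.max()                             # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=16)
loss_match = info_nce(q, q, rng.normal(size=(8, 16)))            # aligned pair
loss_mismatch = info_nce(q, rng.normal(size=16), rng.normal(size=(8, 16)))
print(loss_match < loss_mismatch)  # aligned pairs yield lower loss
```

In MoCo-style training the key encoder is an EMA copy of the query encoder and the queue stores past keys, giving many negatives at low memory cost.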