The transfer learning gap: quantifying transfer learning in a medical image case

Javier Guerra-Librero, M. Bento, R. Frayne
{"title":"The transfer learning gap: quantifying transfer learning in a medical image case","authors":"Javier Guerra-Librero, M. Bento, R. Frayne","doi":"10.1117/12.2670071","DOIUrl":null,"url":null,"abstract":"Transfer learning is a widely used technique in medical imaging and other research fields where a scarcity of available data limit the training of machine learning algorithms. Despite its widespread use and extensive supporting body of research, the specific mechanisms behind transfer learning are not completely understood. In this work, we quantify the effectiveness of transfer learning in medical image classification scenarios for different numbers of training set images. We trained ResNet50, a popular deep learning model used in medical image classification, using two scenarios: 1) applying transfer learning to a pre-trained network and 2) training the same model from scratch (i.e., starting with randomly selected weights). We analyzed the performance of the model under both scenarios as the number of training set images increased from 5,000 to 160,000 medical images. We introduced and evaluated a metric, the transfer learning gap (TLG), to quantify the differences between the two scenarios. The TLG measured the difference in the area under the loss curves (AULCs) when transfer learning was applied and when the model was trained from scratch. Our experiments show that as the training set size increases, the TLG trends to zero, suggesting that the advantage of using transfer learning decreases. The trend in the AULC suggests a training set size where the two scenarios would have equal losses. At this point, the model reaches the same performance regardless of if transfer learning or training from scratch was used. This study is important because it provides a novel metric to understand and quantify the effect of transfer learning.","PeriodicalId":147201,"journal":{"name":"Symposium on Medical Information Processing and Analysis","volume":"208 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Medical Information Processing and Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2670071","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Transfer learning is a widely used technique in medical imaging and other research fields where a scarcity of available data limits the training of machine learning algorithms. Despite its widespread use and an extensive supporting body of research, the specific mechanisms behind transfer learning are not completely understood. In this work, we quantify the effectiveness of transfer learning in medical image classification scenarios for different numbers of training set images. We trained ResNet50, a popular deep learning model used in medical image classification, under two scenarios: 1) applying transfer learning to a pre-trained network, and 2) training the same model from scratch (i.e., starting from randomly initialized weights). We analyzed the performance of the model under both scenarios as the training set size increased from 5,000 to 160,000 medical images. We introduced and evaluated a metric, the transfer learning gap (TLG), to quantify the difference between the two scenarios. The TLG measures the difference between the areas under the loss curves (AULCs) obtained when transfer learning is applied and when the model is trained from scratch. Our experiments show that as the training set size increases, the TLG tends toward zero, suggesting that the advantage of using transfer learning decreases. The trend in the AULCs suggests a training set size at which the two scenarios would have equal losses; at that point, the model reaches the same performance regardless of whether transfer learning or training from scratch is used. This study is important because it provides a novel metric to understand and quantify the effect of transfer learning.
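
As a rough illustration of the two scenarios compared in the abstract, the sketch below sets up ResNet50 both ways. The paper does not specify its framework or task details, so the use of PyTorch/torchvision, the `NUM_CLASSES` value, and the replaced classification head are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two training scenarios:
# transfer learning from pre-trained weights vs. training from scratch.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_CLASSES = 2  # hypothetical class count; the paper does not state it

# Scenario 1: transfer learning -- start from ImageNet-pre-trained weights,
# then replace the final fully connected layer for the target task.
model_transfer = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model_transfer.fc = nn.Linear(model_transfer.fc.in_features, NUM_CLASSES)

# Scenario 2: from scratch -- identical architecture, but randomly
# initialized weights (weights=None skips any pre-trained checkpoint).
model_scratch = resnet50(weights=None)
model_scratch.fc = nn.Linear(model_scratch.fc.in_features, NUM_CLASSES)
```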
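
Likewise, a minimal sketch of how the TLG could be computed from two recorded loss curves, taking the abstract's definition at face value: approximate each AULC with the trapezoidal rule over training epochs and take the difference. The sign convention, the unit epoch spacing, the example curves, and the names `aulc` and `transfer_learning_gap` are all illustrative assumptions.

```python
import numpy as np

def aulc(losses) -> float:
    """Area under a loss curve via the trapezoidal rule,
    assuming unit spacing between epochs."""
    losses = np.asarray(losses, dtype=float)
    return float(np.sum((losses[:-1] + losses[1:]) / 2.0))

def transfer_learning_gap(loss_scratch, loss_transfer) -> float:
    """TLG as the AULC difference between the two scenarios; values
    near zero suggest little remaining advantage from transfer learning."""
    return aulc(loss_scratch) - aulc(loss_transfer)

# Illustrative curves (not the paper's data): the pre-trained model
# typically starts at a lower loss than the randomly initialized one.
loss_transfer = [0.9, 0.5, 0.3, 0.2, 0.15]
loss_scratch = [2.3, 1.4, 0.8, 0.4, 0.20]
print(transfer_learning_gap(loss_scratch, loss_transfer))  # positive gap
```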