Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone fine-tuning without episodic meta-learning dominates for few-shot learning image classification

Adrian El Baz, André Carvalho, Hong Chen, Fábio Ferreira, H. Gouk, S. Hu, F. Hutter, Zhengying Liu, F. Mohr, J. V. Rijn, Xin Wang, Isabelle M Guyon
DOI: 10.48550/arXiv.2206.08138
Journal: Advances in neural information processing systems, vol. 12, no. 1, pp. 80-96
Publication date: 2022-06-15
Citations: 8

Abstract

Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resources are available. Meta-learning methods can address this problem by transferring knowledge from related tasks, thus reducing the amount of data and computing resources needed to learn new tasks. We organize the MetaDL competition series, which provides opportunities for research groups all over the world to create and experimentally assess new meta-(deep)learning solutions for real problems. In this paper, authored collaboratively by the competition organizers and the top-ranked participants, we describe the design of the competition, the datasets, the best experimental results, and the top-ranked methods in the NeurIPS 2021 challenge, which attracted 15 active teams that made it to the final phase (by outperforming the baseline) and submitted over 100 code entries during the feedback phase. The solutions of the top participants have been open-sourced. The lessons learned include that learning good representations is essential for effective transfer learning.
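The headline finding, that fine-tuning a pre-trained backbone and fitting a simple classifier on its features can outperform episodic meta-learning, can be illustrated with a minimal sketch. Here a nearest-centroid (prototype) rule is applied to support-set embeddings, standing in for features produced by a fine-tuned backbone; all function names and the toy data are illustrative, not taken from the challenge code.

```python
import numpy as np

def nearest_centroid_few_shot(support_x, support_y, query_x):
    """Classify query embeddings by nearest class centroid (prototype).

    support_x: (n_support, d) embeddings from a (fine-tuned) backbone
    support_y: (n_support,) integer class labels
    query_x:   (n_query, d) query embeddings
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of that class's support embeddings.
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Euclidean distance from each query to each prototype.
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way example: well-separated clusters stand in for backbone features.
rng = np.random.default_rng(0)
sx = np.concatenate([rng.normal(0, 0.1, (5, 8)), rng.normal(3, 0.1, (5, 8))])
sy = np.array([0] * 5 + [1] * 5)
qx = np.concatenate([rng.normal(0, 0.1, (3, 8)), rng.normal(3, 0.1, (3, 8))])
pred = nearest_centroid_few_shot(sx, sy, qx)
print(pred)  # well-separated clusters -> [0 0 0 1 1 1]
```

Because the classifier itself has no learned parameters, all of the few-shot capability comes from the quality of the representation, which is consistent with the paper's lesson that learning good representations is essential.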