MetaGB: A Gradient Boosting Framework for Efficient Task Adaptive Meta Learning

Manqing Dong, Lina Yao, Xianzhi Wang, Xiwei Xu, Liming Zhu
{"title":"MetaGB:一个用于高效任务自适应元学习的梯度增强框架","authors":"Manqing Dong, Lina Yao, Xianzhi Wang, Xiwei Xu, Liming Zhu","doi":"10.1109/ICDM51629.2021.00020","DOIUrl":null,"url":null,"abstract":"Deep learning frameworks generally require sufficient training data to generalize well while fail to adapt on small or few-shot datasets. Meta-learning offers an effective means of tackling few-shot scenarios and has drawn increasing attention in recent years. Meta-optimization aims to learn a shared set of parameters across tasks for meta-learning while facing challenges in determining whether an initialization condition can be generalized to tasks with diverse distributions. In this regard, we propose a meta-gradient boosting framework that can fit diverse distributions based on a base learner (which learns shared information across tasks) and a series of gradient-boosted modules (which capture task-specific information). We evaluate the model on several few-shot learning benchmarks and demonstrate the effectiveness of our model in modulating task-specific meta-learned priors and handling diverse distributions.","PeriodicalId":320970,"journal":{"name":"2021 IEEE International Conference on Data Mining (ICDM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MetaGB: A Gradient Boosting Framework for Efficient Task Adaptive Meta Learning\",\"authors\":\"Manqing Dong, Lina Yao, Xianzhi Wang, Xiwei Xu, Liming Zhu\",\"doi\":\"10.1109/ICDM51629.2021.00020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning frameworks generally require sufficient training data to generalize well while fail to adapt on small or few-shot datasets. Meta-learning offers an effective means of tackling few-shot scenarios and has drawn increasing attention in recent years. Meta-optimization aims to learn a shared set of parameters across tasks for meta-learning while facing challenges in determining whether an initialization condition can be generalized to tasks with diverse distributions. In this regard, we propose a meta-gradient boosting framework that can fit diverse distributions based on a base learner (which learns shared information across tasks) and a series of gradient-boosted modules (which capture task-specific information). 
We evaluate the model on several few-shot learning benchmarks and demonstrate the effectiveness of our model in modulating task-specific meta-learned priors and handling diverse distributions.\",\"PeriodicalId\":320970,\"journal\":{\"name\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Data Mining (ICDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM51629.2021.00020\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Data Mining (ICDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM51629.2021.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep learning frameworks generally require sufficient training data to generalize well, but they fail to adapt to small or few-shot datasets. Meta-learning offers an effective means of tackling few-shot scenarios and has drawn increasing attention in recent years. Meta-optimization aims to learn a set of parameters shared across tasks, yet it faces the challenge of determining whether a single initialization can generalize to tasks with diverse distributions. To this end, we propose a meta-gradient boosting framework that fits diverse distributions using a base learner (which learns information shared across tasks) and a series of gradient-boosted modules (which capture task-specific information). We evaluate the model on several few-shot learning benchmarks and demonstrate its effectiveness in modulating task-specific meta-learned priors and handling diverse distributions.
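
The abstract outlines the core mechanism: a meta-learned base model supplies shared knowledge, and a sequence of gradient-boosted modules is fitted per task to correct the base model's residual error on that task's support set. The paper's implementation is not reproduced on this page; below is a minimal, self-contained sketch of that boosting pattern on a toy few-shot regression task. All names (`RidgeModule`, `boost_on_task`, `boosted_predict`) and the closed-form ridge weak learner are illustrative assumptions, not the authors' API — MetaGB itself boosts learned neural modules inside a meta-optimization loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(slope, intercept, n=20):
    """Toy regression task: y = slope * x + intercept + noise."""
    x = rng.uniform(-2, 2, size=(n, 1))
    y = slope * x + intercept + 0.05 * rng.normal(size=(n, 1))
    return x, y

class RidgeModule:
    """A weak task-specific module: ridge regression on features [x, 1]."""
    def __init__(self, lam=1e-2):
        self.lam = lam
        self.w = None

    def fit(self, x, residual):
        phi = np.hstack([x, np.ones_like(x)])  # simple feature map
        A = phi.T @ phi + self.lam * np.eye(phi.shape[1])
        self.w = np.linalg.solve(A, phi.T @ residual)
        return self

    def predict(self, x):
        phi = np.hstack([x, np.ones_like(x)])
        return phi @ self.w

def boost_on_task(base_predict, x_support, y_support, n_modules=3, lr=0.5):
    """Fit a chain of task-specific modules to the base learner's residuals."""
    modules = []
    pred = base_predict(x_support)
    for _ in range(n_modules):
        # Residual = negative gradient of the squared loss w.r.t. predictions.
        residual = y_support - pred
        m = RidgeModule().fit(x_support, residual)
        modules.append(m)
        pred = pred + lr * m.predict(x_support)
    return modules

def boosted_predict(base_predict, modules, x, lr=0.5):
    """Combine the shared base learner with the task-specific boosted chain."""
    pred = base_predict(x)
    for m in modules:
        pred = pred + lr * m.predict(x)
    return pred

# Stand-in for a meta-learned base: the average behaviour across tasks.
base_predict = lambda x: 1.0 * x

# A new few-shot task whose distribution differs from the base learner's.
x_s, y_s = make_task(slope=3.0, intercept=-1.0, n=10)  # support set
x_q, y_q = make_task(slope=3.0, intercept=-1.0, n=50)  # query set

modules = boost_on_task(base_predict, x_s, y_s)
mse_base = float(np.mean((base_predict(x_q) - y_q) ** 2))
mse_boost = float(np.mean((boosted_predict(base_predict, modules, x_q) - y_q) ** 2))
print(f"query MSE, base learner only:       {mse_base:.4f}")
print(f"query MSE, base + boosted modules:  {mse_boost:.4f}")
```

On this toy task the boosted modules recover the task-specific slope and intercept that the shared base alone cannot, mirroring the abstract's claim that gradient-boosted modules capture task-specific information on top of a learner shared across tasks.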