Task augmentation via channel mixture for few-task meta-learning

Impact Factor 6.0 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Jiangdong Fan , Yuekeng Li , Jiayi Bi , Hui Xu , Jie Shao
Neural Networks, Volume 190, Article 107609. DOI: 10.1016/j.neunet.2025.107609. Published 2025-05-27.
Citations: 0

Abstract

Meta-learning is a promising approach for rapidly adapting to new tasks with minimal data by leveraging knowledge from previous tasks. However, meta-learning typically requires a large number of meta-training tasks. Existing methods often generate new tasks by interpolating fine-grained feature points, and such interpolation can compromise the continuity and integrity of the feature representations in the generated tasks. To address this problem, we propose task-level data augmentation to generate additional new tasks. Specifically, we introduce a novel task augmentation method called Task Augmentation via Channel Mixture (TACM). TACM generates new tasks by mixing channels from different tasks. This channel-level mixture preserves the continuity and integrity of feature information in channels during the mixture process, thereby enhancing the generalization ability of the model. Experimental results demonstrate that TACM outperforms other state-of-the-art methods across multiple datasets. Code is available at https://github.com/F-GOD6/TACM.
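The abstract describes TACM as mixing whole channels from different tasks' feature maps rather than interpolating individual feature points. The paper's exact procedure is in the linked repository; as a rough illustrative sketch only, one way to mix channels wholesale is a random binary channel mask (the function name, `mix_ratio` parameter, and masking rule below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def channel_mixture(feat_a, feat_b, mix_ratio=0.5, rng=None):
    """Mix whole channels from two tasks' feature maps.

    feat_a, feat_b: arrays of shape (C, H, W) drawn from two different tasks.
    Roughly a mix_ratio fraction of channels is taken from feat_b and the
    rest from feat_a, so each channel stays intact (no point-wise blending).
    """
    rng = np.random.default_rng(rng)
    num_channels = feat_a.shape[0]
    take_b = rng.random(num_channels) < mix_ratio  # boolean mask per channel
    # Broadcast the (C,) mask over spatial dims to select whole channels.
    mixed = np.where(take_b[:, None, None], feat_b, feat_a)
    return mixed, take_b

# Toy example: 8-channel feature maps from two tasks.
a = np.zeros((8, 4, 4))  # stand-in features for task A
b = np.ones((8, 4, 4))   # stand-in features for task B
mixed, mask = channel_mixture(a, b, mix_ratio=0.5, rng=0)
```

Because entire channels are copied rather than interpolated, each channel of `mixed` is identical to the corresponding channel of one source task, which is the continuity property the abstract emphasizes.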
Source journal: Neural Networks (Engineering/Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically-inspired artificial intelligence.