Exploration and exploitation in continual learning

IF 6.0 | CAS Tier 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Kiseong Hong, Hyundong Jin, Sungho Suh, Eunwoo Kim
{"title":"不断学习中的探索和利用","authors":"Kiseong Hong ,&nbsp;Hyundong Jin ,&nbsp;Sungho Suh ,&nbsp;Eunwoo Kim","doi":"10.1016/j.neunet.2025.107444","DOIUrl":null,"url":null,"abstract":"<div><div>Continual learning (CL) has received a surge of interest, particularly in parameter isolation approaches, aiming to prevent catastrophic forgetting by assigning a disjoint parameter set to each task. Despite their effectiveness, existing approaches often neglect the task-specific differences, depending on predetermined allocation ratios of parameters. This can lead to suboptimal performance as it disregards the unique requirements of individual task traits. In this paper, we propose a novel <em>Exploration–Exploitation</em> approach to address this issue. Our goal is to adaptively distribute resources between acquiring new information (Exploration) and retaining previously learned knowledge (Exploitation) as new tasks emerge. This allows a continual learner to adaptively allocate parameters for every consecutive task by enabling them to compete for resources using exploration and exploitation. To achieve this, we introduce an allocation learner that learns the intricate interplay between exploration and exploitation across all layers of the continual learner. We demonstrate the proposed method under popular image classification benchmarks for diverse CL scenarios, including domain-shift task-incremental learning. Experimental results show that the proposed method outperforms other competitive continual learning approaches with an average margin of 5.3% across all scenarios.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107444"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploration and exploitation in continual learning\",\"authors\":\"Kiseong Hong ,&nbsp;Hyundong Jin ,&nbsp;Sungho Suh ,&nbsp;Eunwoo Kim\",\"doi\":\"10.1016/j.neunet.2025.107444\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Continual learning (CL) has received a surge of interest, particularly in parameter isolation approaches, aiming to prevent catastrophic forgetting by assigning a disjoint parameter set to each task. Despite their effectiveness, existing approaches often neglect the task-specific differences, depending on predetermined allocation ratios of parameters. This can lead to suboptimal performance as it disregards the unique requirements of individual task traits. In this paper, we propose a novel <em>Exploration–Exploitation</em> approach to address this issue. Our goal is to adaptively distribute resources between acquiring new information (Exploration) and retaining previously learned knowledge (Exploitation) as new tasks emerge. This allows a continual learner to adaptively allocate parameters for every consecutive task by enabling them to compete for resources using exploration and exploitation. To achieve this, we introduce an allocation learner that learns the intricate interplay between exploration and exploitation across all layers of the continual learner. We demonstrate the proposed method under popular image classification benchmarks for diverse CL scenarios, including domain-shift task-incremental learning. 
Experimental results show that the proposed method outperforms other competitive continual learning approaches with an average margin of 5.3% across all scenarios.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107444\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025003235\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003235","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Continual learning (CL) has received a surge of interest, particularly in parameter-isolation approaches, which aim to prevent catastrophic forgetting by assigning a disjoint parameter set to each task. Despite their effectiveness, existing approaches often neglect task-specific differences, relying instead on predetermined parameter-allocation ratios. This can lead to suboptimal performance, as it disregards the unique requirements of individual tasks. In this paper, we propose a novel Exploration–Exploitation approach to address this issue. Our goal is to adaptively distribute resources between acquiring new information (Exploration) and retaining previously learned knowledge (Exploitation) as new tasks emerge. This allows a continual learner to adaptively allocate parameters for every consecutive task by letting tasks compete for resources through exploration and exploitation. To achieve this, we introduce an allocation learner that learns the intricate interplay between exploration and exploitation across all layers of the continual learner. We demonstrate the proposed method on popular image classification benchmarks under diverse CL scenarios, including domain-shift task-incremental learning. Experimental results show that the proposed method outperforms other competitive continual learning approaches by an average margin of 5.3% across all scenarios.
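
The mechanism outlined in the abstract (per-layer parameter partitioning driven by a learned exploration ratio) can be illustrated with a short sketch. The following Python/PyTorch toy is not the authors' implementation: the class names MaskedLinear and AllocationLearner, the random claiming of free weights, and the sigmoid gate are assumptions introduced here purely for illustration.

# Minimal, illustrative sketch of exploration-exploitation parameter
# allocation in continual learning. NOT the paper's code: the class
# names, random claiming scheme, and sigmoid gate are all assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are partitioned across tasks."""
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        # owner[i, j] == 0: free; owner[i, j] == t: claimed by task t.
        self.register_buffer("owner", torch.zeros_like(self.weight))

    def allocate(self, task_id: int, ratio: float):
        """Exploration: claim `ratio` of the still-free weights for task_id."""
        free_idx = torch.nonzero(self.owner.view(-1) == 0).view(-1)
        n_claim = int(ratio * free_idx.numel())
        chosen = free_idx[torch.randperm(free_idx.numel())[:n_claim]]
        self.owner.view(-1)[chosen] = float(task_id)

    def forward(self, x, task_id: int):
        # Exploitation: reuse weights owned by this task or any earlier one.
        mask = ((self.owner > 0) & (self.owner <= task_id)).float()
        return F.linear(x, self.weight * mask, self.bias)

class AllocationLearner(nn.Module):
    """Tiny learner producing one exploration ratio in (0, 1) per layer."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.logits)

# Usage sketch: before training task t, claim parameters layer by layer.
layers = [MaskedLinear(784, 256), MaskedLinear(256, 10)]
alloc = AllocationLearner(num_layers=len(layers))
ratios = alloc()  # in the paper these ratios are learned, not fixed
for layer, r in zip(layers, ratios):
    layer.allocate(task_id=1, ratio=r.item())
# During training of task t, gradients of weights with owner != t would
# be zeroed so earlier tasks' parameters stay frozen (omitted here).

In this toy, exploration corresponds to claiming a learned fraction of the remaining free weights for the new task, while exploitation corresponds to reusing (but not updating) the weights already owned by earlier tasks.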
Source journal

Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Annual publications: 425
Review time: 67 days

Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.