Continual Unsupervised Generative Modeling

Fei Ye; Adrian G. Bors
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 8, pp. 6256-6273
DOI: 10.1109/TPAMI.2025.3564188
Published: 2025-04-25
Impact Factor: 18.6
URL: https://ieeexplore.ieee.org/document/10977656/
Citations: 0

Abstract

Variational Autoencoders (VAEs) can achieve remarkable results on single tasks, such as learning data representations, image generation, or image-to-image translation. However, VAEs suffer from a loss of information when aiming to continuously learn a sequence of different data domains. This is caused by catastrophic forgetting, which affects all machine learning methods. This paper addresses the problem of catastrophic forgetting by developing a new theoretical framework which derives an upper bound on the negative sample log-likelihood when continuously learning sequences of tasks. These theoretical derivations provide new insights into the forgetting behavior of learning models, showing that optimal performance is achieved when a dynamic mixture expansion model adds new components whenever learning new tasks. In our approach, we optimize the model size by introducing the Dynamic Expansion Graph Model (DEGM), which dynamically builds a graph structure promoting positive knowledge transfer when learning new tasks. In addition, we propose a Dynamic Expansion Graph Adaptive Mechanism (DEGAM) that generates adaptive weights to regulate the graph structure, further improving the effectiveness of positive knowledge transfer. Experimental results show that the proposed methodology performs better than other baselines in continual learning.
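
The abstract's core mechanism, dynamic mixture expansion, can be illustrated with a short sketch. Recall that for a single VAE the negative ELBO (reconstruction error plus KL(q(z|x) || p(z))) upper-bounds the negative sample log-likelihood; the paper extends such bounds to sequences of tasks. The sketch below shows only the generic expand-and-freeze pattern under stated assumptions: it is not the authors' implementation, it omits DEGM's graph structure and DEGAM's adaptive weights, and every name (VAEComponent, DynamicMixture, expand, negative_elbo, task_stream) is hypothetical.

```python
# A minimal sketch (assumptions, not the authors' DEGM/DEGAM code) of the
# dynamic mixture expansion idea from the abstract: one VAE component per
# task, with earlier components frozen so new-task training cannot overwrite
# them (the catastrophic-forgetting failure mode). All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEComponent(nn.Module):
    """A small VAE expert intended for a single task/domain."""
    def __init__(self, x_dim: int, z_dim: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar):
    """Negative ELBO: an upper bound on -log p(x) for one component."""
    rec = F.mse_loss(x_hat, x, reduction="sum")                   # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return rec + kl

class DynamicMixture(nn.Module):
    """Grows by one component per task; past components are frozen."""
    def __init__(self, x_dim: int, z_dim: int):
        super().__init__()
        self.components = nn.ModuleList()
        self.x_dim, self.z_dim = x_dim, z_dim

    def expand(self) -> VAEComponent:
        for c in self.components:                 # freeze all earlier experts
            for p in c.parameters():
                p.requires_grad_(False)
        new = VAEComponent(self.x_dim, self.z_dim)
        self.components.append(new)
        return new

# Usage sketch over a hypothetical stream of task loaders:
# model = DynamicMixture(x_dim=784, z_dim=32)
# for task_loader in task_stream:
#     expert = model.expand()                     # new component for the new task
#     opt = torch.optim.Adam(expert.parameters(), lr=1e-3)
#     for x in task_loader:
#         x_hat, mu, logvar = expert(x)
#         loss = negative_elbo(x, x_hat, mu, logvar)
#         opt.zero_grad(); loss.backward(); opt.step()
```

Freezing past components keeps earlier-task performance fixed at the cost of linear parameter growth; controlling that model size is precisely what the abstract credits DEGM's graph structure with addressing.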