{"title":"Continual Unsupervised Generative Modeling","authors":"Fei Ye;Adrian G. Bors","doi":"10.1109/TPAMI.2025.3564188","DOIUrl":null,"url":null,"abstract":"Variational Autoencoders (VAEs), can achieve remarkable results in single tasks, by learning data representations, image generation, or image-to-image translation among others. However, VAEs suffer from loss of information when aiming to continuously learn a sequence of different data domains. This is caused by the catastrophic forgetting, which affects all machine learning methods. This paper addresses the problem of catastrophic forgetting by developing a new theoretical framework which derives an upper bound to the negative sample log-likelihood when continuously learning sequences of tasks. These theoretical derivations provide new insights into the forgetting behavior of learning models, showing that their optimal performance is achieved when a dynamic mixture expansion model adds new components whenever learning new tasks. In our approach we optimize the model size by introducing the Dynamic Expansion Graph Model (DEGM) that dynamically builds a graph structure promoting the positive knowledge transfer when learning new tasks. In addition, we propose a Dynamic Expansion Graph Adaptive Mechanism (DEGAM) that generates adaptive weights to regulate the graph structure, further improving the positive knowledge transfer effectiveness. Experimental results show that the proposed methodology performs better than other baselines in continual learning.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 8","pages":"6256-6273"},"PeriodicalIF":18.6000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10977656/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Variational Autoencoders (VAEs) can achieve remarkable results on single tasks, such as learning data representations, image generation, or image-to-image translation. However, VAEs lose information when aiming to continually learn a sequence of different data domains. This loss is caused by catastrophic forgetting, which affects all machine learning methods. This paper addresses catastrophic forgetting by developing a new theoretical framework that derives an upper bound on the negative sample log-likelihood when continually learning a sequence of tasks. These theoretical derivations provide new insights into the forgetting behavior of learning models, showing that optimal performance is achieved when a dynamic mixture expansion model adds new components whenever a new task is learned. In our approach, we optimize the model size by introducing the Dynamic Expansion Graph Model (DEGM), which dynamically builds a graph structure that promotes positive knowledge transfer when learning new tasks. In addition, we propose the Dynamic Expansion Graph Adaptive Mechanism (DEGAM), which generates adaptive weights to regulate the graph structure, further improving the effectiveness of positive knowledge transfer. Experimental results show that the proposed methodology outperforms other continual learning baselines.
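To make the dynamic mixture expansion idea concrete, the sketch below shows a minimal mixture of VAE components that grows by one component whenever a new task arrives, freezing previously trained components so earlier data domains are not overwritten. This is an illustrative assumption only: the `VAEComponent` architecture, the freezing policy, and the `train_task` loop are hypothetical and do not reproduce the paper's DEGM graph structure, DEGAM adaptive weights, or its theoretical bound; it merely exercises the standard fact that the negative ELBO upper-bounds the negative sample log-likelihood for each component.

```python
# Hypothetical sketch (not the authors' DEGM): a mixture of small VAEs that
# expands with one new component per task; earlier components are frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAEComponent(nn.Module):
    """One mixture component: a small fully-connected VAE."""

    def __init__(self, x_dim: int = 784, z_dim: int = 32, h_dim: int = 256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        """Negative ELBO (an upper bound on the negative log-likelihood)."""
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        recon = self.dec(z)
        # Assumes inputs scaled to [0, 1] (e.g. binarised images).
        rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum") / x.size(0)
        kld = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        return rec + kld


class DynamicMixture(nn.Module):
    """Adds a fresh component for every new task; older ones are frozen."""

    def __init__(self):
        super().__init__()
        self.components = nn.ModuleList()

    def expand(self) -> VAEComponent:
        for c in self.components:  # freeze previously trained components
            for p in c.parameters():
                p.requires_grad_(False)
        new = VAEComponent()
        self.components.append(new)
        return new


def train_task(mixture: DynamicMixture, loader, epochs: int = 1):
    """Train only the newly added component on the current task's data loader."""
    comp = mixture.expand()
    opt = torch.optim.Adam(comp.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in loader:  # assumes (input, label) batches, labels unused
            opt.zero_grad()
            loss = comp.loss(x.view(x.size(0), -1))
            loss.backward()
            opt.step()
```

In the paper's approach, the expansion is additionally governed by a graph structure (DEGM) with adaptive edge weights (DEGAM) to promote positive knowledge transfer between components; the sketch above omits both and simply isolates components per task.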