Latent Generative Replay for Resource-Efficient Continual Learning of Facial Expressions

Samuil Stoychev, Nikhil Churamani, H. Gunes
{"title":"Latent Generative Replay for Resource-Efficient Continual Learning of Facial Expressions","authors":"Samuil Stoychev, Nikhil Churamani, H. Gunes","doi":"10.1109/FG57933.2023.10042642","DOIUrl":null,"url":null,"abstract":"Real-world Facial Expression Recognition (FER) systems require models to constantly learn and adapt with novel data. Traditional Machine Learning (ML) approaches struggle to adapt to such dynamics as models need to be re-trained from scratch with a combination of both old and new data. Replay-based Continual Learning (CL) provides a solution to this problem, either by storing previously seen data samples in memory, sampling and interleaving them with novel data (rehearsal) or by using a generative model to simulate pseudo-samples to replay past knowledge (pseudo-rehearsal). Yet, the high memory footprint of rehearsal and the high computational cost of pseudo-rehearsal limit the real-world application of such methods, especially on resource-constrained devices. To address this, we propose Latent Generative Replay (LGR) for pseudo-rehearsal of low-dimensional latent features to mitigate forgetting in a resource-efficient manner. We adapt popular CL strategies to use LGR instead of generating pseudo-samples, resulting in performance upgrades when evaluated on the CK+, RAF-DB and AffectNet FER benchmarks where LGR significantly reduces the memory and resource consumption of replay-based CL without compromising model performance.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FG57933.2023.10042642","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Real-world Facial Expression Recognition (FER) systems require models to continually learn from and adapt to novel data. Traditional Machine Learning (ML) approaches struggle with such dynamics, as models need to be re-trained from scratch on a combination of both old and new data. Replay-based Continual Learning (CL) provides a solution to this problem, either by storing previously seen data samples in memory and interleaving them with novel data (rehearsal), or by using a generative model to simulate pseudo-samples that replay past knowledge (pseudo-rehearsal). Yet, the high memory footprint of rehearsal and the high computational cost of pseudo-rehearsal limit the real-world application of such methods, especially on resource-constrained devices. To address this, we propose Latent Generative Replay (LGR), which performs pseudo-rehearsal over low-dimensional latent features to mitigate forgetting in a resource-efficient manner. We adapt popular CL strategies to use LGR instead of generating pseudo-samples at the input level. Evaluations on the CK+, RAF-DB and AffectNet FER benchmarks show that LGR significantly reduces the memory and compute consumption of replay-based CL without compromising model performance.
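
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of latent generative replay. It is not the authors' implementation: the backbone is omitted, the generator is a toy VAE over assumed 128-dimensional penultimate-layer features, and all names (`LatentVAE`, `train_task`, `replay_vae`, `replay_head`) and hyper-parameters are illustrative assumptions. The point it illustrates is that the generative model and the replayed tensors live in the low-dimensional latent space rather than in image space, which is where the memory and compute savings come from.

```python
# Minimal, illustrative sketch of latent generative replay -- NOT the authors'
# implementation. Assumptions (not from the paper): a frozen backbone supplies
# 128-d latent features, a tiny VAE models those features, and frozen copies of
# the generator and classifier head from the previous task provide the replay.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128   # size of the backbone's latent features (assumed)
CODE_DIM = 32      # VAE bottleneck size (assumed)
NUM_CLASSES = 7    # basic facial-expression classes

class LatentVAE(nn.Module):
    """Toy VAE over latent features; samples pseudo-latents for replay."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(LATENT_DIM, 2 * CODE_DIM)  # outputs mu and log-variance
        self.dec = nn.Linear(CODE_DIM, LATENT_DIM)

    def forward(self, h):
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    def sample(self, n):
        # Pseudo-rehearsal happens here: latents, not images, are generated.
        return self.dec(torch.randn(n, CODE_DIM))

def vae_loss(recon, h, mu, logvar):
    rec = F.mse_loss(recon, h)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld

head = nn.Linear(LATENT_DIM, NUM_CLASSES)   # classifier head on latent features
vae = LatentVAE()
opt_head = torch.optim.Adam(head.parameters(), lr=1e-3)
opt_vae = torch.optim.Adam(vae.parameters(), lr=1e-3)

def train_task(latents, labels, replay_vae=None, replay_head=None, n_replay=64):
    """One step on a new task's latent features, with optional latent replay."""
    # 1) Supervised loss on the new task's latent features
    #    (assumed already detached from the frozen backbone).
    loss = F.cross_entropy(head(latents), labels)

    # 2) Replay: sample pseudo-latents from the frozen old generator, label them
    #    with the frozen old head, then rehearse them on the current head.
    if replay_vae is not None and replay_head is not None:
        with torch.no_grad():
            fake_h = replay_vae.sample(n_replay)
            fake_y = replay_head(fake_h).argmax(dim=-1)
        loss = loss + F.cross_entropy(head(fake_h), fake_y)

    opt_head.zero_grad()
    loss.backward()
    opt_head.step()

    # 3) Keep the generator up to date so this task can be replayed later.
    recon, mu, logvar = vae(latents.detach())
    g_loss = vae_loss(recon, latents.detach(), mu, logvar)
    opt_vae.zero_grad()
    g_loss.backward()
    opt_vae.step()
```

A dry run on random features, e.g. `train_task(torch.randn(32, LATENT_DIM), torch.randint(0, NUM_CLASSES, (32,)))`, exercises the step without replay; after finishing a task, frozen copies of `vae` and `head` would be passed as `replay_vae` / `replay_head` while training on the next one.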