{"title":"红色警报:持续学习中的可控后门攻击","authors":"Rui Gao, Weiwei Liu","doi":"10.1016/j.neunet.2025.107479","DOIUrl":null,"url":null,"abstract":"<div><div>Continual learning (CL) studies the problem of learning a single model from a sequence of disjoint tasks. The main challenge is to learn without catastrophic forgetting, a scenario in which the model’s performance on previous tasks degrades significantly as new tasks are added. However, few works focus on the security challenge in the CL setting. In this paper, we focus on the backdoor attack in the CL setting. Specifically, we provide the threat model and explore what attackers in a CL setting will face. Based on these findings, we propose a controllable backdoor attack mechanism in continual learning (CBACL). Experimental results on the Split Cifar and Tiny Imagenet datasets confirm the advantages of our proposed mechanism.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107479"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Red alarm: Controllable backdoor attack in continual learning\",\"authors\":\"Rui Gao, Weiwei Liu\",\"doi\":\"10.1016/j.neunet.2025.107479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Continual learning (CL) studies the problem of learning a single model from a sequence of disjoint tasks. The main challenge is to learn without catastrophic forgetting, a scenario in which the model’s performance on previous tasks degrades significantly as new tasks are added. However, few works focus on the security challenge in the CL setting. In this paper, we focus on the backdoor attack in the CL setting. Specifically, we provide the threat model and explore what attackers in a CL setting will face. Based on these findings, we propose a controllable backdoor attack mechanism in continual learning (CBACL). Experimental results on the Split Cifar and Tiny Imagenet datasets confirm the advantages of our proposed mechanism.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107479\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025003582\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003582","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Red alarm: Controllable backdoor attack in continual learning
Continual learning (CL) studies the problem of learning a single model from a sequence of disjoint tasks. The main challenge is to learn without catastrophic forgetting, a scenario in which the model's performance on previous tasks degrades significantly as new tasks are added. However, few works address the security challenges that arise in the CL setting. In this paper, we focus on backdoor attacks in the CL setting. Specifically, we present the threat model and explore what attackers face in a CL setting. Based on these findings, we propose a controllable backdoor attack mechanism for continual learning (CBACL). Experimental results on the Split CIFAR and Tiny ImageNet datasets confirm the advantages of the proposed mechanism.
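To make the setting concrete, the following is a minimal sketch of how a backdoor could be injected during task-sequential training: one task's data is partially poisoned with a patch trigger and relabeled to an attacker-chosen class, while the model is trained on tasks one after another. This is an illustrative assumption only, not the CBACL mechanism from the paper; the toy data, model, trigger pattern, and poisoning rate are placeholders chosen for brevity.

# Illustrative sketch: sequential-task training with a patch-trigger backdoor
# injected into one task. NOT the paper's CBACL method; all components are
# toy placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_task(num_classes=10, n=512):
    # Random stand-in images (3x32x32) and labels for one "task".
    x = torch.rand(n, 3, 32, 32)
    y = torch.randint(0, num_classes, (n,))
    return x, y

def poison(x, y, target_class=0, rate=0.1):
    # Stamp a white 3x3 patch into a fraction of samples and relabel
    # them to the attacker's target class (hypothetical trigger design).
    x, y = x.clone(), y.clone()
    idx = torch.randperm(len(x))[: int(rate * len(x))]
    x[idx, :, :3, :3] = 1.0
    y[idx] = target_class
    return x, y

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                      nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for task_id in range(3):                  # sequence of disjoint tasks
    x, y = make_task()
    if task_id == 1:                      # attacker poisons a single task
        x, y = poison(x, y)
    loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)
    for xb, yb in loader:                 # standard supervised update
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

In a CL pipeline like this, the attacker's difficulty is that later tasks may overwrite the poisoned behavior (the backdoor itself can be "forgotten"), which is the kind of obstacle the abstract alludes to.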
Journal overview:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.