{"title":"生成网络纠错促进增量学习","authors":"Justin Leo;Jugal Kalita","doi":"10.1109/TETCI.2025.3543370","DOIUrl":null,"url":null,"abstract":"Neural networks are often designed for closed environments that are not open to acquisition of new knowledge. Incremental learning techniques allow neural networks to adapt to changing environments, but these methods often encounter challenges causing models to suffer from low classification accuracies. The main problem faced is catastrophic forgetting and this problem is more harmful when using incremental strategies compared to regular tasks. Some known causes of catastrophic forgetting are weight drift and inter-class confusion; these problems cause the network to erroneously fuse trained classes or to forget a learned class. This paper addresses these issues by focusing on data pre-processing and using network feedback corrections for incremental learning. Data pre-processing is important as the quality of the training data used affects the network's ability to maintain continuous class discrimination. This approach uses a generative model to modify the data input for the incremental model. Network feedback corrections would allow the network to adapt to newly found classes and scale based on network need. With combination of generative data pre-processing and network feedback, this paper proposes an approach for efficient long-term incremental learning. The results obtained are compared with similar state-of-the-art algorithms and show high incremental accuracy levels.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2334-2343"},"PeriodicalIF":5.3000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generative Network Correction to Promote Incremental Learning\",\"authors\":\"Justin Leo;Jugal Kalita\",\"doi\":\"10.1109/TETCI.2025.3543370\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks are often designed for closed environments that are not open to acquisition of new knowledge. Incremental learning techniques allow neural networks to adapt to changing environments, but these methods often encounter challenges causing models to suffer from low classification accuracies. The main problem faced is catastrophic forgetting and this problem is more harmful when using incremental strategies compared to regular tasks. Some known causes of catastrophic forgetting are weight drift and inter-class confusion; these problems cause the network to erroneously fuse trained classes or to forget a learned class. This paper addresses these issues by focusing on data pre-processing and using network feedback corrections for incremental learning. Data pre-processing is important as the quality of the training data used affects the network's ability to maintain continuous class discrimination. This approach uses a generative model to modify the data input for the incremental model. Network feedback corrections would allow the network to adapt to newly found classes and scale based on network need. With combination of generative data pre-processing and network feedback, this paper proposes an approach for efficient long-term incremental learning. 
The results obtained are compared with similar state-of-the-art algorithms and show high incremental accuracy levels.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"9 3\",\"pages\":\"2334-2343\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-03-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10910221/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10910221/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Neural networks are often designed for closed environments and are not open to acquiring new knowledge. Incremental learning techniques allow neural networks to adapt to changing environments, but these methods often suffer from low classification accuracy. The main problem is catastrophic forgetting, which is more harmful in incremental settings than in regular training tasks. Known causes of catastrophic forgetting include weight drift and inter-class confusion; these problems cause the network to erroneously fuse trained classes or to forget a learned class entirely. This paper addresses these issues by focusing on data pre-processing and on network feedback corrections for incremental learning. Data pre-processing is important because the quality of the training data affects the network's ability to maintain class discrimination over time. The approach uses a generative model to modify the data input to the incremental model, while network feedback corrections allow the network to adapt to newly encountered classes and scale according to network need. By combining generative data pre-processing with network feedback, this paper proposes an approach for efficient long-term incremental learning. The results are compared with similar state-of-the-art algorithms and show high incremental accuracy.
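The abstract does not include code, but the core idea it describes, using a generative model to re-synthesize data for previously learned classes and mixing it with incoming data before each incremental step, plus a feedback signal that flags inter-class confusion, is close in spirit to generative replay. The sketch below is a minimal, hypothetical illustration of such a training loop in PyTorch; the `Classifier` and `Generator` modules, the `incremental_step` and `feedback_correction` functions, and parameters such as `replay_per_class` and `confusion_threshold` are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components: a simple classifier and a generative model that
# can replay synthetic samples for classes learned in earlier increments.
class Classifier(nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_classes))

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Stand-in for a conditional generative model (e.g. a VAE or GAN
    decoder) that maps a latent code plus a class label to an input sample."""
    def __init__(self, latent_dim, num_classes, out_dim):
        super().__init__()
        self.latent_dim = latent_dim
        self.emb = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def sample(self, labels):
        z = torch.randn(labels.size(0), self.latent_dim)
        return self.net(torch.cat([z, self.emb(labels)], dim=1))

def incremental_step(clf, gen, new_x, new_y, old_classes, optimizer,
                     replay_per_class=64):
    """One incremental update: mix generated (replayed) data for old classes
    with the incoming batch for the new class(es), then train the classifier
    on the combined set so old decision boundaries are preserved."""
    xs, ys = [new_x], [new_y]
    for c in old_classes:
        labels = torch.full((replay_per_class,), c, dtype=torch.long)
        with torch.no_grad():
            xs.append(gen.sample(labels))
        ys.append(labels)
    x, y = torch.cat(xs), torch.cat(ys)

    optimizer.zero_grad()
    loss = F.cross_entropy(clf(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def feedback_correction(clf, val_x, val_y, confusion_threshold=0.2):
    """Hypothetical feedback signal: flag class pairs the network confuses
    often, so later increments can replay extra data for those classes."""
    with torch.no_grad():
        pred = clf(val_x).argmax(dim=1)
    confused = {}
    for true_c in val_y.unique().tolist():
        mask = val_y == true_c
        wrong = pred[mask][pred[mask] != true_c]
        for c, count in zip(*wrong.unique(return_counts=True)):
            rate = count.item() / mask.sum().item()
            if rate > confusion_threshold:
                confused[(true_c, c.item())] = rate
    return confused
```

In this sketch the feedback step only reports confused class pairs; how the paper's method actually corrects the network (for example, by adjusting replay volume or retraining affected outputs) is described in the full article rather than the abstract.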
Journal introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.