ICFD: An Incremental Learning Method Based on Data Feature Distribution
Yunzhe Zhu, Yusong Tan, Xiaoling Li, Qingbo Wu, Xueqin Ning
Scalable Computing-Practice and Experience, December 2022. DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00103
Neural network models have achieved great success in numerous fields in recent years, including image segmentation, object identification, and natural language processing (NLP). Incremental learning in these fields focuses on training models on a continuous data stream: as time passes, new data becomes available while old data may become unavailable owing to resource constraints such as storage. As a result, when new data keeps arriving, the performance of a neural network model on old data samples often degrades significantly, a phenomenon known as catastrophic forgetting. Many strategies have been proposed to mitigate catastrophic forgetting, based on parameter regularization, data replay, and parameter isolation. This paper proposes ICFD, an incremental learning method based on data feature distribution. Building on the observation that the feature vectors of a class approximately follow a multivariate Gaussian distribution in feature space, the method fits a Gaussian distribution to the features of each old class and samples synthetic features from it to train the neural network. This avoids storing a large number of original samples, while the generated old-class features retain more information about those samples. In its concrete implementation, the method combines data replay with parameter regularization. Experimental results on CIFAR-100 show that ICFD improves average incremental accuracy by 10.4% with an incremental step of 5, and by 8.1% with an incremental step of 10.
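To make the idea concrete, below is a minimal sketch of Gaussian feature replay in Python/NumPy. It is an illustration under stated assumptions, not the authors' implementation: the function names (`fit_class_gaussian`, `sample_replay_features`) are hypothetical, the random stand-in features take the place of real CNN features, and the paper's exact training procedure (including its regularization term) is not reproduced.

```python
# Sketch of Gaussian feature replay (illustrative, not the paper's code).
# Assumption: features of each old class follow a multivariate Gaussian in
# feature space, so we store only (mean, covariance) per class and sample
# synthetic features instead of storing raw exemplars.
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussian(features):
    """Estimate mean and covariance of one class's feature vectors.

    features: (n_samples, feat_dim) array from a frozen feature extractor.
    """
    mean = features.mean(axis=0)
    # Shrink the covariance slightly toward the diagonal for numerical
    # stability when n_samples is small relative to feat_dim.
    cov = np.cov(features, rowvar=False) + 1e-4 * np.eye(features.shape[1])
    return mean, cov

def sample_replay_features(mean, cov, n):
    """Draw n synthetic feature vectors for an old class."""
    return rng.multivariate_normal(mean, cov, size=n)

# --- toy usage with stand-in features (replace with real CNN features) ---
feat_dim = 64
old_class_feats = rng.normal(loc=1.0, scale=0.5, size=(200, feat_dim))

mean, cov = fit_class_gaussian(old_class_feats)      # stored per old class
replayed = sample_replay_features(mean, cov, n=200)  # drawn in later tasks

# The replayed features would be mixed with real features of new classes
# to train the classifier head, mitigating catastrophic forgetting.
print(replayed.shape)  # (200, 64)
```

Storing one mean vector and one covariance matrix per class is far cheaper than storing raw exemplars when the feature dimension is modest, which is the storage advantage the abstract refers to.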
About the journal:
The area of scalable computing has matured and reached a point where new issues and trends require a professional forum. SCPE will provide this avenue by publishing original refereed papers that address the present as well as the future of parallel and distributed computing. The journal will focus on algorithm development, implementation and execution on real-world parallel architectures, and application of parallel and distributed computing to the solution of real-life problems.