ICFD: An Incremental Learning Method Based on Data Feature Distribution
Yunzhe Zhu, Yusong Tan, Xiaoling Li, Qingbo Wu, Xueqin Ning
DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00103
Abstract
Neural network models have achieved great success in numerous disciplines in recent years, including image segmentation, object detection, and natural language processing (NLP). Incremental learning in these fields focuses on training models on a continuous data stream: as time goes by, new data becomes available, while old data may become unavailable owing to resource constraints such as storage. As a result, when new data keeps arriving, the performance of a neural network model on old data samples often drops significantly, a phenomenon known as catastrophic forgetting. Many strategies have been proposed to mitigate catastrophic forgetting in neural network models, based on parameter regularization, data replay, and parameter isolation. This paper proposes an incremental learning method based on data feature distribution (ICFD). Building on the observation that feature vectors approximately follow a multi-dimensional Gaussian distribution in feature space, the method fits a Gaussian distribution to the features of old data and samples from it to generate old-class features for training the neural network model. This avoids storing a large number of original samples, and the generated old-class features carry more sample information. In its concrete implementation, the method combines data replay with parameter regularization. Experimental results on CIFAR-100 show that ICFD improves the average incremental accuracy by 10.4% when the incremental step is 5, and by 8.1% when the incremental step is 10.
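To make the central idea concrete, the following is a minimal sketch of Gaussian-based feature replay as described above. It assumes NumPy, a 512-dimensional feature space, and illustrative helper names (fit_class_gaussian, sample_replay_features); it is not the authors' implementation, only an example of storing per-class feature statistics instead of raw samples.

```python
# Sketch: fit a multivariate Gaussian to each old class's feature vectors,
# then sample synthetic "replay" features when training on new classes.
import numpy as np

def fit_class_gaussian(features: np.ndarray):
    """Estimate mean and covariance of one class's feature vectors.

    features: array of shape (n_samples, feature_dim), e.g. penultimate-layer
    outputs of the backbone for all training images of that class.
    """
    mean = features.mean(axis=0)
    # Small diagonal term keeps the covariance positive definite for sampling.
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mean, cov

def sample_replay_features(mean: np.ndarray, cov: np.ndarray, n: int):
    """Draw n pseudo-features for an old class from its stored Gaussian."""
    return np.random.multivariate_normal(mean, cov, size=n)

# Usage sketch: after each incremental step, store only (mean, cov) per old
# class; when learning new classes, mix sampled old-class features with the
# new data's features to train the classifier head.
old_features = np.random.randn(500, 512)            # placeholder backbone features
mu, sigma = fit_class_gaussian(old_features)
replayed = sample_replay_features(mu, sigma, n=64)   # shape (64, 512)
```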