{"title":"基于模型综合和背景数据引入的分类模型避免灾难性遗忘方法","authors":"Hirayama Akari, Kimura Masaomi","doi":"10.23919/APSIPAASC55919.2022.9980154","DOIUrl":null,"url":null,"abstract":"Animals including humans, continuously acquire knowledge and skills throughout their lives. However, many machine learning models cannot learn new tasks without forgetting past knowledge. In neural networks, it is common to use one neural network for each training task, and successive training will reduce the accuracy of the previous task. This problem is called catastrophic forgetting, and research on continual learning is being conducted to solve it. In this paper, we proposed a method to reducing catastrophic forgetting, where new tasks are trained without retaining previously trained data. Our method assumes that tasks are classification. Our method adds random data to the training data in order to combine models trained on different tasks to avoid exceed generalization in the domain where train data do not exist combines models separately trained for each tasks. In the evaluation experiments, we confirmed that our method reduced forgetting for the original two-dimensional dataset and MNIST dataset.","PeriodicalId":382967,"journal":{"name":"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Catastrophic forgetting avoidance method for a Classification Model by Model Synthesis and Introduction of Background Data\",\"authors\":\"Hirayama Akari, Kimura Masaomi\",\"doi\":\"10.23919/APSIPAASC55919.2022.9980154\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Animals including humans, continuously acquire knowledge and skills throughout their lives. However, many machine learning models cannot learn new tasks without forgetting past knowledge. In neural networks, it is common to use one neural network for each training task, and successive training will reduce the accuracy of the previous task. This problem is called catastrophic forgetting, and research on continual learning is being conducted to solve it. In this paper, we proposed a method to reducing catastrophic forgetting, where new tasks are trained without retaining previously trained data. Our method assumes that tasks are classification. Our method adds random data to the training data in order to combine models trained on different tasks to avoid exceed generalization in the domain where train data do not exist combines models separately trained for each tasks. 
In the evaluation experiments, we confirmed that our method reduced forgetting for the original two-dimensional dataset and MNIST dataset.\",\"PeriodicalId\":382967,\"journal\":{\"name\":\"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)\",\"volume\":\"53 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/APSIPAASC55919.2022.9980154\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/APSIPAASC55919.2022.9980154","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Animals, including humans, continuously acquire knowledge and skills throughout their lives. However, many machine learning models cannot learn new tasks without forgetting past knowledge. With neural networks, it is common to use one network per training task, and training on successive tasks reduces accuracy on the previous ones. This problem is called catastrophic forgetting, and research on continual learning aims to solve it. In this paper, we propose a method for reducing catastrophic forgetting in which new tasks are trained without retaining previously trained data. Our method assumes that the tasks are classification tasks. It combines models trained separately for each task, and adds random data to each task's training data so that the models do not over-generalize into regions where no training data exist. In the evaluation experiments, we confirmed that our method reduced forgetting on an original two-dimensional dataset and on the MNIST dataset.
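The abstract only sketches the mechanism, so the following Python snippet is a rough illustration of the general idea rather than the authors' implementation: each task's classifier is trained with extra uniformly sampled "background" points carrying a reject-style label, so the model stays unconfident outside its own data region, and separately trained task models are then combined by taking the most confident non-background prediction. The background label, the sampling range, and the max-confidence combination rule are all assumptions made for this sketch.

```python
# Illustration only (not the paper's code): per-task training with random
# background data, then a simple confidence-based combination of the models.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
BACKGROUND = -1  # hypothetical label meaning "this task has no data here"

def make_task(center):
    """Toy 2D classification task: two Gaussian blobs near a task-specific center."""
    X0 = rng.normal(center, 0.3, size=(100, 2))
    X1 = rng.normal(center + 1.0, 0.3, size=(100, 2))
    return np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)

def train_with_background(X, y, n_bg=200, low=-4.0, high=4.0):
    """Train one task model on its own data plus uniform random background points."""
    X_bg = rng.uniform(low, high, size=(n_bg, 2))          # assumed sampling range
    X_aug = np.vstack([X, X_bg])
    y_aug = np.concatenate([y, np.full(n_bg, BACKGROUND)])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X_aug, y_aug)
    return clf

# Two tasks trained separately; no old data is kept when the second is trained.
tasks = [make_task(np.array([-2.0, -2.0])), make_task(np.array([2.0, 2.0]))]
models = [train_with_background(X, y) for X, y in tasks]

def predict_combined(x):
    """Return (task index, class) of the most confident non-background prediction."""
    best_label, best_p = None, -1.0
    for t, clf in enumerate(models):
        proba = clf.predict_proba(x.reshape(1, -1))[0]
        for label, p in zip(clf.classes_, proba):
            if label != BACKGROUND and p > best_p:
                best_label, best_p = (t, label), p
    return best_label

print(predict_combined(np.array([-2.0, -2.0])))  # expected: a class from task 0
print(predict_combined(np.array([2.5, 2.5])))    # expected: a class from task 1
```

The background class is what keeps each model's decision regions local: without it, a network trained only on its own blobs would confidently label the entire plane, and the combined prediction would be dominated by whichever model happens to extrapolate more aggressively.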