{"title":"分类任务中的层间增强","authors":"Satoru Mizusawa, Y. Sei","doi":"10.1109/iCCECE52344.2021.9534840","DOIUrl":null,"url":null,"abstract":"In deep learning, it is necessary to train on huge datasets to obtain accurate models, but in certain domains, such as medical imaging, it is difficult to develop large datasets. Therefore, research is being conducted to realize good accuracy, even with small datasets. One strategy to achieve good accuracy with small datasets is input data augmentation. However, input data augmentation needs to be carefully prepared according to the domain. In this article, we propose an interlayer augmentation method that produces new data between layers. Then, we propose batch generalization (BG) and random BG (RBG) as specific methods. We applied BG and RBG to VGG, ResNet, and ViT, evaluated each using CIFAR10 and CIFAR100 classification tasks, and compared them with scratch learning. We obtained an average improvement of 0.39% and 0.27% for RBG and BG, respectively, in CIFAR10 and an average improvement of 1.07% and 0.30% for RBG and BG, respectively, in CIFAR100. In particular, in all cases, RBG showed better results than scratch learning.","PeriodicalId":128679,"journal":{"name":"2021 International Conference on Computing, Electronics & Communications Engineering (iCCECE)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Interlayer Augmentation in a Classification Task\",\"authors\":\"Satoru Mizusawa, Y. Sei\",\"doi\":\"10.1109/iCCECE52344.2021.9534840\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In deep learning, it is necessary to train on huge datasets to obtain accurate models, but in certain domains, such as medical imaging, it is difficult to develop large datasets. Therefore, research is being conducted to realize good accuracy, even with small datasets. One strategy to achieve good accuracy with small datasets is input data augmentation. However, input data augmentation needs to be carefully prepared according to the domain. In this article, we propose an interlayer augmentation method that produces new data between layers. Then, we propose batch generalization (BG) and random BG (RBG) as specific methods. We applied BG and RBG to VGG, ResNet, and ViT, evaluated each using CIFAR10 and CIFAR100 classification tasks, and compared them with scratch learning. We obtained an average improvement of 0.39% and 0.27% for RBG and BG, respectively, in CIFAR10 and an average improvement of 1.07% and 0.30% for RBG and BG, respectively, in CIFAR100. 
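The abstract names BG and RBG but does not specify how they generate new data between layers, so the sketch below illustrates only the general pattern of interlayer augmentation: a module inserted between two stages of a backbone that perturbs intermediate activations during training and acts as the identity at evaluation time. InterlayerAugment, mix_ratio, and the toy backbone split are hypothetical choices for illustration, not the authors' published BG/RBG method.

# Minimal PyTorch sketch of augmenting data *between* layers rather than
# at the input. Hypothetical illustration; not the paper's BG/RBG.
import torch
import torch.nn as nn

class InterlayerAugment(nn.Module):
    """Perturbs intermediate activations during training only."""
    def __init__(self, mix_ratio: float = 0.1):
        super().__init__()
        self.mix_ratio = mix_ratio  # assumed hyperparameter, 0 disables mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # identity at evaluation time
        # Blend each sample's feature map with a randomly chosen batch mate,
        # producing "new data" between layers from existing samples.
        perm = torch.randperm(x.size(0), device=x.device)
        return (1.0 - self.mix_ratio) * x + self.mix_ratio * x[perm]

# Usage: insert the module between two stages of any backbone
# (VGG, ResNet, ViT, ...). A toy CIFAR-sized example:
front = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
rest = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))
model = nn.Sequential(front, InterlayerAugment(mix_ratio=0.1), rest)

images = torch.randn(8, 3, 32, 32)  # batch of CIFAR-sized inputs
logits = model(images)              # shape: (8, 10)

With a small mix_ratio the module acts as a mild feature-space perturbation, so the original labels can be kept unchanged during training, and the evaluation path is untouched because the module returns its input verbatim outside training mode.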