Incorporating Reinforcement Learning for Quality-aware Sample Selection in Deep Architecture Training
Gereziher W. Adhane, Mohammad Mahdi Dehshibi, D. Masip
2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS), August 2022. DOI: 10.1109/COINS54846.2022.9854971
Training a convolutional neural network (CNN) to optimum performance while maintaining generalizability requires many samples. Several studies, however, have indicated that not all input data in large datasets are informative for the model, and using them for training can degrade the model's performance and add uncertainty. Furthermore, some domains, such as medicine, lack sufficient labelled data to train a deep learning model from scratch, necessitating transfer learning to fine-tune a model pretrained in another domain. This paper proposes a transfer learning strategy based on partially supervised reinforcement learning (RL) that addresses these concerns by selecting the most informative samples while avoiding negative transfer from the dataset. We conducted several experiments on the benchmark image classification datasets MNIST, Fashion-MNIST, and CIFAR-10 to create a fair test harness for assessing the performance of the proposed strategy, which can be extended to other domains in the future. The results show that the proposed strategy outperforms classical training methods.
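The abstract does not spell out how an RL agent would score and pick training samples. As a rough, hypothetical illustration (not the authors' method, which the paper itself defines), quality-aware selection can be sketched as a bandit-style loop: each sample carries a running "informativeness" estimate (e.g. the loss drop it produced when used), and each step picks the top-scoring samples with some epsilon-greedy exploration. All names and the reward signal below are assumptions for the sketch.

```python
import random

random.seed(0)

class SampleSelector:
    """Hypothetical bandit-style selector: keeps a running informativeness
    score per training sample and picks the top-k by score, with
    epsilon-greedy exploration over the rest of the dataset."""

    def __init__(self, n_samples, epsilon=0.1):
        self.scores = [0.0] * n_samples  # estimated informativeness
        self.counts = [0] * n_samples    # times each sample was selected
        self.epsilon = epsilon

    def select(self, k):
        # explore: an occasional uniformly random mini-batch
        if random.random() < self.epsilon:
            return random.sample(range(len(self.scores)), k)
        # exploit: the k samples with the highest estimated informativeness
        return sorted(range(len(self.scores)),
                      key=lambda i: self.scores[i])[-k:]

    def update(self, indices, rewards):
        # incremental mean of the observed reward (e.g. per-sample loss drop)
        for i, r in zip(indices, rewards):
            self.counts[i] += 1
            self.scores[i] += (r - self.scores[i]) / self.counts[i]

# Toy run: pretend samples 0-4 are informative (large simulated "loss drop"),
# the rest are not; in real training the reward would come from the model.
selector = SampleSelector(n_samples=20, epsilon=0.2)
for _ in range(300):
    batch = selector.select(k=5)
    rewards = [(1.0 if i < 5 else 0.1) + random.gauss(0, 0.05)
               for i in batch]
    selector.update(batch, rewards)

top5 = sorted(sorted(range(20), key=lambda i: selector.scores[i])[-5:])
print(top5)  # the selector should converge on the informative samples
```

In a real pipeline the reward would be measured from the network (for instance, the decrease in validation loss after a gradient step on the chosen batch), and the selector would replace the uniform shuffling of a standard data loader.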