Convolutional neural networks for texture recognition using transfer learning
Zack Chen-McCaig, R. Hoseinnezhad, A. Bab-Hadiashar
2017 International Conference on Control, Automation and Information Sciences (ICCAIS), October 2017. DOI: 10.1109/ICCAIS.2017.8217573
VGG-16 and Inception-v3 networks were trained on a texture dataset of muddy and clean cows using transfer learning. A new dataset of 600 images, similar in content to the actual texture dataset, was introduced as an intermediate training set. Starting from ImageNet weights, the networks were first fine-tuned on this similar dataset; the resulting weights were then fine-tuned again on the actual texture dataset of 584 images. This intermediate fine-tuning step is the novel element of the training method. The achieved validation accuracy was 95.5%, considerably better than the state-of-the-art 87%.
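The abstract does not include an implementation. The following is a minimal sketch of the described two-stage fine-tuning pipeline in Keras, using VGG-16 with ImageNet weights; the dataset paths, image size, batch size, learning rate, and epoch counts are illustrative assumptions, not values from the paper.

```python
# Sketch of the two-stage transfer-learning pipeline from the abstract:
# ImageNet weights -> fine-tune on a similar dataset -> fine-tune on the
# actual texture dataset. All paths and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (224, 224)  # assumed; VGG-16's native input resolution

def build_model(num_classes: int) -> tf.keras.Model:
    # Start from ImageNet weights and replace the classifier head.
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=IMG_SIZE + (3,), pooling="avg")
    base.trainable = False  # freeze the convolutional features initially
    head = layers.Dense(num_classes, activation="softmax")(base.output)
    return models.Model(base.input, head)

def load_dataset(path: str) -> tf.data.Dataset:
    # Hypothetical directory-per-class layout (e.g. muddy/ and clean/).
    return tf.keras.utils.image_dataset_from_directory(
        path, image_size=IMG_SIZE, batch_size=32)

model = build_model(num_classes=2)  # two texture classes: muddy vs. clean
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stage 1: fine-tune on the similar 600-image dataset (intermediate step).
model.fit(load_dataset("data/similar_textures"), epochs=10)

# Stage 2: continue training the same weights on the actual 584-image
# texture dataset.
model.fit(load_dataset("data/actual_textures"), epochs=10)
```

A common refinement of this scheme is to unfreeze the top convolutional blocks after the new head converges and continue at a lower learning rate; the abstract does not specify the paper's exact freezing schedule, so the frozen-base setup above is an assumption.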