Vehicle Type Classification with Small Dataset and Transfer Learning Techniques

EAI Endorsed Transactions on Industrial Networks and Intelligent Systems (Q2, Engineering)
Quang-Tu Pham, Dinh-Dat Pham, Khanh-Ly Can, Hieu Dao To, Hoang-Dieu Vu
{"title":"利用小型数据集和迁移学习技术进行车辆类型分类","authors":"Quang-Tu Pham, Dinh-Dat Pham, Khanh-Ly Can, Hieu Dao To, Hoang-Dieu Vu","doi":"10.4108/eetinis.v11i2.4678","DOIUrl":null,"url":null,"abstract":"This study delves into the application of deep learning training techniques using a restricted dataset, encompassing around 400 vehicle images sourced from Kaggle. Faced with the challenges of limited data, the impracticality of training models from scratch becomes apparent, advocating instead for the utilization of pre-trained models with pre-trained weights. The investigation considers three prominent models—EfficientNetB0, ResNetB0, and MobileNetV2—with EfficientNetB0 emerging as the most proficient choice. Employing the gradually unfreeze layer technique over a specified number of epochs, EfficientNetB0 exhibits remarkable accuracy, reaching 99.5% on the training dataset and 97% on the validation dataset. In contrast, training models from scratch results in notably lower accuracy. In this context, knowledge distillation proves pivotal, overcoming this limitation and significantly improving accuracy from 29.5% in training and 20.5% in validation to 54% and 45%, respectively. This study uniquely contributes by exploring transfer learning with gradually unfreeze layers and elucidates the potential of knowledge distillation. It highlights their effectiveness in robustly enhancing model performance under data scarcity, thus addressing challenges associated with training deep learning models on limited datasets. The findings underscore the practical significance of these techniques in achieving superior results when confronted with data constraints in real-world scenarios","PeriodicalId":33474,"journal":{"name":"EAI Endorsed Transactions on Industrial Networks and Intelligent Systems","volume":"80 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vehicle Type Classification with Small Dataset and Transfer Learning Techniques\",\"authors\":\"Quang-Tu Pham, Dinh-Dat Pham, Khanh-Ly Can, Hieu Dao To, Hoang-Dieu Vu\",\"doi\":\"10.4108/eetinis.v11i2.4678\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study delves into the application of deep learning training techniques using a restricted dataset, encompassing around 400 vehicle images sourced from Kaggle. Faced with the challenges of limited data, the impracticality of training models from scratch becomes apparent, advocating instead for the utilization of pre-trained models with pre-trained weights. The investigation considers three prominent models—EfficientNetB0, ResNetB0, and MobileNetV2—with EfficientNetB0 emerging as the most proficient choice. Employing the gradually unfreeze layer technique over a specified number of epochs, EfficientNetB0 exhibits remarkable accuracy, reaching 99.5% on the training dataset and 97% on the validation dataset. In contrast, training models from scratch results in notably lower accuracy. In this context, knowledge distillation proves pivotal, overcoming this limitation and significantly improving accuracy from 29.5% in training and 20.5% in validation to 54% and 45%, respectively. This study uniquely contributes by exploring transfer learning with gradually unfreeze layers and elucidates the potential of knowledge distillation. 
It highlights their effectiveness in robustly enhancing model performance under data scarcity, thus addressing challenges associated with training deep learning models on limited datasets. The findings underscore the practical significance of these techniques in achieving superior results when confronted with data constraints in real-world scenarios\",\"PeriodicalId\":33474,\"journal\":{\"name\":\"EAI Endorsed Transactions on Industrial Networks and Intelligent Systems\",\"volume\":\"80 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EAI Endorsed Transactions on Industrial Networks and Intelligent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4108/eetinis.v11i2.4678\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EAI Endorsed Transactions on Industrial Networks and Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4108/eetinis.v11i2.4678","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Engineering","Score":null,"Total":0}
引用次数: 0

Abstract

This study delves into the application of deep learning training techniques on a restricted dataset of around 400 vehicle images sourced from Kaggle. With so little data, training models from scratch proves impractical, and the study instead advocates using pre-trained models with pre-trained weights. The investigation considers three prominent models (EfficientNetB0, ResNetB0, and MobileNetV2), with EfficientNetB0 emerging as the best-performing choice. Employing a gradual layer-unfreezing technique over a specified number of epochs, EfficientNetB0 exhibits remarkable accuracy, reaching 99.5% on the training dataset and 97% on the validation dataset. In contrast, training models from scratch results in notably lower accuracy. In this context, knowledge distillation proves pivotal, overcoming this limitation and significantly improving accuracy from 29.5% in training and 20.5% in validation to 54% and 45%, respectively. The study's distinct contributions are its exploration of transfer learning with gradually unfrozen layers and its demonstration of the potential of knowledge distillation. It highlights their effectiveness in robustly enhancing model performance under data scarcity, thus addressing challenges associated with training deep learning models on limited datasets. The findings underscore the practical significance of these techniques in achieving superior results when confronted with data constraints in real-world scenarios.
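The paper's code is not included here, but the transfer-learning workflow described in the abstract, a pre-trained backbone with a new classification head that is fine-tuned by progressively unfreezing layers at a reduced learning rate, can be sketched roughly as follows. This is a minimal illustration assuming TensorFlow/Keras; the number of vehicle classes, the unfreezing schedule, and the learning rates are illustrative assumptions, not values taken from the paper.

```python
# Sketch of transfer learning with gradual layer unfreezing (assumed Keras setup).
import tensorflow as tf

NUM_CLASSES = 5          # assumed number of vehicle types; not stated in the abstract
IMG_SIZE = (224, 224)

# Load EfficientNetB0 with ImageNet weights, dropping the original classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg"
)
base.trainable = False   # start with the backbone fully frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

def compile_model(lr):
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# Phase 1: train only the new head while the pre-trained features stay fixed.
compile_model(1e-3)
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Later phases: unfreeze progressively larger top portions of the backbone,
# recompiling with a lower learning rate so the ImageNet features are not destroyed.
for n_unfrozen in (20, 60, len(base.layers)):   # assumed schedule
    base.trainable = True
    for layer in base.layers[:-n_unfrozen]:
        layer.trainable = False
    compile_model(1e-4)
    # model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Freezing the backbone first lets the randomly initialized head stabilize before the pre-trained weights are perturbed, which matters when only a few hundred training images are available.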
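Knowledge distillation, which the abstract credits with lifting the from-scratch model from 29.5%/20.5% to 54%/45% accuracy, can likewise be sketched. In this hypothetical Keras implementation the fine-tuned pre-trained network acts as the teacher and the from-scratch network as the student; the temperature, the loss weight alpha, and the assumption that both models output raw logits are illustrative choices, not details reported by the paper.

```python
# Sketch of knowledge distillation with a custom Keras training step (assumptions noted above).
import tensorflow as tf

class Distiller(tf.keras.Model):
    def __init__(self, teacher, student, temperature=4.0, alpha=0.5):
        super().__init__()
        self.teacher, self.student = teacher, student          # both assumed to output raw logits
        self.temperature, self.alpha = temperature, alpha
        self.ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        self.kl = tf.keras.losses.KLDivergence()

    def train_step(self, data):
        x, y = data
        teacher_logits = self.teacher(x, training=False)        # teacher is kept frozen
        with tf.GradientTape() as tape:
            student_logits = self.student(x, training=True)
            # Hard loss: match the ground-truth labels.
            hard_loss = self.ce(y, student_logits)
            # Soft loss: match the teacher's temperature-softened distribution.
            soft_loss = self.kl(
                tf.nn.softmax(teacher_logits / self.temperature),
                tf.nn.softmax(student_logits / self.temperature),
            ) * self.temperature ** 2
            loss = self.alpha * hard_loss + (1.0 - self.alpha) * soft_loss
        grads = tape.gradient(loss, self.student.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.student.trainable_variables))
        return {"loss": loss}

# Usage sketch:
# distiller = Distiller(teacher_model, student_model)
# distiller.compile(optimizer=tf.keras.optimizers.Adam(1e-3))
# distiller.fit(train_ds, epochs=30)
```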
Source journal metrics: CiteScore 4.00; self-citation rate 0.00%; 15 articles published; review time 10 weeks.