{"title":"An Object Detection Network for Embedded System","authors":"Yanpeng Sun, Chenlu Wang, L. Qu","doi":"10.1109/IUCC/DSCI/SmartCNS.2019.00110","DOIUrl":null,"url":null,"abstract":"Object detection in images has a wide range of applications in various fields. However, many of the convolutional neural networks recently proposed have higher requirements on computing resources while achieving higher precision, which cannot guarantee good real-time performance on embedded platforms with limited resources. This paper proposed an object detection network suitable for embedded systems. The M-YOLO (Mobile-YOLO) model proposed in this paper combines depthwise separable convolution and residual blocks in feature extraction layers, which helps to reduce the amount of computation of the network. Multi-scale feature fusion is applied to the output layers to improve the accuracy. Experiments show that the M-YOLO model has 9.68M FLOPs (Floating Point Operations), which is about 22% of Tiny-YOLO model. The accuracy of the network reaches 56.61% on the PASCAL VOC dataset, and the speed in ARM is over 3 times faster than Tiny-YOLO model. 
The network is more suitable for embedded systems.","PeriodicalId":410905,"journal":{"name":"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IUCC/DSCI/SmartCNS.2019.00110","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Object detection in images has a wide range of applications across many fields. However, many recently proposed convolutional neural networks achieve higher precision at the cost of greater computing resources, and therefore cannot guarantee good real-time performance on embedded platforms with limited resources. This paper proposes an object detection network suitable for embedded systems. The proposed M-YOLO (Mobile-YOLO) model combines depthwise separable convolution and residual blocks in its feature extraction layers, which reduces the network's computational cost. Multi-scale feature fusion is applied to the output layers to improve accuracy. Experiments show that the M-YOLO model requires 9.68M FLOPs (Floating Point Operations), about 22% of the Tiny-YOLO model. The network reaches 56.61% accuracy on the PASCAL VOC dataset, and on ARM it runs more than 3 times faster than the Tiny-YOLO model. The network is therefore better suited to embedded systems.
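The FLOP reduction the abstract attributes to depthwise separable convolution can be quantified with a standard cost model. The sketch below is illustrative only (the layer sizes are hypothetical, not taken from the M-YOLO architecture): a standard k×k convolution costs H·W·Cin·Cout·k², while the depthwise-plus-pointwise factorization costs H·W·Cin·k² + H·W·Cin·Cout, for a ratio of 1/Cout + 1/k².

```python
def conv_flops(h, w, c_in, c_out, k):
    # Multiply-accumulates for a standard k x k convolution
    # (stride 1, 'same' padding, one output per spatial position).
    return h * w * c_in * c_out * k * k

def dw_separable_flops(h, w, c_in, c_out, k):
    # Depthwise pass: one k x k filter applied per input channel.
    depthwise = h * w * c_in * k * k
    # Pointwise pass: a 1 x 1 convolution that mixes channels.
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 52 x 52 feature map, 128 -> 256 channels, 3 x 3 kernel.
std = conv_flops(52, 52, 128, 256, 3)
sep = dw_separable_flops(52, 52, 128, 256, 3)
print(sep / std)  # 1/256 + 1/9 = 265/2304, roughly 0.115
```

For 3×3 kernels the savings approach 9× per layer, which is consistent in spirit with the roughly 4.5× overall reduction the paper reports relative to Tiny-YOLO (the output layers and non-factorized parts of the network dilute the per-layer gain).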