{"title":"YOLO-Tight: an Efficient Dynamic Compression Method for YOLO Object Detection Networks","authors":"Wei Yan, Ting Liu, Yuzhuo Fu","doi":"10.1145/3457682.3457740","DOIUrl":null,"url":null,"abstract":"Deep learning algorithms perform well in the field of object detection. Object detection networks represented by YOLO, SSD and faster-RCNN have achieved excellent performance on public datasets such as VOC and COCO. However, deep learning models are difficult to deploy on the edge computing platform with less computing resources due to its huge amount of parameters and computation. In this paper, we propose an efficient dynamic sparsity method to help the network quickly mine important parameters, and then prune the unimportant weight channels, which makes the network model more compact and consumes less computation. In the case of high sparsity, our method is more robust than L1 regularization and other regularization forms, and can achieve better sparsity and pruning effects. Through this method, we can prune the YOLOv3 network and the enhanced YOLOv3-SPP3 network by up to 90%. This allows the network to achieve 5× reduction in FLOPs and maintain an accuracy loss of less than 1% on the BDD100k dataset.","PeriodicalId":142045,"journal":{"name":"2021 13th International Conference on Machine Learning and Computing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 13th International Conference on Machine Learning and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3457682.3457740","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Deep learning algorithms perform well in the field of object detection. Object detection networks such as YOLO, SSD, and Faster R-CNN have achieved excellent performance on public datasets such as VOC and COCO. However, deep learning models are difficult to deploy on edge computing platforms with limited computing resources because of their large numbers of parameters and heavy computation. In this paper, we propose an efficient dynamic sparsity method that helps the network quickly identify its important parameters, and then prunes the unimportant weight channels, yielding a more compact model that requires less computation. At high sparsity levels, our method is more robust than L1 regularization and other regularization schemes, and achieves better sparsity and pruning results. With this method, we can prune the YOLOv3 network and the enhanced YOLOv3-SPP3 network by up to 90%, giving a 5× reduction in FLOPs while keeping the accuracy loss below 1% on the BDD100K dataset.
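The abstract does not spell out the mechanics of the sparsify-then-prune pipeline, so the following is only a minimal PyTorch sketch of the general approach it builds on: an L1-style penalty drives per-channel scale factors toward zero during training, after which the lowest-magnitude channels are pruned. The functions `sparsity_penalty` and `channel_keep_masks`, the use of BatchNorm scale factors as channel-importance scores, and the hyperparameters `lam` and `prune_ratio` are all illustrative assumptions, not the paper's actual dynamic scheme (which the authors report to be more robust than plain L1 regularization at high sparsity).

```python
# Hypothetical sketch of sparsity-regularized channel pruning.
# Assumes channel importance is measured by BatchNorm scale factors
# (gamma), in the style of network-slimming approaches; this is NOT
# the paper's dynamic sparsity method.
import torch
import torch.nn as nn

def sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BatchNorm scale factors, added to the detection
    loss during training so unimportant channels drift toward zero."""
    gammas = [m.weight.abs().sum() for m in model.modules()
              if isinstance(m, nn.BatchNorm2d)]
    return lam * torch.stack(gammas).sum()

def channel_keep_masks(model: nn.Module, prune_ratio: float = 0.9):
    """Rank all BN scale factors globally and mark the smallest
    `prune_ratio` fraction of channels for removal (True = keep)."""
    all_gammas = torch.cat([m.weight.detach().abs().flatten()
                            for m in model.modules()
                            if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(all_gammas, prune_ratio)
    return {name: m.weight.detach().abs() > threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```

In this kind of pipeline, the training objective becomes the detection loss plus `sparsity_penalty(model)`; after convergence, channels whose keep-mask is False are physically removed from each convolution/BN pair and the pruned network is fine-tuned to recover accuracy.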