PruneFaceDet: Pruning lightweight face detection network by sparsity training

Nanfei Jiang, Zhexiao Xiong, Hui Tian, Xu Zhao, Xiaojie Du, Chaoyang Zhao, Jinqiao Wang

Cognitive Computation and Systems, vol. 4, no. 4, pp. 391–399. Published 2022-06-09. DOI: 10.1049/ccs2.12065
Face detection is a basic step in many face analysis tasks. In practice, face detectors usually run on mobile devices with limited memory and computing resources, so it is important to keep them lightweight. To this end, current methods usually focus on directly designing lightweight detectors. However, whether the resource consumption of these lightweight detectors can be further reduced without sacrificing much accuracy has not been fully explored. In this study, we propose applying network pruning to a lightweight face detection network to further reduce its parameters and floating-point operations. To identify the less important channels, we train the network with sparsity regularisation on the channel scaling factors of each layer. After sparsity training, we remove the connections and corresponding weights whose scaling factors are near zero. We apply the proposed pruning pipeline to a state-of-the-art face detection method, EagleEye, and obtain a shrunken EagleEye model with fewer computing operations and parameters. The shrunken model achieves accuracy comparable to that of the unpruned model: using the proposed method, the shrunken EagleEye attains a 56.3% reduction in parameter size with almost no accuracy loss on the WiderFace dataset.
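The pipeline described above — an L1-style sparsity penalty pushing per-channel scaling factors toward zero during training, followed by removal of the channels whose factors end up near zero — can be sketched in a few lines. This is a minimal NumPy illustration of the two ingredients only; the function names, learning rate, penalty weight, and keep-ratio criterion are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def sparsity_step(gamma, grad_task, lr=0.01, lam=1e-4):
    """One gradient step on the channel scaling factors.

    The update combines the task-loss gradient with the L1
    subgradient lam * sign(gamma), which drives the scaling
    factors of unimportant channels toward zero.
    """
    return gamma - lr * (grad_task + lam * np.sign(gamma))

def select_channels(gamma, keep_ratio=0.5):
    """Boolean mask keeping the channels with the largest |gamma|.

    Channels whose scaling factors stayed near zero after
    sparsity training fall outside the top-k and are pruned.
    """
    k = max(1, int(len(gamma) * keep_ratio))
    order = np.argsort(-np.abs(gamma))
    keep = np.zeros(len(gamma), dtype=bool)
    keep[order[:k]] = True
    return keep

# Toy example: after sparsity training, three factors are near zero.
gamma = np.array([0.9, 0.001, 0.4, -0.002, 0.7, 0.003])
mask = select_channels(gamma, keep_ratio=0.5)
print(mask.tolist())  # [True, False, True, False, True, False]
```

In practice the scaling factors would typically be the BatchNorm scale parameters of each convolutional layer, and pruning a channel also removes the corresponding filter weights in the adjacent layers.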