{"title":"Towards Lower Precision Quantization for Pedestrian Detection in Crowded Scenario","authors":"Mickael Cormier, Dmitrii Seletkov, J. Beyerer","doi":"10.1109/EUROCON52738.2021.9535539","DOIUrl":null,"url":null,"abstract":"Automatic pedestrian detection in real-world un-cooperative scenarios is a well-known problem in computer vision, which has again gained in visibility last year due to distancing requirements. This remains a very challenging task, especially in crowded areas. Due to diverse technical and privacy issues, embedded systems such as smart cameras and smaller drones are becoming ubiquitous. Those complex detection models are not designed for on-edge processing in resource-constrained environments. Therefore, quantization techniques are required, in order to reduce the weights of a model to low-precision and not only effectively compress the model, but also allow to use low bitwidth arithmetic, which in term can be accelerated from specialized hardware. However, using an effective quantization scheme while maintaining accuracy is challenging. In this work we first establish a Quantization-aware training (QAT) and Post-training Quantization (PTQ) baseline for 8-bit uniform quantization to RetinaNet for person detection on the extremely challenging PANDA dataset. Those achieve near lossless performance in terms of accuracy by about 5× speed-up of the CPU inference and 4× model size reduction for 8-bit PTQ quantized model. Further experiments with aggressive quantization scheme in 4- and 2-bit show diverse challenges resulting in severe instabilities. We apply both uniform and non-uniform quantization to overcome those and provide insights and strategies to fully quantize in 4- and 2-bit. Through this process we systematically evaluate the sensibility of individual parts of RetinaNet for quantization in very low precision. Finally, we show the resistance of quantization for limited amount of data.","PeriodicalId":328338,"journal":{"name":"IEEE EUROCON 2021 - 19th International Conference on Smart Technologies","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE EUROCON 2021 - 19th International Conference on Smart Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EUROCON52738.2021.9535539","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Automatic pedestrian detection in real-world, uncooperative scenarios is a well-known problem in computer vision, which gained renewed visibility last year due to distancing requirements. It remains a very challenging task, especially in crowded areas. Because of diverse technical and privacy concerns, embedded systems such as smart cameras and small drones are becoming ubiquitous, yet complex detection models are not designed for on-edge processing in such resource-constrained environments. Quantization techniques are therefore required to reduce model weights to low precision, which not only effectively compresses the model but also enables low-bitwidth arithmetic that can in turn be accelerated by specialized hardware. However, applying an effective quantization scheme while maintaining accuracy is challenging. In this work we first establish Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ) baselines for 8-bit uniform quantization of RetinaNet for person detection on the extremely challenging PANDA dataset. These achieve near-lossless accuracy, with about a 5× speed-up in CPU inference and a 4× model size reduction for the 8-bit PTQ-quantized model. Further experiments with aggressive 4- and 2-bit quantization schemes reveal diverse challenges resulting in severe instabilities. We apply both uniform and non-uniform quantization to overcome these and provide insights and strategies for full 4- and 2-bit quantization. Through this process we systematically evaluate the sensitivity of individual parts of RetinaNet to quantization at very low precision. Finally, we show the robustness of quantization under limited amounts of data.
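The abstract does not include code; as a minimal sketch only, the snippet below illustrates the generic per-tensor uniform affine quantization that 8-bit (and lower-bit) PTQ baselines typically build on. The function name, constants, and the random stand-in weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def uniform_affine_quantize(x, num_bits=8):
    """Quantize a float array to unsigned integers with one per-tensor
    scale and zero-point, then dequantize back for error inspection."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # Include zero in the range so it is exactly representable (e.g. for padding).
    x_min, x_max = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin) or 1.0
    zero_point = int(np.clip(round(qmin - x_min / scale), qmin, qmax))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    x_dequant = (q.astype(np.float32) - zero_point) * scale
    return x_dequant, scale, zero_point, q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in for a conv weight
    for bits in (8, 4, 2):
        w_hat, scale, zp, _ = uniform_affine_quantize(w, num_bits=bits)
        print(f"{bits}-bit: scale={scale:.4f} zero_point={zp} "
              f"mean abs error={np.abs(w - w_hat).mean():.4f}")
```

Running the toy example shows why the paper's 4- and 2-bit settings are harder: the reconstruction error grows sharply as the number of quantization levels shrinks, which is where non-uniform schemes and sensitivity analysis of individual network parts become relevant.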