{"title":"基于转置卷积的量化神经网络精度评价","authors":"Cristian Sestito, S. Perri, Rob Stewart","doi":"10.1109/IJCNN55064.2022.9892671","DOIUrl":null,"url":null,"abstract":"Several modern applications in the field of Artificial Intelligence exploit deep learning to make accurate decisions. Recent work on compression techniques allows for deep learning applications, such as computer vision, to run on Edge Computing devices. For instance, quantizing the precision of deep learning architectures allows Edge Computing devices to achieve high throughput at low power. Quantization has been mainly focused on multilayer perceptrons and convolution-based models for classification problems. However, its impact over more complex scenarios, such as image up-sampling, is still underexplored. This paper presents a systematic evaluation of the accuracy achieved by quantized neural networks when performing image up-sampling in three different applications: image compression/decompression, synthetic image generation and semantic segmentation. Taking into account the promising attitude of learnable filters to predict pixels, transposed convolutional layers are used for up-sampling. Experimental results based on analytical metrics show that acceptable accuracies are reached with quantization spanning between 3 and 7 bits. Based on the visual inspection, the range 2–6 bits guarantees appropriate accuracy.","PeriodicalId":106974,"journal":{"name":"2022 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Accuracy Evaluation of Transposed Convolution-Based Quantized Neural Networks\",\"authors\":\"Cristian Sestito, S. Perri, Rob Stewart\",\"doi\":\"10.1109/IJCNN55064.2022.9892671\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Several modern applications in the field of Artificial Intelligence exploit deep learning to make accurate decisions. Recent work on compression techniques allows for deep learning applications, such as computer vision, to run on Edge Computing devices. For instance, quantizing the precision of deep learning architectures allows Edge Computing devices to achieve high throughput at low power. Quantization has been mainly focused on multilayer perceptrons and convolution-based models for classification problems. However, its impact over more complex scenarios, such as image up-sampling, is still underexplored. This paper presents a systematic evaluation of the accuracy achieved by quantized neural networks when performing image up-sampling in three different applications: image compression/decompression, synthetic image generation and semantic segmentation. Taking into account the promising attitude of learnable filters to predict pixels, transposed convolutional layers are used for up-sampling. Experimental results based on analytical metrics show that acceptable accuracies are reached with quantization spanning between 3 and 7 bits. 
Based on the visual inspection, the range 2–6 bits guarantees appropriate accuracy.\",\"PeriodicalId\":106974,\"journal\":{\"name\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN55064.2022.9892671\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN55064.2022.9892671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Accuracy Evaluation of Transposed Convolution-Based Quantized Neural Networks
Several modern applications in the field of Artificial Intelligence exploit deep learning to make accurate decisions. Recent work on compression techniques allows deep learning applications, such as computer vision, to run on Edge Computing devices. For instance, quantizing the numerical precision of deep learning architectures allows Edge Computing devices to achieve high throughput at low power. Quantization has mainly focused on multilayer perceptrons and convolution-based models for classification problems. However, its impact on more complex scenarios, such as image up-sampling, is still underexplored. This paper presents a systematic evaluation of the accuracy achieved by quantized neural networks when performing image up-sampling in three different applications: image compression/decompression, synthetic image generation, and semantic segmentation. Given the promising ability of learnable filters to predict pixels, transposed convolutional layers are used for up-sampling. Experimental results based on analytical metrics show that acceptable accuracies are reached with quantization between 3 and 7 bits. Based on visual inspection, the 2–6 bit range guarantees appropriate accuracy.
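To make the up-sampling setting concrete, the sketch below is an illustrative assumption rather than the paper's code: it shows a transposed-convolution layer that doubles spatial resolution, with its weights fake-quantized to 3, 5, and 7 bits using a simple symmetric uniform scheme in PyTorch. The paper's actual quantization method and training pipeline may differ.

```python
# Minimal sketch, not the paper's implementation: assumes PyTorch and a simple
# symmetric uniform (fake) quantization of the weights of a transposed-convolution
# layer configured for 2x image up-sampling.
import torch
import torch.nn as nn

def quantize_uniform(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    """Map x onto a symmetric uniform grid with num_bits of precision."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 3 positive levels for 3 bits
    scale = x.abs().max() / qmax              # per-tensor scale factor
    return torch.round(x / scale).clamp(-qmax, qmax) * scale

# kernel_size=4, stride=2, padding=1 is a common configuration that doubles the
# spatial resolution of the feature maps.
upsample = nn.ConvTranspose2d(in_channels=64, out_channels=32,
                              kernel_size=4, stride=2, padding=1)

with torch.no_grad():
    fp_weight = upsample.weight.clone()       # keep the full-precision reference
    x = torch.randn(1, 64, 16, 16)            # dummy 16x16 feature maps
    for bits in (3, 5, 7):                    # within the span reported above
        upsample.weight.copy_(quantize_uniform(fp_weight, bits))
        y = upsample(x)
        print(bits, tuple(y.shape))           # -> (1, 32, 32, 32): up-sampled 2x
```

In such a setup, accuracy at a given bit width would be judged by comparing the up-sampled outputs of the quantized and full-precision layers on the target task, which is the kind of comparison the evaluation above reports with analytical metrics and visual inspection.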