{"title":"深度量化成像的QKeras神经网络动物园","authors":"F. Loro, D. Pau, V. Tomaselli","doi":"10.1109/rtsi50628.2021.9597341","DOIUrl":null,"url":null,"abstract":"Neural network zoos are quite common in the literature and are particularly useful for demonstrating the potential of any deep learning framework by providing examples of its use to the Artificial Intelligence community. Unfortunately most of them uses FP32 (32bits floating point) or INT8 (8bits integer) precision for activation and weights. Communities such as TinyML are paying more and more attention to memory and energy-saving to achieve mW and below power consumptions and therefore to Deeply Quantized Neural Networks (DQNNs). Two frameworks: QKeras and Larq, are gaining momentum for defining and training DQNNs. To best of our knowledge, the only available zoo for DQNN is the Larq framework. In this work we developed a new QKeras zoo and comparing the accuracy with the available Larq zoo. To avoid costly re-training, we show how to re-use the weights from Larq zoo. We also developed the zoo with ten networks and matched the performance of the Larq zoo for seven out of ten networks. Our work will be made publicly available through a GitHub repository.","PeriodicalId":294628,"journal":{"name":"2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A QKeras Neural Network Zoo for Deeply Quantized Imaging\",\"authors\":\"F. Loro, D. Pau, V. Tomaselli\",\"doi\":\"10.1109/rtsi50628.2021.9597341\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural network zoos are quite common in the literature and are particularly useful for demonstrating the potential of any deep learning framework by providing examples of its use to the Artificial Intelligence community. Unfortunately most of them uses FP32 (32bits floating point) or INT8 (8bits integer) precision for activation and weights. Communities such as TinyML are paying more and more attention to memory and energy-saving to achieve mW and below power consumptions and therefore to Deeply Quantized Neural Networks (DQNNs). Two frameworks: QKeras and Larq, are gaining momentum for defining and training DQNNs. To best of our knowledge, the only available zoo for DQNN is the Larq framework. In this work we developed a new QKeras zoo and comparing the accuracy with the available Larq zoo. To avoid costly re-training, we show how to re-use the weights from Larq zoo. We also developed the zoo with ten networks and matched the performance of the Larq zoo for seven out of ten networks. 
Our work will be made publicly available through a GitHub repository.\",\"PeriodicalId\":294628,\"journal\":{\"name\":\"2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/rtsi50628.2021.9597341\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/rtsi50628.2021.9597341","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A QKeras Neural Network Zoo for Deeply Quantized Imaging
F. Loro, D. Pau, V. Tomaselli
2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI), 6 September 2021. DOI: 10.1109/rtsi50628.2021.9597341
Neural network zoos are common in the literature and are particularly useful for demonstrating the potential of a deep learning framework by providing the Artificial Intelligence community with examples of its use. Unfortunately, most of them use FP32 (32-bit floating-point) or INT8 (8-bit integer) precision for activations and weights. Communities such as TinyML are paying increasing attention to memory footprint and energy saving in order to reach power consumptions of mW and below, and therefore to Deeply Quantized Neural Networks (DQNNs). Two frameworks, QKeras and Larq, are gaining momentum for defining and training DQNNs. To the best of our knowledge, the only available zoo for DQNNs is the one provided by the Larq framework. In this work we developed a new QKeras zoo and compared its accuracy with the available Larq zoo. To avoid costly re-training, we show how to re-use the weights from the Larq zoo. The zoo comprises ten networks, and for seven out of ten we matched the performance of the Larq zoo. Our work will be made publicly available through a GitHub repository.
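As a minimal sketch (not code from the paper), the block below illustrates how a deeply quantized layer can be declared with QKeras quantizers, and how trained weights could be re-used by copying them layer by layer from a structurally identical Larq model through the standard Keras get_weights()/set_weights() API. The 4-bit precision, layer sizes, and function names are illustrative assumptions, not the configurations used in the zoo.

```python
# Minimal sketch, assuming a toy DQNN: 4-bit weights/activations in QKeras and a
# layer-by-layer weight copy from a structurally matching Larq model.
# Bit widths, layer sizes, and names are illustrative assumptions.
import tensorflow as tf
from qkeras import QConv2D, QDense, QActivation, quantized_bits

def build_qkeras_block(input_shape=(32, 32, 3), num_classes=10):
    """Small deeply quantized model with 4-bit weights and activations (assumed)."""
    inputs = tf.keras.Input(shape=input_shape)
    x = QConv2D(
        16, (3, 3), padding="same",
        kernel_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
        bias_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
    )(inputs)
    x = QActivation("quantized_relu(4)")(x)  # 4-bit quantized activation
    x = tf.keras.layers.Flatten()(x)
    outputs = QDense(
        num_classes,
        kernel_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
        bias_quantizer=quantized_bits(bits=4, integer=0, alpha=1),
    )(x)
    return tf.keras.Model(inputs, outputs)

def copy_weights(larq_model, qkeras_model):
    """Re-use trained weights without re-training, assuming both models share
    the same layer ordering and identical weight shapes."""
    for src, dst in zip(larq_model.layers, qkeras_model.layers):
        w = src.get_weights()
        if w:
            dst.set_weights(w)
```

In practice, a pre-trained model from the Larq zoo would be loaded first and then mapped onto its QKeras counterpart with a routine like copy_weights above; where the two frameworks' quantization schemes differ, the copied weights may need additional calibration or fine-tuning.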