{"title":"用mobilenet变浅:小波池的影响","authors":"S. El-Khamy, A. Al-Kabbany, Shimaa El-bana","doi":"10.1109/NRSC52299.2021.9509825","DOIUrl":null,"url":null,"abstract":"MobileNet is a light-weight neural network model that has facilitated harnessing the power of deep learning on mobile devices. The advances in pervasive computing and the ever-increasing interest in deep learning has resulted in a growing research attention on the enhancement of the MobileNet architecture. Beside the enhancement in convolution layers, recent literature has featured new directions for implementing the pooling layers. In this work, we propose a new model based on the MobileNet-V1 architecture, and we investigate the impact of wavelet pooling on the performance of the proposed model. While traditional neighborhood pooling can result in information loss, which negatively impacts any succeeding feature extraction, wavelet pooling allows us to utilize spectral information which is useful in most image processing tasks. On two widely adopted datasets, we evaluated the performance of the proposed model, and compared to the baseline MobileNet, we attained a 10% and a 16% increase in classification accuracy on CIFAR-10 and CIFAR-100 respectively. We also evaluated a shallow version of the proposed architecture with wavelet pooling, and we showed that it maintained the classification accuracy either higher than, or <1% less than, the deep versions of MobileNet while decreasing the number of model parameters by almost 40%.","PeriodicalId":231431,"journal":{"name":"2021 38th National Radio Science Conference (NRSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Going Shallower with MobileNets: On the Impact of Wavelet Pooling\",\"authors\":\"S. El-Khamy, A. Al-Kabbany, Shimaa El-bana\",\"doi\":\"10.1109/NRSC52299.2021.9509825\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"MobileNet is a light-weight neural network model that has facilitated harnessing the power of deep learning on mobile devices. The advances in pervasive computing and the ever-increasing interest in deep learning has resulted in a growing research attention on the enhancement of the MobileNet architecture. Beside the enhancement in convolution layers, recent literature has featured new directions for implementing the pooling layers. In this work, we propose a new model based on the MobileNet-V1 architecture, and we investigate the impact of wavelet pooling on the performance of the proposed model. While traditional neighborhood pooling can result in information loss, which negatively impacts any succeeding feature extraction, wavelet pooling allows us to utilize spectral information which is useful in most image processing tasks. On two widely adopted datasets, we evaluated the performance of the proposed model, and compared to the baseline MobileNet, we attained a 10% and a 16% increase in classification accuracy on CIFAR-10 and CIFAR-100 respectively. 
We also evaluated a shallow version of the proposed architecture with wavelet pooling, and we showed that it maintained the classification accuracy either higher than, or <1% less than, the deep versions of MobileNet while decreasing the number of model parameters by almost 40%.\",\"PeriodicalId\":231431,\"journal\":{\"name\":\"2021 38th National Radio Science Conference (NRSC)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 38th National Radio Science Conference (NRSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NRSC52299.2021.9509825\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 38th National Radio Science Conference (NRSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NRSC52299.2021.9509825","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MobileNet is a lightweight neural network model that has made it practical to harness the power of deep learning on mobile devices. Advances in pervasive computing and the ever-increasing interest in deep learning have drawn growing research attention to enhancing the MobileNet architecture. Besides enhancements to the convolution layers, recent literature has explored new directions for implementing the pooling layers. In this work, we propose a new model based on the MobileNet-V1 architecture and investigate the impact of wavelet pooling on its performance. Whereas traditional neighborhood pooling can cause information loss that degrades any subsequent feature extraction, wavelet pooling allows us to exploit spectral information, which is useful in most image processing tasks. We evaluated the proposed model on two widely adopted datasets and, compared to the baseline MobileNet, attained 10% and 16% increases in classification accuracy on CIFAR-10 and CIFAR-100, respectively. We also evaluated a shallow version of the proposed architecture with wavelet pooling and showed that it kept classification accuracy either higher than, or less than 1% below, that of the deep versions of MobileNet while reducing the number of model parameters by almost 40%.
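As a concrete illustration of the pooling operation contrasted with neighborhood pooling above, the following is a minimal sketch of a single-level Haar wavelet pooling layer in PyTorch that keeps only the approximation (LL) sub-band. The class name HaarWaveletPool2d, the choice of the Haar basis, and the decision to discard the detail sub-bands are assumptions made for illustration; the paper's exact wavelet configuration and sub-band handling may differ.

import torch
import torch.nn as nn

class HaarWaveletPool2d(nn.Module):
    """Downsample a feature map by 2x by keeping the LL (approximation)
    sub-band of a single-level 2D Haar wavelet transform.

    Illustrative sketch only; not the paper's reference implementation.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C, H, W) with even H and W.
        a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
        b = x[..., 0::2, 1::2]  # top-right
        c = x[..., 1::2, 0::2]  # bottom-left
        d = x[..., 1::2, 1::2]  # bottom-right
        # Orthonormal Haar LL coefficients: sum of the 2x2 block divided by 2.
        return (a + b + c + d) / 2.0

if __name__ == "__main__":
    pool = HaarWaveletPool2d()
    feat = torch.randn(1, 32, 32, 32)  # e.g. a CIFAR-sized feature map
    print(pool(feat).shape)            # torch.Size([1, 32, 16, 16])

Such a layer can be swapped in wherever a stride-2 pooling or downsampling step would otherwise appear in the network, which is the general role wavelet pooling plays relative to neighborhood pooling in the abstract.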