{"title":"修剪滤波器和类:面向设备上定制的卷积神经网络","authors":"Jia Guo, M. Potkonjak","doi":"10.1145/3089801.3089806","DOIUrl":null,"url":null,"abstract":"In recent years, we have witnessed more and more mobile applications based on deep learning. Widely used as they may be, those applications provide little flexibility to cater to the diversified needs of different groups of users. For users facing a classification problem, it is natural that some classes are more important to them, while the rest are not. We thus propose a lightweight method that allows users to prune the unneeded classes together with associated filters from convolutional neural networks (CNNs). Such customization can result in substantial reduction in computational costs at test time. Early results have shown that after pruning the Network-in-Network (NIN) model on CIFAR-10 dataset\\cite{lim2013network} down to a 5-class classifier, we can trade a 3\\% loss in accuracy for a 1.63$\\times$ gain in energy consumption and a 1.24$\\times$ improvement in latency when experimenting on an off-the-shelf smartphone, while the procedure incurs with very little overhead. After pruning, the custom-tailored model can still achieve a higher classification accuracy than the unmodified classifier because of a smaller problem space that more accurately reflects users' needs.","PeriodicalId":125567,"journal":{"name":"EMDL '17","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"Pruning Filters and Classes: Towards On-Device Customization of Convolutional Neural Networks\",\"authors\":\"Jia Guo, M. Potkonjak\",\"doi\":\"10.1145/3089801.3089806\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, we have witnessed more and more mobile applications based on deep learning. 
Widely used as they may be, those applications provide little flexibility to cater to the diversified needs of different groups of users. For users facing a classification problem, it is natural that some classes are more important to them, while the rest are not. We thus propose a lightweight method that allows users to prune the unneeded classes together with associated filters from convolutional neural networks (CNNs). Such customization can result in substantial reduction in computational costs at test time. Early results have shown that after pruning the Network-in-Network (NIN) model on CIFAR-10 dataset\\\\cite{lim2013network} down to a 5-class classifier, we can trade a 3\\\\% loss in accuracy for a 1.63$\\\\times$ gain in energy consumption and a 1.24$\\\\times$ improvement in latency when experimenting on an off-the-shelf smartphone, while the procedure incurs with very little overhead. After pruning, the custom-tailored model can still achieve a higher classification accuracy than the unmodified classifier because of a smaller problem space that more accurately reflects users' needs.\",\"PeriodicalId\":125567,\"journal\":{\"name\":\"EMDL '17\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EMDL '17\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3089801.3089806\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EMDL 
'17","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3089801.3089806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Pruning Filters and Classes: Towards On-Device Customization of Convolutional Neural Networks
In recent years, we have witnessed more and more mobile applications based on deep learning. Widely used as they may be, these applications offer little flexibility to cater to the diversified needs of different groups of users. For a user facing a classification problem, it is natural that some classes matter more than others. We therefore propose a lightweight method that allows users to prune unneeded classes, together with their associated filters, from convolutional neural networks (CNNs). Such customization can substantially reduce computational cost at test time. Early results show that after pruning the Network-in-Network (NIN) model on the CIFAR-10 dataset \cite{lim2013network} down to a 5-class classifier, we can trade a 3\% loss in accuracy for a 1.63$\times$ reduction in energy consumption and a 1.24$\times$ improvement in latency on an off-the-shelf smartphone, while the pruning procedure itself incurs very little overhead. After pruning, the custom-tailored model can still achieve higher classification accuracy than the unmodified classifier on the remaining classes, because the smaller problem space more accurately reflects users' needs.
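The core idea of the abstract, removing unneeded output classes and the filters that mainly serve them, can be illustrated with a minimal NumPy sketch. All names (`keep_classes`, `conv_w`, `fc_w`) and the L1-norm scoring heuristic below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: one conv layer feeding a linear classifier over filter outputs.
num_classes, num_filters = 10, 16
conv_w = rng.standard_normal((num_filters, 3, 3, 3))    # (filters, in_ch, kH, kW)
fc_w = rng.standard_normal((num_classes, num_filters))  # one row per class

# Step 1: keep only the classes this user cares about
# (e.g. a 5-class subset of CIFAR-10, as in the abstract).
keep_classes = [0, 2, 3, 5, 7]
fc_w = fc_w[keep_classes]                               # shape (5, num_filters)

# Step 2: drop filters that contribute little to the remaining classes,
# scored here by the L1 norm of their outgoing classifier weights.
scores = np.abs(fc_w).sum(axis=0)                       # one score per filter
keep_filters = np.sort(np.argsort(scores)[-8:])         # retain the top 8 filters
conv_w = conv_w[keep_filters]                           # prune conv filters
fc_w = fc_w[:, keep_filters]                            # prune matching fc columns

print(conv_w.shape, fc_w.shape)                         # (8, 3, 3, 3) (5, 8)
```

Both the filter tensor and the classifier shrink consistently, which is where the test-time savings in compute, energy, and latency would come from.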