A pruning-then-quantization model compression framework for facial emotion recognition
Han Sun; Wei Shao; Tao Li; Jiayu Zhao; Weitao Xu; Linqi Song
Intelligent and Converged Networks, vol. 4, no. 3, pp. 225-236, September 2023. DOI: 10.23919/ICN.2023.0020
PDF: https://ieeexplore.ieee.org/iel7/9195266/10286548/10286552.pdf
Abstract: Facial emotion recognition has achieved great success with the help of large neural models, yet these models are often too large to deploy in practical settings. To bridge this gap, we combine two mainstream model compression methods, pruning and quantization, and propose a pruning-then-quantization framework that compresses neural models for facial emotion recognition tasks. Experiments on three datasets show that our framework achieves a high compression ratio while largely preserving the model's accuracy. In addition, we analyze the layer-wise compression performance of the proposed framework to explore its effect and adaptability at the level of fine-grained modules.
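Since the abstract only names the two stages, the following is a minimal sketch of what a pruning-then-quantization pipeline can look like using standard PyTorch utilities. The L1 magnitude-pruning criterion, the 50% sparsity level, dynamic int8 quantization, and the toy 48x48 grayscale classifier are all illustrative assumptions, not the paper's actual method.

```python
# Minimal prune-then-quantize sketch (assumed details, not the paper's method).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small classifier standing in for a facial-emotion model
# (e.g., 7 output classes for the basic emotion categories).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(48 * 48, 256),
    nn.ReLU(),
    nn.Linear(256, 7),
)

# Stage 1: pruning -- zero out the smallest-magnitude weights in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Stage 2: quantization -- convert the pruned weights to int8.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model runs inference as usual.
logits = compressed(torch.randn(1, 1, 48, 48))
print(logits.shape)  # torch.Size([1, 7])
```

The ordering matters in a pipeline like this: pruning first produces the sparsity pattern on full-precision weights, and quantization then encodes the surviving weights compactly, which is consistent with the "pruning-then-quantization" naming in the title.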