A pruning-then-quantization model compression framework for facial emotion recognition

Han Sun, Wei Shao, Tao Li, Jiayu Zhao, Weitao Xu, Linqi Song
{"title":"A pruning-then-quantization model compression framework for facial emotion recognition","authors":"Han Sun;Wei Shao;Tao Li;Jiayu Zhao;Weitao Xu;Linqi Song","doi":"10.23919/ICN.2023.0020","DOIUrl":null,"url":null,"abstract":"Facial emotion recognition achieves great success with the help of large neural models but also fails to be applied in practical situations due to the large model size of neural methods. To bridge this gap, in this paper, we combine two mainstream model compression methods (pruning and quantization) together, and propose a pruning-then-quantization framework to compress the neural models for facial emotion recognition tasks. Experiments on three datasets show that our model could achieve a high model compression ratio and maintain the model's high performance well. Besides, We analyze the layer-wise compression performance of our proposed framework to explore its effect and adaptability in fine-grained modules.","PeriodicalId":100681,"journal":{"name":"Intelligent and Converged Networks","volume":"4 3","pages":"225-236"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/9195266/10286548/10286552.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent and Converged Networks","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10286552/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Facial emotion recognition has achieved great success with the help of large neural models, yet these models are difficult to deploy in practical settings because of their size. To bridge this gap, we combine two mainstream model compression methods, pruning and quantization, and propose a pruning-then-quantization framework to compress neural models for facial emotion recognition tasks. Experiments on three datasets show that our framework achieves a high compression ratio while largely preserving model accuracy. In addition, we analyze the layer-wise compression performance of the proposed framework to explore its effect and adaptability at the level of fine-grained modules.
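To make the pipeline concrete, the sketch below shows the general prune-then-quantize ordering the abstract describes, using standard PyTorch utilities on a small stand-in CNN. This is not the authors' implementation: the model architecture, the 50% pruning ratio, the 7 emotion classes, and the use of magnitude pruning plus dynamic int8 quantization are all illustrative assumptions.

```python
# Minimal sketch of a prune-then-quantize pipeline (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical stand-in model; the paper's actual architecture is not specified here.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 7),  # 7 basic emotion classes (assumed)
)

# Step 1: pruning -- remove 50% of the smallest-magnitude weights
# in every Conv/Linear layer, then make the pruning permanent.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# Step 2: quantization -- post-training dynamic int8 quantization of the
# remaining Linear layers (dynamic quantization in PyTorch covers nn.Linear).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity check on a dummy 48x48 grayscale face crop.
x = torch.randn(1, 1, 48, 48)
print(quantized(x).shape)  # -> torch.Size([1, 7])
```

In practice the pruned model would be fine-tuned before quantization to recover accuracy; the sketch only fixes the ordering (prune first, then quantize) named in the framework's title.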