An explainable fast deep neural network for emotion recognition

IF 4.9 · Region 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL
Francesco Di Luzio, Antonello Rosato, Massimo Panella
{"title":"用于情感识别的可解释快速深度神经网络","authors":"Francesco Di Luzio,&nbsp;Antonello Rosato,&nbsp;Massimo Panella","doi":"10.1016/j.bspc.2024.107177","DOIUrl":null,"url":null,"abstract":"<div><div>In the context of artificial intelligence, the inherent human attribute of engaging in logical reasoning to facilitate decision-making is mirrored by the concept of explainability, which pertains to the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of input features to binary classifiers for emotion recognition, with face landmarks detection, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper consists of the employment of an innovative explainable artificial intelligence algorithm to understand the crucial facial landmarks movements typical of emotional feeling, using this information for improving the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and the position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, trained initially with a complete set of facial landmarks, which are progressively reduced basing the decision on a suitable optimization procedure. The obtained results prove the robustness of the proposed explainable approach in terms of understanding the relevance of the different facial points for the different emotions, improving the classification accuracy and diminishing the computational cost.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 107177"},"PeriodicalIF":4.9000,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An explainable fast deep neural network for emotion recognition\",\"authors\":\"Francesco Di Luzio,&nbsp;Antonello Rosato,&nbsp;Massimo Panella\",\"doi\":\"10.1016/j.bspc.2024.107177\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In the context of artificial intelligence, the inherent human attribute of engaging in logical reasoning to facilitate decision-making is mirrored by the concept of explainability, which pertains to the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of input features to binary classifiers for emotion recognition, with face landmarks detection, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper consists of the employment of an innovative explainable artificial intelligence algorithm to understand the crucial facial landmarks movements typical of emotional feeling, using this information for improving the performance of deep learning-based emotion classifiers. 
By means of explainability, we can optimize the number and the position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, trained initially with a complete set of facial landmarks, which are progressively reduced basing the decision on a suitable optimization procedure. The obtained results prove the robustness of the proposed explainable approach in terms of understanding the relevance of the different facial points for the different emotions, improving the classification accuracy and diminishing the computational cost.</div></div>\",\"PeriodicalId\":55362,\"journal\":{\"name\":\"Biomedical Signal Processing and Control\",\"volume\":\"100 \",\"pages\":\"Article 107177\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-11-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Signal Processing and Control\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1746809424012357\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809424012357","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract


In the context of artificial intelligence, the inherent human practice of logical reasoning to support decision-making is mirrored by the concept of explainability: the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. Using an improved version of the Integrated Gradients explainability method, we investigate the optimization of the input features fed to binary classifiers for emotion recognition based on facial landmark detection. The main contribution of this paper is the employment of an innovative explainable artificial intelligence algorithm to identify the facial landmark movements most characteristic of each emotion, and the use of this information to improve the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, trained initially with a complete set of facial landmarks that is then progressively reduced by a suitable optimization procedure. The results demonstrate the robustness of the proposed explainable approach: it identifies the relevance of different facial points to different emotions, improves classification accuracy, and reduces computational cost.
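
For readers who want to experiment with the underlying attribution technique, the following is a minimal sketch of the standard Integrated Gradients method applied to a flattened landmark-feature vector. It is not the authors' improved variant, whose details are in the paper; it assumes PyTorch, a model exposed as a callable returning the positive-class logit, and an all-zeros baseline, all of which are illustrative assumptions.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate Integrated Gradients attributions for one input.

    model    : callable mapping a (batch, n_features) tensor to (batch,) logits
    x        : (1, n_features) tensor of flattened landmark coordinates
    baseline : reference input x'; an all-zeros vector by default
    steps    : resolution of the Riemann approximation of the path integral
    """
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Points along the straight-line path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)        # (steps, n_features)
    path.requires_grad_(True)
    # Gradient of the class logit at every point on the path.
    logits = model(path)
    grads = torch.autograd.grad(logits.sum(), path)[0]
    # IG attribution: (x - x') times the average path gradient, per feature.
    return ((x - baseline) * grads.mean(dim=0, keepdim=True)).squeeze(0)
```

Features whose attribution has large magnitude are the coordinates that most influenced the binary decision; per-landmark relevance follows by aggregating the coordinates of each landmark, as in the next sketch.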
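
Likewise, here is a hedged sketch of the progressive landmark-reduction loop the abstract describes: per-coordinate attributions are collapsed into one relevance score per landmark, and the lowest-scoring landmarks are dropped in stages while a freshly trained binary classifier is evaluated at each stage. The reduction step, the minimum number of kept landmarks, and the train_and_eval callable are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

def landmark_scores(ig_attr, n_landmarks, coords=2):
    """Collapse per-coordinate attributions (e.g. x and y of each landmark)
    into a single relevance score per landmark (mean absolute value)."""
    return np.abs(ig_attr).reshape(n_landmarks, coords).mean(axis=1)

def progressive_reduction(X, y, scores, train_and_eval,
                          coords=2, step=0.1, min_keep=10):
    """Drop the lowest-scoring landmarks in stages, retraining and
    re-evaluating a binary emotion classifier after each reduction.

    X              : (n_samples, n_landmarks * coords) feature matrix
    scores         : per-landmark relevance, e.g. from landmark_scores()
    train_and_eval : callable(X_subset, y) -> accuracy (hypothetical,
                     user-supplied; not specified in the paper)
    """
    order = np.argsort(scores)[::-1]        # most relevant landmarks first
    keep, history = len(scores), []
    while keep >= min_keep:
        kept = np.sort(order[:keep])
        # Expand each kept landmark index to its coordinate columns.
        cols = np.sort(np.concatenate([kept * coords + k
                                       for k in range(coords)]))
        history.append((keep, train_and_eval(X[:, cols], y)))
        keep = int(keep * (1.0 - step))     # shrink the landmark set
    return history
```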
Source journal
Biomedical Signal Processing and Control (Engineering Technology: Biomedical Engineering)
CiteScore: 9.80
Self-citation rate: 13.70%
Annual articles: 822
Review time: 4 months
Journal description: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management. The journal reflects the main areas in which these methods are being used and developed at the interface of engineering and clinical science. Its scope includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.