Real-time microexpression recognition in educational scenarios using a dual-branch continuous attention network

Yan Lv, Meng Ning, Fan Zhou, Pengfei Lv, Peiying Zhang, Jian Wang
{"title":"Real-time microexpression recognition in educational scenarios using a dual-branch continuous attention network","authors":"Yan Lv, Meng Ning, Fan Zhou, Pengfei Lv, Peiying Zhang, Jian Wang","doi":"10.1007/s11227-024-06455-5","DOIUrl":null,"url":null,"abstract":"<p>Facial microexpressions (MEs) are involuntary, fleeting, and subtle facial muscle movements that reveal a person’s true emotional state and inner experiences. Microexpression recognition has been applied in various disciplines and fields, particularly in educational settings, where it can help educators better understand students’ emotional states and learning experiences, thus providing personalized teaching support and guidance. However, existing microexpression recognition datasets tailored for educational scenarios are limited. Moreover, microexpression recognition classifiers for educational settings not only require high recognition accuracy but also real-time performance. To this end, we provide a student behavior dataset specifically for research on microexpression and action recognition in educational scenarios. Moreover, we innovatively propose a lightweight dual-branch continuous attention network for microexpression recognition research. Specifically, for the student behavior dataset, we collect data on students” behaviors in real classroom scenarios. We categorize student microexpressions into two types: serious and non-serious. Additionally, we classify student classroom behaviors into several categories: attentive listening, note-taking, yawning, looking around, and nodding. Regarding the dual-branch continuous attention network, unlike most methods that extract features directly from entire video frames, which include abundant identity information, we focus on modeling subtle information from facial regions by using optical flow and motion information from keyframes as input. We extensively evaluate our proposed method on publicly available datasets such as CASME II and SAMM, as well as our provided dataset. The experimental results demonstrate that our proposed method achieves state-of-the-art performance in the field of microexpression recognition and provides a competitive dataset for analyzing student classroom behaviors in educational scenarios. We will provide the GitHub link upon acceptance of the paper, and we will make the dataset available to any applicant under a licensed agreement.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Supercomputing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11227-024-06455-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Facial microexpressions (MEs) are involuntary, fleeting, and subtle facial muscle movements that reveal a person’s true emotional state and inner experiences. Microexpression recognition has been applied in various disciplines and fields, particularly in educational settings, where it can help educators better understand students’ emotional states and learning experiences, thus providing personalized teaching support and guidance. However, existing microexpression recognition datasets tailored for educational scenarios are limited. Moreover, microexpression recognition classifiers for educational settings require not only high recognition accuracy but also real-time performance. To this end, we provide a student behavior dataset specifically for research on microexpression and action recognition in educational scenarios, and we propose a novel lightweight dual-branch continuous attention network for microexpression recognition. Specifically, for the student behavior dataset, we collect data on students’ behaviors in real classroom scenarios. We categorize student microexpressions into two types: serious and non-serious. Additionally, we classify student classroom behaviors into several categories: attentive listening, note-taking, yawning, looking around, and nodding. Regarding the dual-branch continuous attention network, unlike most methods that extract features directly from entire video frames, which include abundant identity information, we focus on modeling subtle information from facial regions by using optical flow and motion information from keyframes as input. We extensively evaluate our proposed method on publicly available datasets such as CASME II and SAMM, as well as our provided dataset. The experimental results demonstrate that our proposed method achieves state-of-the-art performance in the field of microexpression recognition and provides a competitive dataset for analyzing student classroom behaviors in educational scenarios. We will provide the GitHub link upon acceptance of the paper, and we will make the dataset available to any applicant under a license agreement.
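The abstract does not give the architecture in detail, but the described pipeline (motion information between keyframes feeding a two-branch network rather than whole video frames) can be sketched as follows. This is a minimal illustration assuming Farneback optical flow between onset and apex keyframes and two small convolutional branches fused before a classifier; the names (keyframe_optical_flow, DualBranchNet) and all layer choices are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of a keyframe-flow, dual-branch classifier.
# NOT the paper's actual network; layer sizes and names are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn


def keyframe_optical_flow(onset_bgr: np.ndarray, apex_bgr: np.ndarray) -> np.ndarray:
    """Dense Farneback optical flow between onset and apex keyframes, shape (H, W, 2)."""
    onset = cv2.cvtColor(onset_bgr, cv2.COLOR_BGR2GRAY)
    apex = cv2.cvtColor(apex_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(
        onset, apex, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )


class DualBranchNet(nn.Module):
    """Two lightweight convolutional branches (flow field + apex frame), fused before the classifier."""

    def __init__(self, num_classes: int = 2):
        super().__init__()

        def branch(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.flow_branch = branch(2)    # horizontal/vertical flow components
        self.frame_branch = branch(3)   # RGB apex keyframe
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, flow: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        # Concatenate the pooled features of both branches, then classify.
        fused = torch.cat([self.flow_branch(flow), self.frame_branch(frame)], dim=1)
        return self.classifier(fused)
```

Feeding only the flow field and the apex keyframe keeps the input compact compared with processing every video frame, which is consistent with the abstract’s emphasis on a lightweight model with real-time performance.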

