Real-time Multi-CNN based Emotion Recognition System for Evaluating Museum Visitors' Satisfaction

IF 2.1 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Do Hyung Kwon, Jeong Min Yu
Journal: ACM Journal on Computing and Cultural Heritage
DOI: 10.1145/3631123
Publication date: 2023-10-30
Publication type: Journal Article
Citations: 0

Abstract

Real-time Multi-CNN based Emotion Recognition System for Evaluating Museum Visitors’ Satisfaction
Conventional studies on the satisfaction of museum visitors focus on collecting information through surveys to provide a one-way service to visitors, and thus it is impossible to obtain feedback on the real-time satisfaction of visitors who are experiencing the museum exhibition program. In addition, museum practitioners lack research on automated ways to evaluate a produced content program's life cycle and its appropriateness. To overcome these problems, we propose a novel multi-convolutional neural network (CNN), called VimoNet, which is able to recognize visitors emotions automatically in real-time based on their facial expressions and body gestures. Furthermore, we design a user preference model of content and a framework to obtain feedback on content improvement for providing personalized digital cultural heritage content to visitors. Specifically, we define seven emotions of visitors and build a dataset of visitor facial expressions and gestures with respect to the emotions. Using the dataset, we proceed with feature fusion of face and gesture images trained on the DenseNet-201 and VGG-16 models for generating a combined emotion recognition model. From the results of the experiment, VimoNet achieved a classification accuracy of 84.10%, providing 7.60% and 14.31% improvement, respectively, over a single face and body gesture-based method of emotion classification performance. It is thus possible to automatically capture the emotions of museum visitors via VimoNet, and we confirm its feasibility through a case study with respect to digital content of cultural heritage.
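The abstract describes VimoNet's core mechanism: features extracted from face images (DenseNet-201) and body-gesture images (VGG-16) are fused and jointly classified into seven emotions. The following is a minimal sketch of that late-fusion idea, with the two CNN backbones abstracted as fixed-size feature vectors; the feature dimensions (1920 for DenseNet-201's pooled features, 4096 for VGG-16's fc layer), the emotion label names, and the linear fusion head are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

# Seven emotion classes, as stated in the paper; the specific label
# names here are assumed, not taken from the original.
EMOTIONS = ["happy", "sad", "angry", "surprised", "bored", "interested", "neutral"]

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(face_feat, gesture_feat, W, b):
    """Concatenate the two modality features and apply a linear head."""
    fused = np.concatenate([face_feat, gesture_feat])  # shape (1920 + 4096,)
    return softmax(W @ fused + b)                      # shape (7,) probabilities

# Stand-in feature vectors, as would come from the two CNN backbones.
face_feat = rng.standard_normal(1920)     # e.g. DenseNet-201 pooled features
gesture_feat = rng.standard_normal(4096)  # e.g. VGG-16 fc-layer features

# Randomly initialized fusion head; in the actual system this would be
# trained on the visitor facial-expression/gesture dataset.
W = rng.standard_normal((len(EMOTIONS), 1920 + 4096)) * 0.01
b = np.zeros(len(EMOTIONS))

probs = fuse_and_classify(face_feat, gesture_feat, W, b)
prediction = EMOTIONS[int(np.argmax(probs))]
```

Fusing at the feature level (rather than averaging the two models' predictions) lets the classifier learn cross-modal interactions, which is consistent with the reported gain of the combined model over either single-modality baseline.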
Source journal: ACM Journal on Computing and Cultural Heritage (Arts and Humanities: Conservation)
CiteScore: 4.60
Self-citation rate: 8.30%
Articles per year: 90
Journal description: ACM Journal on Computing and Cultural Heritage (JOCCH) publishes papers of significant and lasting value in all areas relating to the use of information and communication technologies (ICT) in support of Cultural Heritage. The journal encourages the submission of manuscripts that demonstrate innovative use of technology for the discovery, analysis, interpretation and presentation of cultural material, as well as manuscripts that illustrate applications in the Cultural Heritage sector that challenge the computational technologies and suggest new research opportunities in computer science.