MODEL OF MIMIC EXPRESSIONS OF HUMAN EMOTIONAL STATES FOR THE VIDEO SURVEILLANCE SYSTEMS

O. Kalyta
DOI: 10.31891/2307-5732-2023-319-1-143-145 (https://doi.org/10.31891/2307-5732-2023-319-1-143-145)
Journal: Herald of Khmelnytskyi National University. Technical sciences, vol. 206, no. 1
Published: 2023-04-27 (Journal Article)
Citations: 0

Abstract

This paper proposes a novel computational model for generating facial expressions that mimic human emotional states, with the aim of creating a system that produces realistic facial expressions for human-robot interaction. The model is based on the Facial Action Coding System (FACS), a widely used tool for describing facial expressions; FACS is used here to identify the muscles involved in each expression and the degree to which each muscle is activated. Several machine-learning techniques were applied to learn the relationships between facial muscle activations and emotional states; in particular, hyperplane classification was employed to separate the facial expressions representing the major emotional states. The model's primary advantage is its low computational complexity, which allows it to recognize changes in human emotional state from facial expressions without specialized equipment, even with low-resolution or long-distance video cameras. The approach is intended for control systems of various kinds, including security systems and driver-monitoring systems in vehicles. Experiments showed that the model can generate facial expressions similar to those produced by humans, and that human observers recognized these expressions as conveying the intended emotional state. The authors also examined how different factors affect the generation of facial expressions. Overall, the proposed model is a promising approach to generating realistic facial expressions that mimic human emotional states and could help improve security compliance in sensitive environments. However, potential ethical issues must be carefully considered and managed to ensure the responsible use of this technology.
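The hyperplane classification mentioned in the abstract can be sketched as follows. This is an illustrative example, not the authors' implementation: it assumes each face is described by a vector of FACS action-unit (AU) intensities, and a perceptron learns a separating hyperplane w·x + b = 0 between two emotional states. The AU choices, training data, and labels are hypothetical.

```python
import numpy as np

# Hypothetical training data: rows are AU intensity vectors in [0, 1]
# [AU4 brow lowerer, AU12 lip-corner puller, AU15 lip-corner depressor].
X = np.array([
    [0.1, 0.9, 0.0],   # strong smile -> happy
    [0.0, 0.8, 0.1],
    [0.9, 0.0, 0.7],   # lowered brows, depressed lip corners -> negative
    [0.8, 0.1, 0.8],
])
y = np.array([0, 0, 1, 1])  # 0 = "happy", 1 = "negative"

# Minimal perceptron: iteratively adjust (w, b) until the hyperplane
# w @ x + b = 0 separates the two classes.
w, b = np.zeros(3), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1 if w @ xi + b > 0 else 0
        w += (yi - pred) * xi   # move hyperplane toward misclassified point
        b += (yi - pred)

def classify(au_vector):
    """Label an AU intensity vector by its side of the hyperplane."""
    return "negative" if w @ au_vector + b > 0 else "happy"

print(classify(np.array([0.05, 0.85, 0.0])))  # smile-like AUs -> happy
print(classify(np.array([0.90, 0.00, 0.80])))  # frown-like AUs -> negative
```

Because the decision rule is a single dot product per frame, this kind of classifier matches the low-computational-complexity claim in the abstract: it remains cheap even for low-resolution or long-distance camera input, once AU intensities have been estimated.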