Identification of micro expressions in a video sequence by Euclidean distance of the facial contours
S. Kherchaoui, A. Houacine
Web Intelligence (JCR Q4, Computer Science, Artificial Intelligence), published 2023-06-05
DOI: https://doi.org/10.3233/web-220010
Citations: 0
Abstract
This paper presents an automatic facial micro-expression recognition (FMER) system for video sequences. Identification and classification are performed on the basic expressions: happiness, surprise, fear, disgust, sadness, anger, and the neutral state. The system integrates three main steps. The first step consists of face detection and tracking over three consecutive frames. In the second step, facial contour extraction is performed on each frame to build Euclidean distance maps. The last task is classification, which is achieved with two methods: support vector machines (SVM) and convolutional neural networks (CNN). Experimental evaluation of the proposed system for facial micro-expression identification is performed on the well-known Cohn-Kanade and CASME II databases, with six and seven facial expressions for the respective classification methods.
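The distance-map step of the pipeline above can be sketched minimally. The snippet below assumes contour landmark points have already been extracted (the paper's detection, tracking, and contour-extraction stages are not reproduced here), and it uses a 1-nearest-neighbour classifier as a hypothetical stand-in for the paper's SVM/CNN stage:

```python
import math

def distance_map(landmarks):
    """Build a Euclidean distance map from facial contour points.

    `landmarks` is a list of (x, y) points sampled along the facial
    contours (a hypothetical input format; the paper does not specify
    its exact representation). Returns the upper-triangle pairwise
    distances as a flat feature vector.
    """
    feats = []
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            dx = landmarks[i][0] - landmarks[j][0]
            dy = landmarks[i][1] - landmarks[j][1]
            feats.append(math.hypot(dx, dy))
    return feats

def classify_1nn(query, labelled):
    """Assign the label of the nearest training feature vector.

    `labelled` is a list of (feature_vector, label) pairs. This is a
    toy stand-in, not the SVM or CNN classifiers used in the paper.
    """
    return min(labelled, key=lambda pair: math.dist(query, pair[0]))[1]
```

For example, `distance_map([(0, 0), (3, 4), (0, 4)])` yields the three pairwise distances `[5.0, 4.0, 3.0]`; one such vector per frame would then be fed to the classifier.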
Journal description:
Web Intelligence (WI) is an official journal of the Web Intelligence Consortium (WIC), an international organization dedicated to promoting collaborative scientific research and industrial development in the era of Web intelligence. WI seeks to collaborate with major societies and international conferences in the field. WI is a peer-reviewed journal that publishes four issues a year, in both online and print form. WI aims to achieve a multi-disciplinary balance between research advances in theories and methods usually associated with Collective Intelligence, Data Science, Human-Centric Computing, Knowledge Management, and Network Science. It is committed to publishing research that both deepens the understanding of the computational, logical, cognitive, physical, and social foundations of the future Web, and enables the development and application of technologies based on Web intelligence. The journal features high-quality, original research papers (including state-of-the-art reviews), brief papers, and letters in all theoretical and technology areas that make up the field of WI. Papers should clearly focus on some of the following areas of interest:
a. Collective Intelligence[...]
b. Data Science[...]
c. Human-Centric Computing[...]
d. Knowledge Management[...]
e. Network Science[...]