Automated Facial Expression Recognition System

Andrew H. Ryan, J. F. Cohn, S. Lucey, Jason M. Saragih, P. Lucey, F. de la Torre, Adam Rossi
{"title":"自动面部表情识别系统","authors":"Andrew H. Ryan, J. F. Cohn, S. Lucey, Jason M. Saragih, P. Lucey, F. de la Torre, Adam Rossi","doi":"10.1109/CCST.2009.5335546","DOIUrl":null,"url":null,"abstract":"Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop “non-intrusive” technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion (figure 1), providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.","PeriodicalId":117285,"journal":{"name":"43rd Annual 2009 International Carnahan Conference on Security Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"111","resultStr":"{\"title\":\"Automated Facial Expression Recognition System\",\"authors\":\"Andrew H. Ryan, J. F. Cohn, S. Lucey, Jason M. Saragih, P. Lucey, F. de la Torre, Adam Rossi\",\"doi\":\"10.1109/CCST.2009.5335546\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop “non-intrusive” technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. 
The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion (figure 1), providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.\",\"PeriodicalId\":117285,\"journal\":{\"name\":\"43rd Annual 2009 International Carnahan Conference on Security Technology\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"111\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"43rd Annual 2009 International Carnahan Conference on Security Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCST.2009.5335546\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"43rd Annual 2009 International Carnahan Conference on Security Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCST.2009.5335546","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 111

Abstract

Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop “non-intrusive” technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion (figure 1), providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.
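The abstract describes AFERS as automating FACS coding and mapping facial behavior to the seven universal expressions of emotion. As a rough illustration of how FACS-based classification can work in principle, the sketch below maps a set of detected action units (AUs) to an emotion label using commonly cited EMFACS-style prototype heuristics. The prototype table, the `classify_expression` function, and the overlap scoring are illustrative assumptions for this page only, not the method implemented by AFERS or the CMU/PITT AFA system.

```python
# Illustrative sketch only: map a set of detected FACS action units (AUs) to one of
# the seven universal emotions via EMFACS-style prototype matching. AU detection from
# video frames is assumed to be handled by an upstream component (not shown here).

# Hypothetical prototype table: emotion -> AUs typically present in that expression.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},             # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},          # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},       # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},       # brow lowerer + lid tightener + lip tightener
    "disgust":   {9, 15, 16},         # nose wrinkler + lip corner depressor + lower lip depressor
    "contempt":  {12, 14},            # unilateral lip corner pull + dimpler (laterality ignored)
}

def classify_expression(detected_aus: set[int]) -> tuple[str, float]:
    """Return the best-matching emotion label and a crude Jaccard overlap score in [0, 1]."""
    best_label, best_score = "neutral", 0.0
    for label, prototype in EMOTION_PROTOTYPES.items():
        overlap = len(detected_aus & prototype) / len(detected_aus | prototype)
        if overlap > best_score:
            best_label, best_score = label, overlap
    return best_label, best_score

if __name__ == "__main__":
    print(classify_expression({6, 12}))        # ('happiness', 1.0)
    print(classify_expression({1, 2, 5, 26}))  # ('surprise', 1.0)
```

A real system would score AU intensities per frame rather than treating AUs as a binary set, but the prototype-matching idea above is the essence of rule-based FACS-to-emotion mapping.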