Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli.

IF 4.6 · CAS Region 2 (Psychology) · JCR Q1 (Psychology, Experimental)
Behavior Research Methods · Pub Date: 2024-10-01 · Epub Date: 2024-06-04 · DOI: 10.3758/s13428-024-02443-y
Casey Becker, Russell Conduit, Philippe A Chouinard, Robin Laycock
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362322/pdf/
Citations: 0

Abstract

Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two different approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense when compared with videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, participants perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. The findings also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.


Source journal metrics: CiteScore 10.30 · Self-citation rate: 9.30% · Articles published: 266
Journal description: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.