Visual deception in online dating: How gender shapes AI-generated image detection

Lidor Ivan
{"title":"在线约会中的视觉欺骗:性别如何影响人工智能生成的图像检测","authors":"Lidor Ivan","doi":"10.1016/j.chbah.2025.100208","DOIUrl":null,"url":null,"abstract":"<div><div>The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.</div><div>An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying <em>visual inconsistencies</em>, signs of <em>perfection</em>, and <em>technical flaws</em>. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “<em>Learning Loop</em>”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100208"},"PeriodicalIF":0.0000,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visual deception in online dating: How gender shapes AI-generated image detection\",\"authors\":\"Lidor Ivan\",\"doi\":\"10.1016/j.chbah.2025.100208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.</div><div>An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying <em>visual inconsistencies</em>, signs of <em>perfection</em>, and <em>technical flaws</em>. These heuristics often failed to keep pace with improving AI realism. 
To conceptualize this process, the study introduces the “<em>Learning Loop</em>”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"6 \",\"pages\":\"Article 100208\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000921\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000921","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.
An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated each photo's perceived origin, and provided open-ended justifications. Overall, detection accuracy for AI-generated images was low, falling below chance. Women outperformed men in identifying AI-generated images but were also more likely to misclassify real ones, suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying visual inconsistencies, signs of perfection, and technical flaws. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the "Learning Loop": a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.
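
To make the reported pattern more concrete, the sketch below shows one conventional way such detection data could be analyzed. This is an illustration, not the paper's reported method: a binomial test of overall accuracy against the 50% chance level, plus signal-detection measures that separate discrimination ability (d') from response bias (criterion), which is the distinction between genuinely spotting AI images and simply being more skeptical of every photo. All function names and counts below are hypothetical.

```python
from scipy.stats import binomtest, norm

# Hypothetical illustration only: the paper does not report analysis code,
# and every count below is invented for the sketch.

def detection_vs_chance(n_correct, n_trials):
    """Test whether overall detection accuracy differs from the 50% chance level."""
    result = binomtest(n_correct, n_trials, p=0.5)
    return n_correct / n_trials, result.pvalue

def sensitivity_and_bias(hits, misses, false_alarms, correct_rejections):
    """Signal-detection measures: d' (discrimination) and criterion c (response bias).

    Here a 'hit' is an AI-generated photo correctly flagged as AI, and a
    'false alarm' is a real photo incorrectly flagged as AI. A log-linear
    correction keeps the z-scores finite when a rate reaches 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

if __name__ == "__main__":
    # Below-chance overall accuracy (invented numbers).
    acc, p = detection_vs_chance(n_correct=380, n_trials=800)
    print(f"accuracy = {acc:.2f}, p vs. chance = {p:.4f}")

    # A pattern like the one described for women: more AI photos flagged,
    # but more real photos misclassified too. The slightly negative criterion
    # reflects a liberal bias toward answering "AI", independent of d'.
    d, c = sensitivity_and_bias(hits=60, misses=40, false_alarms=45, correct_rejections=55)
    print(f"d' = {d:.2f}, criterion = {c:.2f}")
```

Framed this way, the gender result reads as a shift in response bias as much as a difference in sensitivity, which is the interpretation the abstract hints at with "heightened, but sometimes misplaced, skepticism."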