Visual deception in online dating: How gender shapes AI-generated image detection

Lidor Ivan
Computers in Human Behavior: Artificial Humans, Volume 6, Article 100208
Published: 2025-09-12
DOI: 10.1016/j.chbah.2025.100208
The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.
An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated each photo's perceived origin, and provided open-ended justifications. Overall, detection accuracy for AI-generated images was low, falling below chance. Women outperformed men in identifying AI-generated images but were also more likely to misclassify real ones, suggesting heightened, though sometimes misplaced, skepticism. Participants relied on three main strategies: identifying visual inconsistencies, signs of perfection, and technical flaws. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the "Learning Loop": a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.