{"title":"人工智能人物角色在欺骗检测实验中的有效性","authors":"David M Markowitz, Timothy R Levine","doi":"10.1093/joc/jqaf034","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research. Thus, it is important to consider how well these tools can inform both enterprises. We report 12 studies, accessed through the Viewpoints.ai research platform, where AI (gemini-1.5-flash) made veracity judgments of humans. We systematically varied the nature and duration of the communication, modality, truth-lie base rate, and AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased by judging more than three-quarters of interviewees as cheating liars. In assessing interviews where humans perform at rates over 70%, accuracy plummeted to 15.9% with an ecological base-rate. AI yielded results different from prior human studies and therefore, we caution using certain large language models for lie detection.","PeriodicalId":48410,"journal":{"name":"Journal of Communication","volume":"27 1","pages":""},"PeriodicalIF":5.5000,"publicationDate":"2025-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The (in)efficacy of AI personas in deception detection experiments\",\"authors\":\"David M Markowitz, Timothy R Levine\",\"doi\":\"10.1093/joc/jqaf034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research. Thus, it is important to consider how well these tools can inform both enterprises. We report 12 studies, accessed through the Viewpoints.ai research platform, where AI (gemini-1.5-flash) made veracity judgments of humans. We systematically varied the nature and duration of the communication, modality, truth-lie base rate, and AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased by judging more than three-quarters of interviewees as cheating liars. In assessing interviews where humans perform at rates over 70%, accuracy plummeted to 15.9% with an ecological base-rate. 
AI yielded results different from prior human studies and therefore, we caution using certain large language models for lie detection.\",\"PeriodicalId\":48410,\"journal\":{\"name\":\"Journal of Communication\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2025-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Communication\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1093/joc/jqaf034\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communication","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1093/joc/jqaf034","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
The (in)efficacy of AI personas in deception detection experiments
Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research. It is therefore important to consider how well these tools can inform both enterprises. We report 12 studies, accessed through the Viewpoints.ai research platform, in which AI (gemini-1.5-flash) made veracity judgments of humans. We systematically varied the nature and duration of the communication, the modality, the truth-lie base rate, and the AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased, judging more than three-quarters of interviewees to be cheating liars. In assessing interviews on which humans perform at rates above 70%, accuracy plummeted to 15.9% under an ecological base rate. AI yielded results that differ from prior human studies; we therefore caution against using certain large language models for lie detection.
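The sharp drop in accuracy at an ecological base rate follows from how a biased judge's errors are weighted by the proportion of honest versus deceptive messages. The sketch below is a hypothetical illustration only, not the authors' analysis code, and uses made-up detection rates rather than the study's figures; it applies the standard decomposition of overall accuracy into truth-detection and lie-detection rates weighted by the base rate.

```python
# Hypothetical sketch (not the authors' code): how a judge's bias interacts with
# the truth-lie base rate. Overall accuracy decomposes as
#   accuracy = base_rate * P(judged truth | truth) + (1 - base_rate) * P(judged lie | lie)

def expected_accuracy(base_rate_truth: float,
                      hit_rate_truths: float,
                      hit_rate_lies: float) -> float:
    """Expected proportion correct for a given truth base rate.

    base_rate_truth : proportion of messages that are honest
    hit_rate_truths : probability a truth is correctly judged as a truth
    hit_rate_lies   : probability a lie is correctly judged as a lie
    """
    return (base_rate_truth * hit_rate_truths
            + (1.0 - base_rate_truth) * hit_rate_lies)

# Illustrative (made-up) numbers: a strongly lie-biased judge looks passable when
# half the pool is lying, but its accuracy collapses when most people are honest,
# as in ecological settings.
lie_biased = dict(hit_rate_truths=0.25, hit_rate_lies=0.80)
for base_rate in (0.50, 0.90):
    acc = expected_accuracy(base_rate, **lie_biased)
    print(f"truth base rate {base_rate:.2f}: expected accuracy {acc:.1%}")
```

With these assumed rates, accuracy falls from about 52.5% at a 50/50 base rate to about 30.5% when 90% of messages are honest, which illustrates (but does not reproduce) the pattern the abstract describes.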
Journal introduction:
The Journal of Communication, the flagship journal of the International Communication Association, is a vital publication for communication specialists and policymakers alike. Focusing on communication research, practice, policy, and theory, it delivers the latest and most significant findings in communication studies. The journal also includes an extensive book review section and symposia of selected studies on current issues. JoC publishes top-quality scholarship on all aspects of communication, with a particular interest in research that transcends disciplinary and sub-field boundaries.