{"title":"Do people trust humans more than ChatGPT?","authors":"Joy Buchanan , William Hickman","doi":"10.1016/j.socec.2024.102239","DOIUrl":null,"url":null,"abstract":"<div><p>We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Informed participants are, overall, more likely to choose costly fact-checking. These outcomes suggest that trust in AI-generated content is context-dependent.</p></div>","PeriodicalId":51637,"journal":{"name":"Journal of Behavioral and Experimental Economics","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Behavioral and Experimental Economics","FirstCategoryId":"96","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214804324000776","RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 0
Abstract
We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Informed participants are, overall, more likely to choose costly fact-checking. These outcomes suggest that trust in AI-generated content is context-dependent.
About the journal:
The Journal of Behavioral and Experimental Economics (formerly the Journal of Socio-Economics) welcomes submissions that address economic topics that also involve issues related to other social sciences, especially psychology, or that use experimental methods of inquiry. Contributions in behavioral economics, experimental economics, economic psychology, and judgment and decision making are especially welcome. The journal is open to different research methodologies, as long as they are relevant to the topic and employed rigorously. Possible methodologies include, for example, experiments, surveys, empirical work, theoretical models, meta-analyses, case studies, and simulation-based analyses. Literature reviews that integrate findings from many studies are also welcome, but they should synthesize the literature in a useful manner and provide a substantial contribution beyond what the reader could get by simply reading the abstracts of the cited papers. In empirical work, it is important that the results are not only statistically significant but also economically significant. A high contribution-to-length ratio is expected of published articles; papers should therefore not be unnecessarily long, and short articles are welcome. Articles should be written in a manner that is intelligible to our generalist readership. Book reviews are generally solicited, but unsolicited reviews will occasionally also be published. Contact the Book Review Editor for related inquiries.