Exploring people's perceptions of LLM-generated advice

Joel Wester, Sander de Jong, Henning Pohl, Niels van Berkel
{"title":"Exploring people's perceptions of LLM-generated advice","authors":"Joel Wester,&nbsp;Sander de Jong,&nbsp;Henning Pohl,&nbsp;Niels van Berkel","doi":"10.1016/j.chbah.2024.100072","DOIUrl":null,"url":null,"abstract":"<div><p>When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs). This can be looking for a recipe, getting help on an essay, or looking for relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perception of LLM-generated advice, and what role diverse user characteristics (i.e., personality and technology readiness) play in shaping their perception. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we run an exploratory study (<em>N</em> = 91), where participants rate advice in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow and find the advice more useful, and deem it more likely that a friend could have given the advice. Lastly, we see that advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious—indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations on <em>likelihood</em>, <em>receptiveness</em>, and <em>what advice</em> they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100072"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400032X/pdfft?md5=ed36391afd77ad6dce64841705e4cd1b&pid=1-s2.0-S294988212400032X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S294988212400032X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

When searching and browsing the web, more and more of the information we encounter is generated or mediated through large language models (LLMs). This can include looking for a recipe, getting help with an essay, or seeking relationship advice. Yet, there is limited understanding of how individuals perceive advice provided by these LLMs. In this paper, we explore people's perceptions of LLM-generated advice and the role that user characteristics (i.e., personality and technology readiness) play in shaping those perceptions. Further, as LLM-generated advice can be difficult to distinguish from human advice, we assess the perceived creepiness of such advice. To investigate this, we ran an exploratory study (N = 91) in which participants rated advice written in different styles (generated by GPT-3.5 Turbo). Notably, our findings suggest that individuals who identify as more agreeable tend to like the advice more and find it more useful. Further, individuals with higher technological insecurity are more likely to follow the advice, find it more useful, and deem it more likely that a friend could have given it. Lastly, advice given in a ‘skeptical’ style was rated most unpredictable, and advice given in a ‘whimsical’ style was rated least malicious, indicating that LLM advice styles influence user perceptions. Our results also provide an overview of people's considerations regarding likelihood, receptiveness, and what advice they are likely to seek from these digital assistants. Based on our results, we provide design takeaways for LLM-generated advice and outline future research directions to further inform the design of LLM-generated advice for support applications targeting people with diverse expectations and needs.
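As context for the study setup, below is a minimal sketch of how advice in a named style could be requested from GPT-3.5 Turbo via the OpenAI chat completions API. The prompt wording, the temperature value, and the generate_advice helper are illustrative assumptions; the paper does not reproduce its exact prompts here, only the style labels (e.g., ‘skeptical’, ‘whimsical’).

```python
# Hypothetical sketch of the stimulus-generation step: asking GPT-3.5 Turbo
# for advice phrased in a given style. Prompt text and parameters are
# assumptions, not the authors' published materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLES = ["skeptical", "whimsical"]  # two of the styles named in the paper


def generate_advice(question: str, style: str) -> str:
    """Request short advice written in the given style."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"You give short advice in a {style} style."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,  # assumed value; the paper does not report parameters
    )
    return response.choices[0].message.content


for style in STYLES:
    print(style, "->",
          generate_advice("How do I handle a difficult coworker?", style))
```

Varying only the style word in the system prompt while holding the question fixed is one way such a style manipulation could be implemented, keeping the advice content comparable across conditions.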
