Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study

Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang
{"title":"Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study","authors":"Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang","doi":"arxiv-2409.09186","DOIUrl":null,"url":null,"abstract":"Language models (LMs) are revolutionizing knowledge retrieval and processing\nin academia. However, concerns regarding their misuse and erroneous outputs,\nsuch as hallucinations and fabrications, are reasons for distrust in LMs within\nacademic communities. Consequently, there is a pressing need to deepen the\nunderstanding of how actual practitioners use and trust these models. There is\na notable gap in quantitative evidence regarding the extent of LM usage, user\ntrust in their outputs, and issues to prioritize for real-world development.\nThis study addresses these gaps by providing data and analysis of LM usage and\ntrust. Specifically, our study surveyed 125 individuals at a private school and\nsecured 88 data points after pre-processing. Through both quantitative analysis\nand qualitative evidence, we found a significant variation in trust levels,\nwhich are strongly related to usage time and frequency. Additionally, we\ndiscover through a polling process that fact-checking is the most critical\nissue limiting usage. These findings inform several actionable insights:\ndistrust can be overcome by providing exposure to the models, policies should\nbe developed that prioritize fact-checking, and user trust can be enhanced by\nincreasing engagement. By addressing these critical gaps, this research not\nonly adds to the understanding of user experiences and trust in LMs but also\ninforms the development of more effective LMs.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"34 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Language models (LMs) are revolutionizing knowledge retrieval and processing in academia. However, concerns regarding their misuse and erroneous outputs, such as hallucinations and fabrications, fuel distrust in LMs within academic communities. Consequently, there is a pressing need to deepen our understanding of how actual practitioners use and trust these models. There is a notable gap in quantitative evidence regarding the extent of LM usage, user trust in their outputs, and the issues to prioritize for real-world development. This study addresses these gaps by providing data and analysis of LM usage and trust. Specifically, we surveyed 125 individuals at a private school and retained 88 valid responses after pre-processing. Through both quantitative analysis and qualitative evidence, we found significant variation in trust levels, strongly related to usage time and frequency. Additionally, we discovered through a polling process that fact-checking is the most critical issue limiting usage. These findings inform several actionable insights: distrust can be overcome by providing exposure to the models, policies should be developed that prioritize fact-checking, and user trust can be enhanced by increasing engagement. By addressing these critical gaps, this research not only adds to the understanding of user experiences and trust in LMs but also informs the development of more effective LMs.
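The reported link between trust and usage is an association over ordinal survey responses. As a minimal, purely illustrative sketch of how such an association could be tested on cleaned survey data, the snippet below computes Spearman rank correlations; the paper does not release its analysis code, so all column names and sample values here are hypothetical stand-ins for the fields the abstract describes.

```python
# Illustrative only: column names and values are hypothetical stand-ins for the
# survey fields described in the abstract (the study's own code is not public).
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical pre-processed responses (the study reports n = 88 after cleaning).
df = pd.DataFrame({
    "usage_hours": [0.5, 2.0, 5.0, 1.0, 8.0, 3.0],  # self-reported weekly usage time
    "usage_freq":  [1, 3, 5, 2, 5, 4],              # 1 = rarely ... 5 = daily
    "trust":       [2, 3, 4, 2, 5, 4],              # 1 = no trust ... 5 = full trust
})

# Spearman rank correlation suits ordinal Likert-style ratings.
for col in ("usage_hours", "usage_freq"):
    rho, p = spearmanr(df[col], df["trust"])
    print(f"trust vs {col}: rho = {rho:.2f}, p = {p:.3f}")
```

Spearman is used here rather than Pearson because Likert-style responses are ordinal; replicating the paper's actual analysis would require its survey instrument and coding scheme.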