Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study
Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang
arXiv - CS - Computers and Society. Published 2024-09-13. DOI: arxiv-2409.09186 (https://doi.org/arxiv-2409.09186)
Abstract
Language models (LMs) are revolutionizing knowledge retrieval and processing in academia. However, concerns about their misuse and erroneous outputs, such as hallucinations and fabrications, fuel distrust in LMs within academic communities. Consequently, there is a pressing need to deepen the understanding of how practitioners actually use and trust these models: quantitative evidence on the extent of LM usage, user trust in their outputs, and the issues to prioritize for real-world development remains scarce. This study addresses these gaps by providing data and analysis of LM usage and trust. Specifically, we surveyed 125 individuals at a private school and retained 88 data points after pre-processing. Through both quantitative analysis and qualitative evidence, we found a significant variation in trust levels, which are strongly related to usage time and frequency. Additionally, we found through a polling process that fact-checking is the most critical issue limiting usage. These findings yield several actionable insights: distrust can be overcome by providing exposure to the models, policies should be developed that prioritize fact-checking, and user trust can be enhanced by increasing engagement. By addressing these critical gaps, this research not only adds to the understanding of user experiences and trust in LMs but also informs the development of more effective LMs.
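
To make the reported relationship between trust and usage concrete, the sketch below shows one common way such survey data could be analyzed. It is not taken from the paper: the column names, example values, and the choice of Spearman rank correlation are assumptions introduced here purely for illustration.

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical cleaned survey table (the paper retained 88 responses;
    # these few rows and column names are invented for illustration).
    df = pd.DataFrame({
        "usage_hours_per_week": [0, 2, 5, 10, 1, 7, 3, 12],
        "usage_days_per_week":  [0, 1, 3, 6, 1, 5, 2, 7],
        "trust_score":          [1, 2, 3, 5, 2, 4, 3, 5],  # e.g., 1-5 Likert rating
    })

    # Spearman rank correlation is a standard choice for ordinal Likert-style
    # trust ratings; it tests for a monotonic relationship with usage.
    for col in ("usage_hours_per_week", "usage_days_per_week"):
        rho, p = spearmanr(df[col], df["trust_score"])
        print(f"{col}: rho = {rho:.2f}, p = {p:.3f}")

A rank-based statistic is used here because Likert-style trust ratings are ordinal rather than interval-scaled; the paper itself may have applied a different analysis.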