Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM

Eike Schneiders, Tina Seabrooke, Joshua Krook, Richard Hyde, Natalie Leesakul, Jeremie Clos, Joel Fischer
arXiv - CS - Human-Computer Interaction | Published: 2024-09-12 | DOI: arxiv-2409.07871
Citations: 0

Abstract

Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N=288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. This result was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts significantly above chance-level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations and future work, and the importance of language complexity and real-world comparability.