Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM

Eike Schneiders, Tina Seabrooke, Joshua Krook, Richard Hyde, Natalie Leesakul, Jeremie Clos, Joel Fischer

arXiv - CS - Human-Computer Interaction, published 2024-09-12. DOI: https://doi.org/arxiv-2409.07871
Abstract
Large Language Models (LLMs) are seemingly infiltrating every domain, and the
legal context is no exception. In this paper, we present the results of three
experiments (total N=288) that investigated lay people's willingness to act
upon, and their ability to discriminate between, LLM- and lawyer-generated
legal advice. In Experiment 1, participants judged their willingness to act on
legal advice when the source of the advice was either known or unknown. When
the advice source was unknown, participants indicated that they were
significantly more willing to act on the LLM-generated advice. This result was
replicated in Experiment 2. Intriguingly, despite participants indicating
higher willingness to act on LLM-generated advice in Experiments 1 and 2,
participants discriminated between the LLM- and lawyer-generated texts
significantly above chance level in Experiment 3. Lastly, we discuss potential
explanations and risks of our findings, limitations and future work, and the
importance of language complexity and real-world comparability.