Re-evaluating GPT-4's bar exam performance

Impact Factor 3.1 · CAS Tier 2 (Sociology) · JCR Q2 (Computer Science, Artificial Intelligence)
Eric Martínez
{"title":"重新评估 GPT-4 的律师资格考试成绩","authors":"Eric Martínez","doi":"10.1007/s10506-024-09396-9","DOIUrl":null,"url":null,"abstract":"<div><p>Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and <span>\\(\\sim\\)</span>48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be <span>\\(\\sim\\)</span>62nd percentile, including <span>\\(\\sim\\)</span>42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to <span>\\(\\sim\\)</span>48th percentile overall, and <span>\\(\\sim\\)</span>15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"33 3","pages":"581 - 604"},"PeriodicalIF":3.1000,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-024-09396-9.pdf","citationCount":"0","resultStr":"{\"title\":\"Re-evaluating GPT-4’s bar exam performance\",\"authors\":\"Eric Martínez\",\"doi\":\"10.1007/s10506-024-09396-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. 
First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and <span>\\\\(\\\\sim\\\\)</span>48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be <span>\\\\(\\\\sim\\\\)</span>62nd percentile, including <span>\\\\(\\\\sim\\\\)</span>42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to <span>\\\\(\\\\sim\\\\)</span>48th percentile overall, and <span>\\\\(\\\\sim\\\\)</span>15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.</p></div>\",\"PeriodicalId\":51336,\"journal\":{\"name\":\"Artificial Intelligence and Law\",\"volume\":\"33 3\",\"pages\":\"581 - 604\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10506-024-09396-9.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence and Law\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10506-024-09396-9\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and Law","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10506-024-09396-9","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Perhaps the most widely touted of GPT-4's at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI's estimates of GPT-4's UBE percentile are overinflated. First, although GPT-4's UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4's overall UBE percentile was below the 69th percentile, and ~48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4's performance against first-time test takers is estimated to be ~62nd percentile, including ~42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4's performance is estimated to drop to ~48th percentile overall, and ~15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4's reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4's MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.
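The percentile estimates above all hinge on ranking one fixed scaled score against different reference populations (all test-takers, first-timers, exam passers). A minimal sketch of that computation, assuming an empirical distribution of population scores; the scores and the 270 cut score below are hypothetical placeholders for illustration, not NCBE data:

```python
import bisect

def percentile_of_score(score: float, population_scores: list[float]) -> float:
    """Empirical percentile: percent of the population scoring at or below `score`."""
    ranked = sorted(population_scores)
    # bisect_right counts how many population scores are <= score
    at_or_below = bisect.bisect_right(ranked, score)
    return 100.0 * at_or_below / len(ranked)

# Hypothetical illustration only -- not real UBE data. Restricting the
# reference population to higher scorers (e.g. exam passers) pushes the
# same score of 298 down to a lower percentile.
all_takers = [240, 251, 262, 266, 270, 274, 281, 289, 296, 302]
passers = [s for s in all_takers if s >= 270]  # assuming a 270 cut score

print(percentile_of_score(298, all_takers))  # 90.0
print(percentile_of_score(298, passers))     # ~83.3
```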

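The hyperparameter experiment at the end of the abstract is, in outline, a grid sweep over temperature and prompting style, scored against an answer key. A minimal sketch using the OpenAI chat completions client; the prompt texts, answer extraction, and scoring loop are illustrative assumptions, not the paper's actual protocol:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ZERO_SHOT = "Answer the following multiple-choice question with a single letter (A-D)."
FEW_SHOT_COT = (
    "Below are worked examples answered with step-by-step reasoning.\n"
    "<hand-solved example questions would go here>\n"
    "Reason step by step, then end with a single letter (A-D)."
)

def answer(question: str, instructions: str, temperature: float) -> str:
    """One model call for one MBE-style question; returns the raw completion text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": instructions + "\n\n" + question}],
        temperature=temperature,
    )
    return response.choices[0].message.content

def accuracy(questions: list[str], key: list[str],
             instructions: str, temperature: float) -> float:
    """Score one (prompt style, temperature) condition.

    Naive answer extraction for illustration: assumes the completion
    ends with the chosen letter.
    """
    correct = sum(
        answer(q, instructions, temperature).strip().endswith(key[i])
        for i, q in enumerate(questions)
    )
    return correct / len(questions)
```

A sweep would then call `accuracy` for each combination of prompt style (ZERO_SHOT vs FEW_SHOT_COT) and temperature (e.g. 0.0, 0.5, 1.0) and test the differences for significance; the paper reports no significant temperature effect, but a significant gain for few-shot chain-of-thought over basic zero-shot prompting.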
Source journal: Artificial Intelligence and Law
CiteScore: 9.50
Self-citation rate: 26.80%
Articles published: 33
Aims and scope: Artificial Intelligence and Law is an international forum for the dissemination of original interdisciplinary research in the following areas:

- Theoretical or empirical studies in artificial intelligence (AI), cognitive psychology, jurisprudence, linguistics, or philosophy which address the development of formal or computational models of legal knowledge, reasoning, and decision making.
- In-depth studies of innovative artificial intelligence systems that are being used in the legal domain.
- Studies which address the legal, ethical and social implications of the field of Artificial Intelligence and Law.

Topics of interest include, but are not limited to, the following:

- Computational models of legal reasoning and decision making; judgmental reasoning, adversarial reasoning, case-based reasoning, deontic reasoning, and normative reasoning.
- Formal representation of legal knowledge: deontic notions, normative modalities, rights, factors, values, rules.
- Jurisprudential theories of legal reasoning.
- Specialized logics for law.
- Psychological and linguistic studies concerning legal reasoning.
- Legal expert systems; statutory systems, legal practice systems, predictive systems, and normative systems.
- AI and law support for legislative drafting, judicial decision-making, and public administration.
- Intelligent processing of legal documents; conceptual retrieval of cases and statutes, automatic text understanding, intelligent document assembly systems, hypertext, and semantic markup of legal documents.
- Intelligent processing of legal information on the World Wide Web, legal ontologies, automated intelligent legal agents, electronic legal institutions, computational models of legal texts.
- Ramifications for AI and Law in e-Commerce, automatic contracting and negotiation, digital rights management, and automated dispute resolution.
- Ramifications for AI and Law in e-governance, e-government, e-Democracy, and knowledge-based systems supporting public services, public dialogue and mediation.
- Intelligent computer-assisted instructional systems in law or ethics.
- Evaluation and auditing techniques for legal AI systems.
- Systemic problems in the construction and delivery of legal AI systems.
- Impact of AI on the law and legal institutions.
- Ethical issues concerning legal AI systems.

In addition to original research contributions, the Journal will include a Book Review section, a series of Technology Reports describing existing and emerging products, applications and technologies, and a Research Notes section of occasional essays posing interesting and timely research challenges for the field of Artificial Intelligence and Law. Financial support for the Journal of Artificial Intelligence and Law is provided by the University of Pittsburgh School of Law.