{"title":"重新评估 GPT-4 的律师资格考试成绩","authors":"Eric Martínez","doi":"10.1007/s10506-024-09396-9","DOIUrl":null,"url":null,"abstract":"<div><p>Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and <span>\\(\\sim\\)</span>48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be <span>\\(\\sim\\)</span>62nd percentile, including <span>\\(\\sim\\)</span>42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to <span>\\(\\sim\\)</span>48th percentile overall, and <span>\\(\\sim\\)</span>15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"33 3","pages":"581 - 604"},"PeriodicalIF":3.1000,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-024-09396-9.pdf","citationCount":"0","resultStr":"{\"title\":\"Re-evaluating GPT-4’s bar exam performance\",\"authors\":\"Eric Martínez\",\"doi\":\"10.1007/s10506-024-09396-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. 
First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and <span>\\\\(\\\\sim\\\\)</span>48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be <span>\\\\(\\\\sim\\\\)</span>62nd percentile, including <span>\\\\(\\\\sim\\\\)</span>42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to <span>\\\\(\\\\sim\\\\)</span>48th percentile overall, and <span>\\\\(\\\\sim\\\\)</span>15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.</p></div>\",\"PeriodicalId\":51336,\"journal\":{\"name\":\"Artificial Intelligence and Law\",\"volume\":\"33 3\",\"pages\":\"581 - 604\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10506-024-09396-9.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence and Law\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10506-024-09396-9\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and Law","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10506-024-09396-9","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and \(\sim\)48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be \(\sim\)62nd percentile, including \(\sim\)42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to \(\sim\)48th percentile overall, and \(\sim\)15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.
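Two of the claims above lend themselves to small worked illustrations. First, each percentile estimate amounts to locating GPT-4's scaled score within the score distribution of a particular comparison group (all examinees, first-time takers, or passers only), so changing the comparison group changes the percentile even though the score stays fixed. The sketch below shows that calculation under a normal approximation; the group means and standard deviations are placeholder assumptions for illustration, not figures from the paper or from NCBE data.

```python
# Rough illustration (not the paper's actual data): estimate the percentile of a
# scaled UBE score within a comparison group, assuming the group's scores are
# approximately normal with a given mean and standard deviation.
from statistics import NormalDist

def estimate_percentile(score: float, group_mean: float, group_sd: float) -> float:
    """Percentage of the comparison group expected to score below `score`."""
    return 100 * NormalDist(mu=group_mean, sigma=group_sd).cdf(score)

gpt4_score = 298  # GPT-4's reported scaled UBE score

# Hypothetical score distributions for different comparison groups.
groups = {
    "all examinees": (270, 25),
    "first-time takers": (280, 22),
    "passers only": (300, 15),
}

for name, (mean, sd) in groups.items():
    pct = estimate_percentile(gpt4_score, mean, sd)
    print(f"{name}: ~{pct:.0f}th percentile")
```

Second, the hyperparameter analysis compares MBE accuracy across temperature settings and across zero-shot versus few-shot chain-of-thought prompting. A minimal sketch of how such a comparison could be wired up against a chat-completion API follows; the prompt wording, answer parsing, and evaluation loop are illustrative assumptions rather than the paper's exact protocol.

```python
# Illustrative sketch (not the paper's exact protocol): score a batch of
# multiple-choice MBE-style questions under different temperatures and
# prompting strategies, then compare accuracy across conditions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ZERO_SHOT = "Answer the following multiple-choice question with a single letter (A-D)."
FEW_SHOT_COT = (
    "You will answer multiple-choice questions. For each, reason step by step, "
    "then end with a line of the form 'Answer: <letter>'.\n\n"
    "Example:\nQuestion: ...\nReasoning: ...\nAnswer: B\n"
)

def ask(question: str, system_prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def accuracy(questions, answers, system_prompt, temperature) -> float:
    correct = 0
    for question, gold in zip(questions, answers):
        reply = ask(question, system_prompt, temperature)
        # Treat the last A-D letter in the reply as the model's chosen option.
        letters = [c for c in reply.upper() if c in "ABCD"]
        correct += bool(letters) and letters[-1] == gold
    return correct / len(answers)

# Hypothetical evaluation loop over conditions (question data not shown here):
# for temp in (0.0, 0.5, 1.0):
#     print("zero-shot", temp, accuracy(mbe_questions, mbe_answers, ZERO_SHOT, temp))
#     print("few-shot CoT", temp, accuracy(mbe_questions, mbe_answers, FEW_SHOT_COT, temp))
```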
About the journal:
Artificial Intelligence and Law is an international forum for the dissemination of original interdisciplinary research in the following areas:
- Theoretical or empirical studies in artificial intelligence (AI), cognitive psychology, jurisprudence, linguistics, or philosophy which address the development of formal or computational models of legal knowledge, reasoning, and decision making.
- In-depth studies of innovative artificial intelligence systems that are being used in the legal domain.
- Studies which address the legal, ethical and social implications of the field of Artificial Intelligence and Law.

Topics of interest include, but are not limited to, the following:
- Computational models of legal reasoning and decision making; judgmental reasoning, adversarial reasoning, case-based reasoning, deontic reasoning, and normative reasoning.
- Formal representation of legal knowledge: deontic notions, normative modalities, rights, factors, values, rules.
- Jurisprudential theories of legal reasoning.
- Specialized logics for law.
- Psychological and linguistic studies concerning legal reasoning.
- Legal expert systems; statutory systems, legal practice systems, predictive systems, and normative systems.
- AI and law support for legislative drafting, judicial decision-making, and public administration.
- Intelligent processing of legal documents; conceptual retrieval of cases and statutes, automatic text understanding, intelligent document assembly systems, hypertext, and semantic markup of legal documents.
- Intelligent processing of legal information on the World Wide Web, legal ontologies, automated intelligent legal agents, electronic legal institutions, computational models of legal texts.
- Ramifications for AI and Law in e-Commerce, automatic contracting and negotiation, digital rights management, and automated dispute resolution.
- Ramifications for AI and Law in e-governance, e-government, e-Democracy, and knowledge-based systems supporting public services, public dialogue and mediation.
- Intelligent computer-assisted instructional systems in law or ethics.
- Evaluation and auditing techniques for legal AI systems.
- Systemic problems in the construction and delivery of legal AI systems.
- Impact of AI on the law and legal institutions.
- Ethical issues concerning legal AI systems.

In addition to original research contributions, the Journal will include a Book Review section, a series of Technology Reports describing existing and emerging products, applications and technologies, and a Research Notes section of occasional essays posing interesting and timely research challenges for the field of Artificial Intelligence and Law. Financial support for the Journal of Artificial Intelligence and Law is provided by the University of Pittsburgh School of Law.