Manuel Portela, Carlos Castillo, Songül Tolan, Marzieh Karimi-Haghighi, Antonio Andres Pueyo
{"title":"算法支持的累犯风险评估中人类预测的比较用户研究","authors":"Manuel Portela, Carlos Castillo, Songül Tolan, Marzieh Karimi-Haghighi, Antonio Andres Pueyo","doi":"10.1007/s10506-024-09393-y","DOIUrl":null,"url":null,"abstract":"<div><p>In this paper, we study the effects of using an algorithm-based risk assessment instrument (RAI) to support the prediction of risk of violent recidivism upon release. The instrument we used is a machine learning version of RiskCanvi used by the Justice Department of <i>Catalonia, Spain</i>. It was hypothesized that people can improve their performance on defining the risk of recidivism when assisted with a RAI. Also, that professionals can perform better than non-experts on the domain. Participants had to predict whether a person who has been released from prison will commit a new crime leading to re-incarceration, within the next two years. This user study is done with (1) <i>general</i> participants from diverse backgrounds recruited through a crowdsourcing platform, (2) <i>targeted</i> participants who are students and practitioners of data science, criminology, or social work and professionals who work with RisCanvi. We also run focus groups with participants of the <i>targeted</i> study, including people who use <i>RisCanvi</i> in a professional capacity, to interpret the quantitative results. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains are only seen in the performance of <i>targeted</i> participants with respect to that of crowdsourced participants. Among other comments, professional participants indicate that they would not foresee using a fully-automated system in criminal risk assessment, but do consider it valuable for training, standardization, and to fine-tune or double-check their predictions on particularly difficult cases. 
We found that the revised prediction by using a RAI increases the performance of all groups, while professionals show a better performance in general. And, a RAI can be considered for extending professional capacities and skills along their careers.</p></div>","PeriodicalId":51336,"journal":{"name":"Artificial Intelligence and Law","volume":"33 2","pages":"471 - 517"},"PeriodicalIF":3.1000,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10506-024-09393-y.pdf","citationCount":"0","resultStr":"{\"title\":\"A comparative user study of human predictions in algorithm-supported recidivism risk assessment\",\"authors\":\"Manuel Portela, Carlos Castillo, Songül Tolan, Marzieh Karimi-Haghighi, Antonio Andres Pueyo\",\"doi\":\"10.1007/s10506-024-09393-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In this paper, we study the effects of using an algorithm-based risk assessment instrument (RAI) to support the prediction of risk of violent recidivism upon release. The instrument we used is a machine learning version of RiskCanvi used by the Justice Department of <i>Catalonia, Spain</i>. It was hypothesized that people can improve their performance on defining the risk of recidivism when assisted with a RAI. Also, that professionals can perform better than non-experts on the domain. Participants had to predict whether a person who has been released from prison will commit a new crime leading to re-incarceration, within the next two years. This user study is done with (1) <i>general</i> participants from diverse backgrounds recruited through a crowdsourcing platform, (2) <i>targeted</i> participants who are students and practitioners of data science, criminology, or social work and professionals who work with RisCanvi. 
We also run focus groups with participants of the <i>targeted</i> study, including people who use <i>RisCanvi</i> in a professional capacity, to interpret the quantitative results. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains are only seen in the performance of <i>targeted</i> participants with respect to that of crowdsourced participants. Among other comments, professional participants indicate that they would not foresee using a fully-automated system in criminal risk assessment, but do consider it valuable for training, standardization, and to fine-tune or double-check their predictions on particularly difficult cases. We found that the revised prediction by using a RAI increases the performance of all groups, while professionals show a better performance in general. And, a RAI can be considered for extending professional capacities and skills along their careers.</p></div>\",\"PeriodicalId\":51336,\"journal\":{\"name\":\"Artificial Intelligence and Law\",\"volume\":\"33 2\",\"pages\":\"471 - 517\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10506-024-09393-y.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence and Law\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10506-024-09393-y\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and 
Law","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10506-024-09393-y","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A comparative user study of human predictions in algorithm-supported recidivism risk assessment
In this paper, we study the effects of using an algorithm-based risk assessment instrument (RAI) to support the prediction of the risk of violent recidivism upon release from prison. The instrument we used is a machine learning version of RisCanvi, the risk assessment protocol used by the Justice Department of Catalonia, Spain. We hypothesized that people improve their assessments of recidivism risk when assisted by a RAI, and that domain professionals outperform non-experts. Participants had to predict whether a person released from prison would commit a new crime leading to re-incarceration within the next two years. The user study was conducted with (1) general participants from diverse backgrounds recruited through a crowdsourcing platform, and (2) targeted participants: students and practitioners of data science, criminology, or social work, as well as professionals who work with RisCanvi. We also ran focus groups with participants of the targeted study, including people who use RisCanvi in a professional capacity, to interpret the quantitative results. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains appear only in the performance of targeted participants relative to that of crowdsourced participants. Professional participants indicated that they would not foresee using a fully automated system for criminal risk assessment, but do consider it valuable for training, standardization, and for fine-tuning or double-checking their predictions on particularly difficult cases. We found that revising a prediction with the support of a RAI improves the performance of all groups, with professionals performing better overall, and that a RAI can be considered a means of extending professional capacities and skills over the course of a career.
About the journal:
Artificial Intelligence and Law is an international forum for the dissemination of original interdisciplinary research in the following areas: Theoretical or empirical studies in artificial intelligence (AI), cognitive psychology, jurisprudence, linguistics, or philosophy which address the development of formal or computational models of legal knowledge, reasoning, and decision making. In-depth studies of innovative artificial intelligence systems that are being used in the legal domain. Studies which address the legal, ethical and social implications of the field of Artificial Intelligence and Law.
Topics of interest include, but are not limited to, the following:
- Computational models of legal reasoning and decision making: judgmental reasoning, adversarial reasoning, case-based reasoning, deontic reasoning, and normative reasoning.
- Formal representation of legal knowledge: deontic notions, normative modalities, rights, factors, values, rules.
- Jurisprudential theories of legal reasoning.
- Specialized logics for law.
- Psychological and linguistic studies concerning legal reasoning.
- Legal expert systems: statutory systems, legal practice systems, predictive systems, and normative systems.
- AI and law support for legislative drafting, judicial decision-making, and public administration.
- Intelligent processing of legal documents: conceptual retrieval of cases and statutes, automatic text understanding, intelligent document assembly systems, hypertext, and semantic markup of legal documents.
- Intelligent processing of legal information on the World Wide Web: legal ontologies, automated intelligent legal agents, electronic legal institutions, and computational models of legal texts.
- Ramifications for AI and law in e-commerce: automatic contracting and negotiation, digital rights management, and automated dispute resolution.
- Ramifications for AI and law in e-governance, e-government, and e-democracy: knowledge-based systems supporting public services, public dialogue, and mediation.
- Intelligent computer-assisted instructional systems in law or ethics.
- Evaluation and auditing techniques for legal AI systems.
- Systemic problems in the construction and delivery of legal AI systems.
- Impact of AI on the law and legal institutions.
- Ethical issues concerning legal AI systems.

In addition to original research contributions, the journal includes a Book Review section, a series of Technology Reports describing existing and emerging products, applications, and technologies, and a Research Notes section of occasional essays posing interesting and timely research challenges for the field of artificial intelligence and law. Financial support for the journal is provided by the University of Pittsburgh School of Law.