Jacob Verrey, Peter Neyroud, Lawrence Sherman, Barak Ariel
{"title":"一个利用国家罪犯数据库实时预测累犯的公平尺度。","authors":"Jacob Verrey, Peter Neyroud, Lawrence Sherman, Barak Ariel","doi":"10.1007/s00521-025-11478-x","DOIUrl":null,"url":null,"abstract":"<p><p>This investigation explores whether machine learning can predict recidivism while addressing societal biases. To investigate this, we obtained conviction data from the UK's Police National Computer (PNC) on 346,685 records between January 1, 2000, and February 3, 2006 (His Majesty's Inspectorate of Constabulary in Use of the Police National Computer: An inspection of the ACRO Criminal Records Office. His Majesty's Inspectorate of Constabulary, Birmingham, https://assets-hmicfrs.justiceinspectorates.gov.uk/uploads/police-national-computer-use-acro-criminal-records-office.pdf, 2017). We generate twelve machine learning models-six to forecast general recidivism, and six to forecast violent recidivism-over a 3-year period, evaluated via fivefold cross-validation. Our best-performing models outperform the existing state-of-the-arts, receiving an area under curve (AUC) score of 0.8660 and 0.8375 for general and violent recidivism, respectively. Next, we construct a fairness scale that communicates the semantic and technical trade-offs associated with debiasing a criminal justice forecasting model. We use this scale to debias our best-performing models. Results indicate both models can achieve all five fairness definitions because the metrics measuring these definitions-the statistical range of recall, precision, positive rate, and error balance between demographics-indicate that these scores are within a one percentage point difference of each other. Deployment recommendations and implications are discussed. These include recommended safeguards against false positives, an explication of how these models addressed societal biases, and a case study illustrating how these models can improve existing criminal justice practices. That is, these models may help police identify fewer people in a way less impacted by structural bias while still reducing crime. A randomized control trial is proposed to test this illustrated case study, and further directions explored.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s00521-025-11478-x.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"37 26","pages":"21607-21657"},"PeriodicalIF":4.5000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401775/pdf/","citationCount":"0","resultStr":"{\"title\":\"A fairness scale for real-time recidivism forecasts using a national database of convicted offenders.\",\"authors\":\"Jacob Verrey, Peter Neyroud, Lawrence Sherman, Barak Ariel\",\"doi\":\"10.1007/s00521-025-11478-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This investigation explores whether machine learning can predict recidivism while addressing societal biases. To investigate this, we obtained conviction data from the UK's Police National Computer (PNC) on 346,685 records between January 1, 2000, and February 3, 2006 (His Majesty's Inspectorate of Constabulary in Use of the Police National Computer: An inspection of the ACRO Criminal Records Office. His Majesty's Inspectorate of Constabulary, Birmingham, https://assets-hmicfrs.justiceinspectorates.gov.uk/uploads/police-national-computer-use-acro-criminal-records-office.pdf, 2017). 
We generate twelve machine learning models-six to forecast general recidivism, and six to forecast violent recidivism-over a 3-year period, evaluated via fivefold cross-validation. Our best-performing models outperform the existing state-of-the-arts, receiving an area under curve (AUC) score of 0.8660 and 0.8375 for general and violent recidivism, respectively. Next, we construct a fairness scale that communicates the semantic and technical trade-offs associated with debiasing a criminal justice forecasting model. We use this scale to debias our best-performing models. Results indicate both models can achieve all five fairness definitions because the metrics measuring these definitions-the statistical range of recall, precision, positive rate, and error balance between demographics-indicate that these scores are within a one percentage point difference of each other. Deployment recommendations and implications are discussed. These include recommended safeguards against false positives, an explication of how these models addressed societal biases, and a case study illustrating how these models can improve existing criminal justice practices. That is, these models may help police identify fewer people in a way less impacted by structural bias while still reducing crime. A randomized control trial is proposed to test this illustrated case study, and further directions explored.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s00521-025-11478-x.</p>\",\"PeriodicalId\":49766,\"journal\":{\"name\":\"Neural Computing & Applications\",\"volume\":\"37 26\",\"pages\":\"21607-21657\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401775/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Computing & Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00521-025-11478-x\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/8/1 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computing & Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00521-025-11478-x","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/8/1 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A fairness scale for real-time recidivism forecasts using a national database of convicted offenders.
This investigation explores whether machine learning can predict recidivism while addressing societal biases. To investigate this, we obtained conviction data from the UK's Police National Computer (PNC) covering 346,685 records between January 1, 2000, and February 3, 2006 (His Majesty's Inspectorate of Constabulary, Use of the Police National Computer: An inspection of the ACRO Criminal Records Office. His Majesty's Inspectorate of Constabulary, Birmingham, https://assets-hmicfrs.justiceinspectorates.gov.uk/uploads/police-national-computer-use-acro-criminal-records-office.pdf, 2017). We generate twelve machine learning models, six forecasting general recidivism and six forecasting violent recidivism over a 3-year period, evaluated via fivefold cross-validation. Our best-performing models outperform the existing state of the art, achieving area under the curve (AUC) scores of 0.8660 and 0.8375 for general and violent recidivism, respectively. Next, we construct a fairness scale that communicates the semantic and technical trade-offs associated with debiasing a criminal justice forecasting model. We use this scale to debias our best-performing models. Results indicate that both models can satisfy all five fairness definitions: the metrics measuring these definitions (the statistical range of recall, precision, positive rate, and error balance across demographic groups) are within one percentage point of each other. Deployment recommendations and implications are discussed. These include recommended safeguards against false positives, an explication of how these models address societal biases, and a case study illustrating how these models can improve existing criminal justice practices. That is, these models may help police identify fewer people, in a way less affected by structural bias, while still reducing crime. A randomized controlled trial is proposed to test this illustrated case study, and further directions are explored.
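The evaluation protocol named in the abstract, fivefold cross-validation scored by AUC, can be illustrated with a minimal sketch. The learner, feature set, and synthetic data below are assumptions for illustration only; the abstract does not disclose the paper's actual pipeline.

```python
# A hedged sketch of fivefold cross-validation scored by AUC, assuming a
# gradient-boosting learner and synthetic stand-in data (the paper's actual
# features and models are not specified in the abstract).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for PNC-derived features (e.g., criminal-history
# counts, age at conviction), with a class imbalance typical of recidivism.
X, y = make_classification(n_samples=5_000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

model = GradientBoostingClassifier(random_state=0)

# Fivefold cross-validated AUC, mirroring the evaluation metric reported.
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC per fold: {np.round(auc_scores, 4)}")
print(f"Mean AUC: {auc_scores.mean():.4f}")
```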
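The fairness check implied by the abstract, that recall, precision, and positive rate stay within one percentage point across demographic groups, can likewise be sketched. All function and variable names here are hypothetical; this is one plausible reading of the stated criterion, not the authors' implementation.

```python
# A hedged sketch of the fairness criterion described above: compute
# recall, precision, and positive prediction rate per demographic group,
# then check that each metric's range (max - min) across groups is <= 0.01.
import numpy as np

def group_metrics(y_true, y_pred, groups):
    """Per-group recall, precision, and positive rate (hypothetical helper)."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        tp = np.sum((yt == 1) & (yp == 1))
        out[g] = {
            "recall": tp / max(np.sum(yt == 1), 1),
            "precision": tp / max(np.sum(yp == 1), 1),
            "positive_rate": np.mean(yp),
        }
    return out

def within_one_point(metrics, keys=("recall", "precision", "positive_rate")):
    """True if every metric's range across groups is within one percentage point."""
    return all(
        max(m[k] for m in metrics.values())
        - min(m[k] for m in metrics.values()) <= 0.01
        for k in keys
    )

# Toy usage with two demographic groups and random labels/predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1_000)
y_pred = rng.integers(0, 2, 1_000)
groups = rng.choice(["A", "B"], 1_000)
print(within_one_point(group_metrics(y_true, y_pred, groups)))
```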
Supplementary information: The online version contains supplementary material available at 10.1007/s00521-025-11478-x.
Journal overview:
Neural Computing & Applications is an international journal which publishes original research and other information in the field of practical applications of neural computing and related techniques such as genetic algorithms, fuzzy logic and neuro-fuzzy systems.
All items relevant to building practical systems are within its scope, including but not limited to:
- adaptive computing
- algorithms
- applicable neural networks theory
- applied statistics
- architectures
- artificial intelligence
- benchmarks
- case histories of innovative applications
- fuzzy logic
- genetic algorithms
- hardware implementations
- hybrid intelligent systems
- intelligent agents
- intelligent control systems
- intelligent diagnostics
- intelligent forecasting
- machine learning
- neural networks
- neuro-fuzzy systems
- pattern recognition
- performance measures
- self-learning systems
- software simulations
- supervised and unsupervised learning methods
- system engineering and integration
Featured contributions fall into several categories: Original Articles, Review Articles, Book Reviews and Announcements.