{"title":"算法判决——美国刑事司法系统风险评估软件存在偏见和准确性不足","authors":"W. Gravett","doi":"10.47348/SACJ/V34/I1A2","DOIUrl":null,"url":null,"abstract":"Developments in artificial intelligence and machine learning have caused governments to start outsourcing authority in performing public functions to machines. Indeed, algorithmic decision-making is becoming ubiquitous, from assigning credit scores to people, to identifying the best candidates for an employment position, to ranking applicants for admission to university. Apart from the broader social, ethical and legal considerations, controversies have arisen regarding the inaccuracy of AI systems and their bias against vulnerable populations. The growing use of automated risk-assessment software in criminal sentencing is a cause for both optimism and scepticism. While these tools could potentially increase sentencing accuracy and reduce the risk of human error and bias by providing evidence-based reasons in place of ‘ad-hoc’ decisions by human beings beset with cognitive and implicit biases, they also have the potential to reinforce and exacerbate existing biases, and to undermine certain of the basic constitutional guarantees embedded in the justice system. A 2016 decision in the United States, S v Loomis, exemplifies the threat that the unchecked and unrestrained outsourcing of public power to AI systems might undermine human rights and the rule of law.","PeriodicalId":256796,"journal":{"name":"South African journal of criminal justice","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sentenced by an algorithm — Bias and lack of accuracy in risk-assessment software in the United States criminal justice system\",\"authors\":\"W. Gravett\",\"doi\":\"10.47348/SACJ/V34/I1A2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Developments in artificial intelligence and machine learning have caused governments to start outsourcing authority in performing public functions to machines. Indeed, algorithmic decision-making is becoming ubiquitous, from assigning credit scores to people, to identifying the best candidates for an employment position, to ranking applicants for admission to university. Apart from the broader social, ethical and legal considerations, controversies have arisen regarding the inaccuracy of AI systems and their bias against vulnerable populations. The growing use of automated risk-assessment software in criminal sentencing is a cause for both optimism and scepticism. While these tools could potentially increase sentencing accuracy and reduce the risk of human error and bias by providing evidence-based reasons in place of ‘ad-hoc’ decisions by human beings beset with cognitive and implicit biases, they also have the potential to reinforce and exacerbate existing biases, and to undermine certain of the basic constitutional guarantees embedded in the justice system. 
A 2016 decision in the United States, S v Loomis, exemplifies the threat that the unchecked and unrestrained outsourcing of public power to AI systems might undermine human rights and the rule of law.\",\"PeriodicalId\":256796,\"journal\":{\"name\":\"South African journal of criminal justice\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"South African journal of criminal justice\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.47348/SACJ/V34/I1A2\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"South African journal of criminal justice","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.47348/SACJ/V34/I1A2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Developments in artificial intelligence and machine learning have led governments to begin outsourcing authority over public functions to machines. Indeed, algorithmic decision-making is becoming ubiquitous, from assigning credit scores, to identifying the best candidates for an employment position, to ranking applicants for admission to university. Apart from the broader social, ethical and legal considerations, controversies have arisen over the inaccuracy of AI systems and their bias against vulnerable populations. The growing use of automated risk-assessment software in criminal sentencing is cause for both optimism and scepticism. While these tools could potentially increase sentencing accuracy and reduce the risk of human error and bias by providing evidence-based reasons in place of ‘ad hoc’ decisions by human beings beset with cognitive and implicit biases, they also have the potential to reinforce and exacerbate existing biases, and to undermine certain basic constitutional guarantees embedded in the justice system. A 2016 decision in the United States, S v Loomis, exemplifies the threat that the unchecked and unrestrained outsourcing of public power to AI systems poses to human rights and the rule of law.