Content Analysis of Judges’ Sentiments Toward Artificial Intelligence Risk Assessment Tools

A. Fine, S. Le, M. K. Miller
{"title":"法官对人工智能风险评估工具的情感内容分析","authors":"A. Fine, S. Le, M. K. Miller","doi":"10.21202/2782-2923.2024.1.246-263","DOIUrl":null,"url":null,"abstract":"Objective: to analyze the positions of judges on risk assessment tools using artificial intelligence.Methods: dialectical approach to cognition of social phenomena, allowing to analyze them in historical development and functioning in the context of the totality of objective and subjective factors, which predetermined the following research methods: formal-logical and sociological.Results: Artificial intelligence (AI) uses computer programming to make predictions (e.g., bail decisions) and has the potential to benefit the justice system (e.g., save time and reduce bias). This secondary data analysis assessed 381 judges’ responses to the question, “Do you feel that artificial intelligence (using computer programs and algorithms) holds promise to remove bias from bail and sentencing decisions?”Scientific novelty: The authors created apriori themes based on the literature, which included judges’ algorithm aversion and appreciation, locus of control, procedural justice, and legitimacy. Results suggest that judges experience algorithm aversion, have significant concerns about bias being exacerbated by AI, and worry about being replaced by computers. Judges believe that AI has the potential to inform their decisions about bail and sentencing; however, it must be empirically tested and follow guidelines. Using the data gathered about judges’ sentiments toward AI, the authors discuss the integration of AI into the legal system and future research.Practical significance: the main provisions and conclusions of the article can be used in scientific, pedagogical and law enforcement activities when considering the issues related to the legal risks of using artificial intelligence.","PeriodicalId":507562,"journal":{"name":"Russian Journal of Economics and Law","volume":"65 12","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Content Analysis of Judges’ Sentiments Toward Artificial Intelligence Risk Assessment Tools\",\"authors\":\"A. Fine, S. Le, M. K. Miller\",\"doi\":\"10.21202/2782-2923.2024.1.246-263\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Objective: to analyze the positions of judges on risk assessment tools using artificial intelligence.Methods: dialectical approach to cognition of social phenomena, allowing to analyze them in historical development and functioning in the context of the totality of objective and subjective factors, which predetermined the following research methods: formal-logical and sociological.Results: Artificial intelligence (AI) uses computer programming to make predictions (e.g., bail decisions) and has the potential to benefit the justice system (e.g., save time and reduce bias). This secondary data analysis assessed 381 judges’ responses to the question, “Do you feel that artificial intelligence (using computer programs and algorithms) holds promise to remove bias from bail and sentencing decisions?”Scientific novelty: The authors created apriori themes based on the literature, which included judges’ algorithm aversion and appreciation, locus of control, procedural justice, and legitimacy. Results suggest that judges experience algorithm aversion, have significant concerns about bias being exacerbated by AI, and worry about being replaced by computers. 
Judges believe that AI has the potential to inform their decisions about bail and sentencing; however, it must be empirically tested and follow guidelines. Using the data gathered about judges’ sentiments toward AI, the authors discuss the integration of AI into the legal system and future research.Practical significance: the main provisions and conclusions of the article can be used in scientific, pedagogical and law enforcement activities when considering the issues related to the legal risks of using artificial intelligence.\",\"PeriodicalId\":507562,\"journal\":{\"name\":\"Russian Journal of Economics and Law\",\"volume\":\"65 12\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Russian Journal of Economics and Law\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21202/2782-2923.2024.1.246-263\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Russian Journal of Economics and Law","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21202/2782-2923.2024.1.246-263","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Objective: to analyze judges’ positions on risk assessment tools that use artificial intelligence.

Methods: a dialectical approach to the cognition of social phenomena, which allows them to be analyzed in their historical development and functioning within the totality of objective and subjective factors; this predetermined the research methods used: formal-logical and sociological.

Results: Artificial intelligence (AI) uses computer programming to make predictions (e.g., bail decisions) and has the potential to benefit the justice system (e.g., by saving time and reducing bias). This secondary data analysis assessed 381 judges’ responses to the question, “Do you feel that artificial intelligence (using computer programs and algorithms) holds promise to remove bias from bail and sentencing decisions?”

Scientific novelty: The authors created a priori themes based on the literature, including judges’ algorithm aversion and appreciation, locus of control, procedural justice, and legitimacy. The results suggest that judges experience algorithm aversion, have significant concerns about AI exacerbating bias, and worry about being replaced by computers. Judges believe that AI has the potential to inform their bail and sentencing decisions; however, it must be empirically tested and must follow guidelines. Using the data gathered on judges’ sentiments toward AI, the authors discuss the integration of AI into the legal system and directions for future research.

Practical significance: the main provisions and conclusions of the article can be used in scientific, pedagogical, and law enforcement activities when considering issues related to the legal risks of using artificial intelligence.
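The abstract does not reproduce the coding procedure behind the content analysis. As a rough illustration of how a priori themes could be tallied across open-ended responses, the sketch below tags responses by keyword. The theme keyword lists and the sample responses are hypothetical assumptions for illustration only, not the authors’ codebook; in the actual study, responses would presumably be labeled by human coders, with this kind of script useful mainly for counting labels once assigned.

```python
# Hypothetical sketch of a priori thematic coding of open-ended judge responses.
# Theme names follow those listed in the abstract; the keyword lists and sample
# responses below are illustrative assumptions, not the study's actual codebook.

from collections import Counter

A_PRIORI_THEMES = {
    "algorithm_aversion": ["distrust", "skeptical", "never rely", "replace judges"],
    "algorithm_appreciation": ["helpful", "useful", "inform my decision"],
    "locus_of_control": ["my decision", "final say", "judge decides"],
    "procedural_justice": ["fair process", "transparent", "due process"],
    "legitimacy": ["public trust", "confidence in the court"],
}


def code_response(text: str) -> list[str]:
    """Return every a priori theme whose keywords appear in the response."""
    lowered = text.lower()
    return [
        theme
        for theme, keywords in A_PRIORI_THEMES.items()
        if any(keyword in lowered for keyword in keywords)
    ]


def theme_frequencies(responses: list[str]) -> Counter:
    """Count how many responses touch each theme (a response may hit several)."""
    counts: Counter = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts


if __name__ == "__main__":
    sample = [
        "AI could be helpful, but the judge decides in the end.",
        "I am skeptical; a transparent process still needs human judgment.",
    ]
    print(theme_frequencies(sample))
```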