Human Control and Discretion in AI-driven Decision-making in Government

L. Mitrou, M. Janssen, E. Loukis
{"title":"Human Control and Discretion in AI-driven Decision-making in Government","authors":"L. Mitrou, M. Janssen, E. Loukis","doi":"10.1145/3494193.3494195","DOIUrl":null,"url":null,"abstract":"Traditionally public decision-makers have been given discretion in many of the decisions they have to make in how to comply with legislation and policies. In this way, the context and specific circumstances can be taken into account when making decisions. This enables more acceptable solutions, but at the same time, discretion might result in treating individuals differently. With the advance of AI-based decisions, the role of the decision-makers is changing. The automation might result in fully automated decisions, humans-in-the-loop or AI might only be used as recommender systems in which humans have the discretion to deviate from the suggested decision. The predictability of and the accountability of the decisions might vary in these circumstances, although humans always remain accountable. Hence, there is a need for human-control and the decision-makers should be given sufficient authority to control the system and deal with undesired outcomes. In this direction this paper analyzes the degree of discretion and human control needed in AI-driven decision-making in government. Our analysis is based on the legal requirements set/posed to the administration, by the extensive legal frameworks that have been created for its operation, concerning the rule of law, the fairness – non-discrimination, the justifiability and accountability, and the certainty/ predictability.","PeriodicalId":360191,"journal":{"name":"Proceedings of the 14th International Conference on Theory and Practice of Electronic Governance","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 14th International Conference on Theory and Practice of Electronic Governance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3494193.3494195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Traditionally, public decision-makers have been given discretion in many of the decisions they have to make about how to comply with legislation and policies. In this way, the context and specific circumstances can be taken into account when making decisions. This enables more acceptable solutions, but at the same time discretion might result in treating individuals differently. With the advance of AI-based decision-making, the role of the decision-makers is changing. Automation might result in fully automated decisions, in human-in-the-loop arrangements, or in AI being used only as a recommender system in which humans retain the discretion to deviate from the suggested decision. The predictability and accountability of the decisions might vary across these circumstances, although humans always remain accountable. Hence, there is a need for human control, and decision-makers should be given sufficient authority to control the system and deal with undesired outcomes. In this direction, this paper analyzes the degree of discretion and human control needed in AI-driven decision-making in government. Our analysis is based on the legal requirements posed to the administration by the extensive legal frameworks created for its operation, concerning the rule of law, fairness and non-discrimination, justifiability and accountability, and certainty/predictability.