{"title":"公共行政中的自动化偏见--从法律和心理学的跨学科视角出发","authors":"Hannah Ruschemeier, Lukas J. Hondrich","doi":"10.1016/j.giq.2024.101953","DOIUrl":null,"url":null,"abstract":"<div><p>The objective of this paper is to break down the widely presumed dichotomy, especially in law, between fully automated decisions and human decisions from a psychological and normative perspective. This is particularly relevant as human oversight is seen as an effective means of quality control, including in the current AI Act. The phenomenon of automation bias argues against this assumption. We have investigated this phenomenon of automation bias, as a behavioral effect of and its implications in normative institutional decision-making situations. The phenomenon of automation bias, whereby individuals overly rely on machine-generated decisions or proposals, has far-reaching implications. Excessive reliance may result in a failure to meaningfully engage with the decision at hand, resulting in an inability to detect automation failures, and an overall deterioration in decision quality, potentially up to a net-negative impact of the decision support system. As legal systems emphasize the role of human decisions in ensuring fairness and quality, this paper critically examines the inadequacies of current EU and national legal frameworks in addressing the risks of automation bias. Contributing a novel perspective, this article integrates psychological, technical, and normative elements to analyze automation bias and its legal implications. Anchoring human decisions within legal principles, it navigates the intersections between AI and human-machine interactions from a normative point of view. An exploration of the AI Act sheds light on potential avenues for improvement. In conclusion, our paper proposes four steps aimed at effectively countering the potential perils posed by automation bias. 
By linking psychological insights, legal analysis, and technical implications, this paper advocates a holistic approach to evolving legal frameworks in an increasingly automated world.</p></div>","PeriodicalId":48258,"journal":{"name":"Government Information Quarterly","volume":"41 3","pages":"Article 101953"},"PeriodicalIF":7.8000,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0740624X24000455/pdfft?md5=f139afd2536a788af4e4774e64383581&pid=1-s2.0-S0740624X24000455-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Automation bias in public administration – an interdisciplinary perspective from law and psychology\",\"authors\":\"Hannah Ruschemeier, Lukas J. Hondrich\",\"doi\":\"10.1016/j.giq.2024.101953\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The objective of this paper is to break down the widely presumed dichotomy, especially in law, between fully automated decisions and human decisions from a psychological and normative perspective. This is particularly relevant as human oversight is seen as an effective means of quality control, including in the current AI Act. The phenomenon of automation bias argues against this assumption. We have investigated this phenomenon of automation bias, as a behavioral effect of and its implications in normative institutional decision-making situations. The phenomenon of automation bias, whereby individuals overly rely on machine-generated decisions or proposals, has far-reaching implications. Excessive reliance may result in a failure to meaningfully engage with the decision at hand, resulting in an inability to detect automation failures, and an overall deterioration in decision quality, potentially up to a net-negative impact of the decision support system. 
As legal systems emphasize the role of human decisions in ensuring fairness and quality, this paper critically examines the inadequacies of current EU and national legal frameworks in addressing the risks of automation bias. Contributing a novel perspective, this article integrates psychological, technical, and normative elements to analyze automation bias and its legal implications. Anchoring human decisions within legal principles, it navigates the intersections between AI and human-machine interactions from a normative point of view. An exploration of the AI Act sheds light on potential avenues for improvement. In conclusion, our paper proposes four steps aimed at effectively countering the potential perils posed by automation bias. By linking psychological insights, legal analysis, and technical implications, this paper advocates a holistic approach to evolving legal frameworks in an increasingly automated world.</p></div>\",\"PeriodicalId\":48258,\"journal\":{\"name\":\"Government Information Quarterly\",\"volume\":\"41 3\",\"pages\":\"Article 101953\"},\"PeriodicalIF\":7.8000,\"publicationDate\":\"2024-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0740624X24000455/pdfft?md5=f139afd2536a788af4e4774e64383581&pid=1-s2.0-S0740624X24000455-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Government Information Quarterly\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0740624X24000455\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Government Information 
Quarterly","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0740624X24000455","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Automation bias in public administration – an interdisciplinary perspective from law and psychology
The objective of this paper is to break down the widely presumed dichotomy, especially in law, between fully automated decisions and human decisions from a psychological and normative perspective. This is particularly relevant because human oversight is regarded as an effective means of quality control, including in the current AI Act. The phenomenon of automation bias argues against this assumption. We investigate automation bias as a behavioral effect and examine its implications for normative institutional decision-making. Automation bias, whereby individuals over-rely on machine-generated decisions or proposals, has far-reaching consequences: excessive reliance may prevent meaningful engagement with the decision at hand, leading to an inability to detect automation failures, an overall deterioration in decision quality, and potentially even a net-negative impact of the decision support system. As legal systems emphasize the role of human decisions in ensuring fairness and quality, this paper critically examines the inadequacies of current EU and national legal frameworks in addressing the risks of automation bias. Contributing a novel perspective, the article integrates psychological, technical, and normative elements to analyze automation bias and its legal implications. Anchoring human decisions within legal principles, it navigates the intersections between AI and human-machine interaction from a normative point of view. An exploration of the AI Act sheds light on potential avenues for improvement. In conclusion, the paper proposes four steps aimed at effectively countering the potential perils posed by automation bias. By linking psychological insights, legal analysis, and technical implications, it advocates a holistic approach to evolving legal frameworks in an increasingly automated world.
About the journal:
Government Information Quarterly (GIQ) examines the convergence of policy, information technology, government, and the public. It explores the impact of policies on government information flows, the role of technology in innovative government services, and the dynamic between citizens and governing bodies in the digital age. GIQ serves as a premier journal, disseminating high-quality research and insights that bridge these realms.