{"title":"What do people want from algorithms? Public perceptions of algorithms in government.","authors":"Amit Haim, Dvir Yogev","doi":"10.1037/lhb0000614","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study examined how specific attributes of algorithmic decision-making tools (ADTs), related to algorithm design and institutional governance, affect the public's perceptions of implementing ADTs in government programs.</p><p><strong>Hypotheses: </strong>We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and government in-house development will enhance acceptability. Institutional features were also expected to shape perceptions: Explanations, stakeholder engagement, oversight mechanisms, and human involvement are anticipated to increase public perceptions.</p><p><strong>Method: </strong>This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each featuring a proposal to implement an ADT. Each proposal included randomly generated attributes across nine dimensions. Participants decided on the ADT's acceptability, fairness, and efficiency for each proposal. The analysis focused on the average marginal component effects of ADT attributes.</p><p><strong>Results: </strong>A combination of attributes related to process individualization significantly enhanced the perceived acceptability of the use of algorithms by government. Participants preferred ADTs that elevate the agency of the stakeholder (decision explanations, hearing options, notices, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, whereas accuracy mattered most for efficiency perceptions.</p><p><strong>Conclusion: </strong>Explaining decisions made using an algorithm, giving appropriate notice, providing a hearing option, and maintaining the supervision of a human agent are key components for public support when algorithmic systems are being implemented. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48230,"journal":{"name":"Law and Human Behavior","volume":" ","pages":"263-280"},"PeriodicalIF":3.2000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Law and Human Behavior","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1037/lhb0000614","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/6/9 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0
Abstract
Objective: This study examined how specific attributes of algorithmic decision-making tools (ADTs), related to algorithm design and institutional governance, affect the public's perceptions of implementing ADTs in government programs.
Hypotheses: We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and in-house development by government would enhance acceptability. Institutional features were also expected to shape perceptions: explanations, stakeholder engagement, oversight mechanisms, and human involvement were anticipated to improve public perceptions.
Method: This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each involving the implementation of an ADT and composed of randomly generated attributes across nine dimensions. For each proposal, participants rated the ADT's acceptability, fairness, and efficiency. The analysis focused on the average marginal component effects of the ADT attributes.
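For readers unfamiliar with the estimand, the sketch below illustrates how average marginal component effects (AMCEs) are typically estimated in a fully randomized conjoint design: regress the rating on dummy-coded attribute levels and cluster standard errors by respondent. This is an illustrative assumption, not the authors' code; the attribute names, simulated data, and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_respondents, n_tasks = 200, 5

# Hypothetical long-format conjoint data: each respondent rates several
# proposals whose attribute levels are drawn independently at random.
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_respondents), n_tasks),
    "explanation":   rng.choice(["no", "yes"], n_respondents * n_tasks),
    "human_review":  rng.choice(["no", "yes"], n_respondents * n_tasks),
    "accuracy":      rng.choice(["low", "high"], n_respondents * n_tasks),
})

# Simulated binary acceptability outcome with some attribute effects built in.
p = (0.4
     + 0.15 * (df["explanation"] == "yes")
     + 0.10 * (df["human_review"] == "yes")
     + 0.05 * (df["accuracy"] == "high"))
df["acceptable"] = rng.binomial(1, p)

# Under full randomization, the OLS coefficient on each attribute dummy is the
# AMCE relative to the omitted reference level; standard errors are clustered
# by respondent because each respondent contributes multiple ratings.
model = smf.ols(
    "acceptable ~ C(explanation) + C(human_review) + C(accuracy)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

print(model.params)
```

The same regression setup extends to the study's nine attribute dimensions and to the fairness and efficiency outcomes; only the formula and outcome column change.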
Results: A combination of attributes related to process individualization significantly enhanced the perceived acceptability of government use of algorithms. Participants preferred ADTs that elevate stakeholder agency (decision explanations, hearing options, notices, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, whereas accuracy mattered most for efficiency perceptions.
Conclusion: Explaining decisions made using an algorithm, giving appropriate notice, providing a hearing option, and maintaining the supervision of a human agent are key components for public support when algorithmic systems are being implemented. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Journal overview:
Law and Human Behavior, the official journal of the American Psychology-Law Society/Division 41 of the American Psychological Association, is a multidisciplinary forum for the publication of articles and discussions of issues arising out of the relationships between human behavior and the law, our legal system, and the legal process. This journal publishes original research, reviews of past research, and theoretical studies from professionals in criminal justice, law, psychology, sociology, psychiatry, political science, education, communication, and other areas germane to the field.