Regulating for trust: Can law establish trust in artificial intelligence?

Impact Factor: 3.2 · CAS Region 2 (Sociology) · JCR Q1 (Law)
Aurelia Tamò-Larrieux, Clement Guitton, Simon Mayer, Christoph Lutz
{"title":"Regulating for trust: Can law establish trust in artificial intelligence?","authors":"Aurelia Tamò-Larrieux, Clement Guitton, Simon Mayer, Christoph Lutz","doi":"10.1111/rego.12568","DOIUrl":null,"url":null,"abstract":"The current political and regulatory discourse frequently references the term “trustworthy artificial intelligence (AI).” In Europe, the attempts to ensure trustworthy AI started already with the High-Level Expert Group Ethics Guidelines for Trustworthy AI and have now merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives—as the US Executive Order on Safe, Secure, and Trustworthy AI, or the Bletchley Declaration on AI showcase—based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose to consider the broader literature on trust in automation. On this basis, we constructed a framework to analyze 16 factors that impact trust in AI and automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation to gauge different regulatory strategies, notably by differentiating between those strategies where regulation is more likely to also influence trust on AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers a targeted approach to policymakers to debate how to streamline regulatory efforts for future AI governance.","PeriodicalId":21026,"journal":{"name":"Regulation & Governance","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Regulation & Governance","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1111/rego.12568","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
引用次数: 0

Abstract

The current political and regulatory discourse frequently references the term “trustworthy artificial intelligence (AI).” In Europe, attempts to ensure trustworthy AI began with the High-Level Expert Group Ethics Guidelines for Trustworthy AI and have since merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives, such as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI, based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose to consider the broader literature on trust in automation. On this basis, we constructed a framework of 16 factors that impact trust in AI and in automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation for gauging different regulatory strategies, notably by differentiating between strategies where regulation is more likely to also influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers policymakers a targeted approach for debating how to streamline regulatory efforts for future AI governance.
Source journal: Regulation & Governance
CiteScore: 7.80
Self-citation rate: 10.00%
Articles per year: 57
About the journal: Regulation & Governance serves as the leading platform for the study of regulation and governance by political scientists, lawyers, sociologists, historians, criminologists, psychologists, anthropologists, economists, and others. Research on regulation and governance, once fragmented across various disciplines and subject areas, has emerged at the cutting edge of paradigmatic change in the social sciences. Through the peer-reviewed journal Regulation & Governance, we seek to advance discussions between various disciplines about regulation and governance, promote the development of new theoretical and empirical understanding, and serve the growing needs of practitioners for a useful academic reference.