The relationship between trust in AI and trustworthy machine learning technologies

Ehsan Toreini, M. Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, A. Moorsel
{"title":"人工智能中的信任与可信赖的机器学习技术之间的关系","authors":"Ehsan Toreini, M. Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, A. Moorsel","doi":"10.1145/3351095.3372834","DOIUrl":null,"url":null,"abstract":"To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework augmented with a recently proposed mapping of ABI+ on qualities of technologies that support trust. We consider four categories of trustworthiness technologies for machine learning, namely these for Fairness, Explainability, Auditability and Safety (FEAS) and discuss if and how these support the required qualities. Moreover, trust can be impacted throughout the life cycle of AI-based systems, and we therefore introduce the concept of Chain of Trust to discuss trustworthiness technologies in all stages of the life cycle. In so doing we establish the ways in which machine learning technologies support trusted AI-based systems. Finally, FEAS has obvious relations with known frameworks and therefore we relate FEAS to a variety of international 'principled AI' policy and technology frameworks that have emerged in recent years.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"232 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"169","resultStr":"{\"title\":\"The relationship between trust in AI and trustworthy machine learning technologies\",\"authors\":\"Ehsan Toreini, M. Aitken, Kovila P. L. Coopamootoo, Karen Elliott, Carlos Vladimiro Gonzalez Zelaya, A. Moorsel\",\"doi\":\"10.1145/3351095.3372834\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework augmented with a recently proposed mapping of ABI+ on qualities of technologies that support trust. We consider four categories of trustworthiness technologies for machine learning, namely these for Fairness, Explainability, Auditability and Safety (FEAS) and discuss if and how these support the required qualities. Moreover, trust can be impacted throughout the life cycle of AI-based systems, and we therefore introduce the concept of Chain of Trust to discuss trustworthiness technologies in all stages of the life cycle. In so doing we establish the ways in which machine learning technologies support trusted AI-based systems. 
Finally, FEAS has obvious relations with known frameworks and therefore we relate FEAS to a variety of international 'principled AI' policy and technology frameworks that have emerged in recent years.\",\"PeriodicalId\":377829,\"journal\":{\"name\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"volume\":\"232 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"169\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3351095.3372834\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3372834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 169

Abstract

To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach that relates considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework, augmented with a recently proposed mapping of ABI+ onto qualities of technologies that support trust. We consider four categories of trustworthiness technologies for machine learning, namely those for Fairness, Explainability, Auditability and Safety (FEAS), and discuss whether and how these support the required qualities. Moreover, trust can be impacted throughout the life cycle of AI-based systems, so we introduce the concept of a Chain of Trust to discuss trustworthiness technologies at all stages of the life cycle. In so doing we establish the ways in which machine learning technologies support trusted AI-based systems. Finally, FEAS has obvious relations with known frameworks, and we therefore relate FEAS to a variety of international 'principled AI' policy and technology frameworks that have emerged in recent years.
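The paper's FEAS categories name families of technologies rather than single mechanisms. As a concrete illustration of what one such technology can look like in practice, here is a minimal Python sketch of a fairness check that computes the demographic parity difference between two groups. This example is not taken from the paper; the function, the data, and the choice of metric are illustrative assumptions only.

```python
# Illustrative sketch (not from the paper): one concrete instance of a
# "Fairness" technology in the FEAS sense. Demographic parity difference
# measures how much a model's positive-decision rate differs between two
# groups defined by a protected attribute.
from typing import Sequence


def demographic_parity_difference(
    predictions: Sequence[int],  # model decisions, 1 = positive outcome
    groups: Sequence[str],       # protected-attribute value per individual
    group_a: str,
    group_b: str,
) -> float:
    """Return P(prediction = 1 | group_a) - P(prediction = 1 | group_b)."""
    def positive_rate(group: str) -> float:
        decisions = [p for p, g in zip(predictions, groups) if g == group]
        return sum(decisions) / len(decisions)

    return positive_rate(group_a) - positive_rate(group_b)


# Hypothetical data: group "a" receives positive decisions at a rate of
# 0.75 and group "b" at 0.25, so the parity gap is 0.5 -- a large disparity.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
membership = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, membership, "a", "b"))  # 0.5
```

A value near zero indicates similar positive-decision rates across groups on this one metric; in the paper's framing, auditability and explainability technologies would complement such a check rather than replace it.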