Global Challenges in the Standardization of Ethics for Trustworthy AI

Authors: Dave Lewis, Linda Hogan, David Filip, P. J. Wall
Journal: Journal of ICT Standardization, vol. 8, no. 2, pp. 123-150 (2020)
DOI: 10.13052/jicts2245-800X.823
JCR: Q3 (Decision Sciences)
Citations: 10
Open-access PDF: https://ieeexplore.ieee.org/iel7/10251929/10258069/10258095.pdf

Abstract

In this paper, we examine the challenges of developing international standards for Trustworthy AI that aim both to be globally applicable and to address the ethical questions key to building trust at a commercial and societal level. We begin by examining the validity of grounding standards that aim for international reach on human rights agreements, and the need to accommodate variations in prioritization and tradeoffs when implementing rights in different societal and cultural settings. We then examine the major recent proposals from the OECD, the EU and the IEEE on the ethical governance of Trustworthy AI systems in terms of their scope and use of normative language. From this analysis, we propose a preliminary minimal model of the functional roles relevant to Trustworthy AI as a framing for further standards development in this area. We also identify the different types of interoperability reference points that may exist between these functional roles and remark on the potential role they could play in future standardization. Finally, we examine a current AI standardization effort under ISO/IEC JTC1 to consider how future Trustworthy AI standards may build on existing standards in developing ethical guidelines, in particular on the ISO standard on Social Responsibility. We conclude by proposing some future directions for research and development of Trustworthy AI standards.
Source Journal

Journal of ICT Standardization (Computer Science: Information Systems)
CiteScore: 2.20
Self-citation rate: 0.00%
Articles published: 18