Standards, frameworks, and legislation for artificial intelligence (AI) transparency

Brady Lund, Zeynep Orhan, Nishith Reddy Mannuru, Ravi Varma Kumar Bevara, Brett Porter, Meka Kasi Vinaih, Padmapadanand Bhaskara
{"title":"Standards, frameworks, and legislation for artificial intelligence (AI) transparency","authors":"Brady Lund,&nbsp;Zeynep Orhan,&nbsp;Nishith Reddy Mannuru,&nbsp;Ravi Varma Kumar Bevara,&nbsp;Brett Porter,&nbsp;Meka Kasi Vinaih,&nbsp;Padmapadanand Bhaskara","doi":"10.1007/s43681-025-00661-4","DOIUrl":null,"url":null,"abstract":"<div><p>The global landscape of transparency standards, frameworks, and legislation for artificial intelligence (AI) shows an increasing focus on building trust, accountability, and ethical deployment. This paper presents comparative analysis of key frameworks for AI transparency, such as the IEEE P7001 standard and the CLeAR Documentation Framework, highlighting how regions like the United States, European Union, China, and Japan are addressing the need for transparent and trustworthy AI systems. Common themes across these standards include the need for tiered transparency levels based on system risk and impact, continuous documentation updates throughout the development and revision processes, and the production of explanations tailored to various stakeholder groups. Several key challenges arise in the development of AI transparency standards, frameworks, and legislation, including balancing transparency with privacy, ensuring intellectual property rights, and addressing security concerns. Promoting adaptable, sector-specific transparency regulatory structures is critical in the development of frameworks flexible enough to keep pace with AI’s rapid technological advancement. These insights contribute to a growing body of literature on how best to develop transparency regulatory structures that not only build trust in AI but also support innovation across industries.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3639 - 3655"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00661-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The global landscape of transparency standards, frameworks, and legislation for artificial intelligence (AI) shows an increasing focus on building trust, accountability, and ethical deployment. This paper presents a comparative analysis of key frameworks for AI transparency, such as the IEEE P7001 standard and the CLeAR Documentation Framework, highlighting how jurisdictions including the United States, the European Union, China, and Japan are addressing the need for transparent and trustworthy AI systems. Common themes across these standards include tiered transparency levels based on system risk and impact, continuous documentation updates throughout the development and revision processes, and explanations tailored to different stakeholder groups. Several key challenges arise in developing AI transparency standards, frameworks, and legislation, including balancing transparency with privacy, protecting intellectual property rights, and addressing security concerns. Promoting adaptable, sector-specific regulatory structures is critical to developing frameworks flexible enough to keep pace with AI's rapid technological advancement. These insights contribute to a growing body of literature on how best to develop transparency regulations that not only build trust in AI but also support innovation across industries.
