AI Digital Tool Product Lifecycle Governance Framework through Ethics and Compliance by Design†

Eduardo Ortega, Michelle Tran, Grace Bandeen
{"title":"人工智能数字工具产品生命周期治理框架通过道德和合规设计†","authors":"Eduardo Ortega, Michelle Tran, Grace Bandeen","doi":"10.1109/CAI54212.2023.00155","DOIUrl":null,"url":null,"abstract":"The acceleration of Artificial Intelligence (AI) has brought forward new digital tools that have had a wide impact across society. However, AI digital tools (such as ChatGPT, midjourney, DALL-E 2) have brought forward legal and ethical concerns. — Internationally, public, and private leaders are introducing regulatory frameworks to address data governance for such these AI digital tools (i.e., Global Data Protection Regulation, the European AI Act, Blueprint for an AI Bill of Rights, NIST Risk Management Framework, etc.). We recognize that these AI digital tools are a vital aspect of future technological development, but they require input from various sectors in addressing ethics and compliance design. We survey the current landscape of published AI-specific regulatory frameworks and known engineering design process methods. Using a product lifecycle approach, we also introduce a trans-disciplinary framework to address AI ethics and compliance via design. This product lifecycle approach considers several principles: a Human-Centered Design for Risk Assessment, Functional Safety and Risk Management Standardization, and Continuous Governance throughout Product Lifecycle. Establishing risk management throughout AI product lifecycles can ensure accountability for AI product use cases. In addition, by utilizing previous Functional Safety considerations we can create safety mechanisms throughout the product lifecycle of AI digital tools. Finally, establishing in-field testing for continuous governance will enable the flexibility for new compliance standards and transparency. We establish this governance framework to aid in new compliance strategies for these emerging issues with AI digital tools.","PeriodicalId":129324,"journal":{"name":"2023 IEEE Conference on Artificial Intelligence (CAI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AI Digital Tool Product Lifecycle Governance Framework through Ethics and Compliance by Design†\",\"authors\":\"Eduardo Ortega, Michelle Tran, Grace Bandeen\",\"doi\":\"10.1109/CAI54212.2023.00155\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The acceleration of Artificial Intelligence (AI) has brought forward new digital tools that have had a wide impact across society. However, AI digital tools (such as ChatGPT, midjourney, DALL-E 2) have brought forward legal and ethical concerns. — Internationally, public, and private leaders are introducing regulatory frameworks to address data governance for such these AI digital tools (i.e., Global Data Protection Regulation, the European AI Act, Blueprint for an AI Bill of Rights, NIST Risk Management Framework, etc.). We recognize that these AI digital tools are a vital aspect of future technological development, but they require input from various sectors in addressing ethics and compliance design. We survey the current landscape of published AI-specific regulatory frameworks and known engineering design process methods. Using a product lifecycle approach, we also introduce a trans-disciplinary framework to address AI ethics and compliance via design. 
This product lifecycle approach considers several principles: a Human-Centered Design for Risk Assessment, Functional Safety and Risk Management Standardization, and Continuous Governance throughout Product Lifecycle. Establishing risk management throughout AI product lifecycles can ensure accountability for AI product use cases. In addition, by utilizing previous Functional Safety considerations we can create safety mechanisms throughout the product lifecycle of AI digital tools. Finally, establishing in-field testing for continuous governance will enable the flexibility for new compliance standards and transparency. We establish this governance framework to aid in new compliance strategies for these emerging issues with AI digital tools.\",\"PeriodicalId\":129324,\"journal\":{\"name\":\"2023 IEEE Conference on Artificial Intelligence (CAI)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE Conference on Artificial Intelligence (CAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CAI54212.2023.00155\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Conference on Artificial Intelligence (CAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAI54212.2023.00155","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The acceleration of Artificial Intelligence (AI) has brought forward new digital tools that have had a wide impact across society. However, AI digital tools (such as ChatGPT, Midjourney, and DALL-E 2) have raised legal and ethical concerns. Internationally, public and private leaders are introducing regulatory frameworks to address data governance for these AI digital tools (e.g., the General Data Protection Regulation, the European AI Act, the Blueprint for an AI Bill of Rights, and the NIST Risk Management Framework). We recognize that these AI digital tools are a vital aspect of future technological development, but they require input from various sectors in addressing ethics and compliance design. We survey the current landscape of published AI-specific regulatory frameworks and known engineering design process methods. Using a product lifecycle approach, we also introduce a trans-disciplinary framework to address AI ethics and compliance via design. This product lifecycle approach rests on several principles: Human-Centered Design for Risk Assessment, Functional Safety and Risk Management Standardization, and Continuous Governance throughout the Product Lifecycle. Establishing risk management throughout AI product lifecycles can ensure accountability for AI product use cases. In addition, by drawing on established Functional Safety practice, we can create safety mechanisms throughout the product lifecycle of AI digital tools. Finally, establishing in-field testing for continuous governance will provide the flexibility to adopt new compliance standards and improve transparency. We establish this governance framework to aid new compliance strategies for these emerging issues with AI digital tools.
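To give the lifecycle-governance idea a concrete shape, the following is a minimal sketch, not taken from the paper, of how per-stage compliance checkpoints could be recorded and evaluated. All class names, fields, and the two example checks are illustrative assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List


class LifecycleStage(Enum):
    # Illustrative stages of an AI digital tool's product lifecycle.
    DESIGN = auto()
    DEVELOPMENT = auto()
    DEPLOYMENT = auto()
    IN_FIELD_OPERATION = auto()
    RETIREMENT = auto()


@dataclass
class ComplianceCheck:
    # One governance checkpoint, tied to the regulation or standard it traces to.
    name: str
    reference: str                 # e.g. "European AI Act" or "NIST Risk Management Framework"
    passed: Callable[[], bool]     # evidence-gathering hook supplied by the product team


@dataclass
class StageGate:
    # The set of checks that must pass before a lifecycle stage can be exited.
    stage: LifecycleStage
    checks: List[ComplianceCheck] = field(default_factory=list)

    def failed_checks(self) -> List[str]:
        # Returns the names of failing checks; an empty list means the gate is open.
        return [c.name for c in self.checks if not c.passed()]


# Hypothetical usage: a design-stage gate with two checks.
design_gate = StageGate(
    stage=LifecycleStage.DESIGN,
    checks=[
        ComplianceCheck("human-centered risk assessment recorded",
                        "NIST Risk Management Framework", lambda: True),
        ComplianceCheck("intended-use and misuse cases documented",
                        "European AI Act", lambda: False),
    ],
)

blocked = design_gate.failed_checks()
print(f"{design_gate.stage.name}: " + ("gate open" if not blocked else f"blocked by {blocked}"))
```

The lambda hooks stand in for whatever evidence gathering a real team would wire in; the point of the sketch is only that each lifecycle stage carries an explicit, auditable gate, in line with the abstract's call for continuous governance and accountability across the product lifecycle.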