Responsible (use of) AI

Joseph B. Lyons, Kerianne Hobbs, Steve Rogers, Scott H. Clouse
{"title":"Responsible (use of) AI","authors":"Joseph B. Lyons, Kerianne Hobbs, Steve Rogers, Scott H. Clouse","doi":"10.3389/fnrgo.2023.1201777","DOIUrl":null,"url":null,"abstract":"Although there is a rich history of philosophical definitions of ethics when applied to human behavior, applying the same concepts and principles to AI may be fraught with problems. Anthropomorphizing AI to have characteristics such as “ethics” may promote a dangerous, unrealistic expectation that AI can be trained to have inherent, guaranteed ethical behavior. The authors instead advocate for increased research into the ethical use of AI from initial ideation and design through operational use and sustainment. The authors advocate for five key research areas: (1) education in ethics and core AI concepts for AI developers, leaders, and users, (2) development and use of model cards or datasheets for datasets to provide transparency into the strengths, limits, and potential biases of a trained model, (3) employing human-centered design that seeks to understand human value structures within a task context and enable effective human-machine interaction through intuitive and transparent interfaces, (4) targeted use of run time assurance that monitors and modifies the inputs or outputs of a trained model when necessary to enforce ethical principles such as safety or limiting bias, and (5) developing best practices for the use of a joint human-AI co-creation and training experience to enable a shared mental model and higher performance through potential emergent behavior.","PeriodicalId":507972,"journal":{"name":"Frontiers in Neuroergonomics","volume":"40 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Neuroergonomics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fnrgo.2023.1201777","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Although there is a rich history of philosophical definitions of ethics when applied to human behavior, applying the same concepts and principles to AI may be fraught with problems. Anthropomorphizing AI to have characteristics such as “ethics” may promote a dangerous, unrealistic expectation that AI can be trained to have inherent, guaranteed ethical behavior. The authors instead advocate for increased research into the ethical use of AI from initial ideation and design through operational use and sustainment. The authors advocate for five key research areas: (1) education in ethics and core AI concepts for AI developers, leaders, and users, (2) development and use of model cards or datasheets for datasets to provide transparency into the strengths, limits, and potential biases of a trained model, (3) employing human-centered design that seeks to understand human value structures within a task context and enable effective human-machine interaction through intuitive and transparent interfaces, (4) targeted use of run time assurance that monitors and modifies the inputs or outputs of a trained model when necessary to enforce ethical principles such as safety or limiting bias, and (5) developing best practices for the use of a joint human-AI co-creation and training experience to enable a shared mental model and higher performance through potential emergent behavior.
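
To make research area (2) concrete, a model card or datasheet can be as simple as a structured record that travels with the trained model and states what it was built from and where it should not be trusted. The Python sketch below is a minimal, hypothetical representation; the field names are a common subset of published model-card templates, and the example values are invented, not a schema or system described in this paper.

    # Minimal, illustrative model-card record; fields and values are assumptions,
    # not the paper's schema.
    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        name: str
        intended_use: str
        training_data: str
        evaluation_data: str
        known_limitations: list[str] = field(default_factory=list)
        potential_biases: list[str] = field(default_factory=list)

    card = ModelCard(
        name="pedestrian-detector-v2",
        intended_use="Daytime pedestrian detection for driver-assist alerts only.",
        training_data="Urban dashcam footage, northern-hemisphere cities, 2019-2021.",
        evaluation_data="Held-out footage from the same collection campaign.",
        known_limitations=["Untested at night or in heavy rain."],
        potential_biases=["Under-represents pedestrians using mobility aids."],
    )

Research area (4), run time assurance, can likewise be pictured as a thin wrapper that checks a trained model's output against an independently verifiable constraint and substitutes a conservative fallback action when the check fails. The monitor, safety check, and fallback below are illustrative assumptions under a toy speed-limit constraint, not the authors' implementation.

    # Run-time assurance sketch: a monitor wraps a trained model and overrides its
    # output when a safety constraint would be violated. All names and the speed
    # limit are illustrative only.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class SafetyMonitor:
        model: Callable[[Any], float]          # untrusted trained model
        is_safe: Callable[[Any, float], bool]  # independent, verifiable safety check
        fallback: Callable[[Any], float]       # simple, verified backup behavior

        def act(self, observation: Any) -> float:
            proposed = self.model(observation)
            # Pass the learned output through only if the monitor can verify it;
            # otherwise substitute the conservative fallback action.
            if self.is_safe(observation, proposed):
                return proposed
            return self.fallback(observation)

    # Hypothetical use: cap a commanded speed at 5.0.
    monitor = SafetyMonitor(
        model=lambda obs: obs["requested_speed"] * 1.2,  # stand-in for a trained policy
        is_safe=lambda obs, cmd: cmd <= 5.0,
        fallback=lambda obs: min(obs["requested_speed"], 5.0),
    )
    print(monitor.act({"requested_speed": 6.0}))  # overridden to 5.0

In both sketches the point is architectural: the documentation and the monitor sit outside the learned component, so ethical constraints such as safety or bias limits are enforced by the surrounding system rather than assumed to be internalized by the model.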