A framework for systematically applying humanistic ethics when using AI as a design material

Kyle D. Dent, Richelle Dumond, Mike Kuniavsky

Journal: Temes de Disseny (Q1, Arts and Humanities)
DOI: 10.46467/tdd35.2019.178-197
Published: 2019-07-25
Citations: 0

Abstract

As machine learning and AI systems gain greater capabilities and are deployed more widely, we – as designers, developers, and researchers – must consider both the positive and negative implications of their use. In light of this, PARC’s researchers recognize the need to be vigilant against the potential for harm caused by artificial intelligence through intentional or inadvertent discrimination, unjust treatment, or physical danger that might occur against individuals or groups of people. Because AI-supported and autonomous decision making has the potential for widespread negative personal, social, and environmental effects, we aim to take a proactive stance to uphold human rights, respect individuals’ privacy, protect personal data, and enable freedom of expression and equality. Technology is not inherently neutral and reflects decisions and trade-offs made by the designers, researchers, and engineers developing it and using it in their work. Datasets often reflect historical biases. AI technologies that hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious examples of possible areas for systematic algorithmic errors that result in unfair or unjust treatment. Because nearly all technology includes trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.
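
The abstract names hiring and performance-evaluation systems as places where models trained on historically biased data can produce systematically unfair outcomes. As a minimal, hypothetical illustration (not a method from the paper), the Python sketch below audits a screening model's decisions by comparing selection rates across groups and computing a disparate impact ratio; all group labels and decision data are invented for the example.

# Hypothetical audit sketch: compare a screening model's positive-outcome
# rates across groups. Not drawn from the paper; data and names are invented.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the rate of positive outcomes per group.

    decisions: iterable of (group_label, hired_bool) pairs.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Decisions from a hypothetical model trained on historical hiring data
    # that itself reflected biased past decisions.
    model_decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = selection_rates(model_decisions)
    print("selection rates:", rates)
    print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))

A ratio well below 1.0 (a common screening threshold is 0.8) would flag the kind of systematic disparity the authors argue designers and researchers must surface and discuss transparently with stakeholders.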