{"title":"在使用人工智能作为设计材料时系统应用人文伦理的框架","authors":"Kyle D. Dent, Richelle Dumond, Mike Kuniavsky","doi":"10.46467/tdd35.2019.178-197","DOIUrl":null,"url":null,"abstract":"As machine learning and AI systems gain greater capabilities and are deployed more widely, we – as designers, developers, and researchers – must consider both the positive and negative implications of their use. In light of this, PARC’s researchers recognize the need to be vigilant against the potential for harm caused by artificial intelligence through intentional or inadvertent discrimination, unjust treatment, or physical danger that might occur against individuals or groups of people. Because AI-supported and autonomous decision making has the potential for widespread negative personal, social, and environmental effects, we aim to take a proactive stance to uphold human rights, respect individuals’ privacy, protect personal data, and enable freedom of expression and equality. \nTechnology is not inherently neutral and reflects decisions and trade-offs made by the designers, researchers, and engineers developing it and using it in their work. Datasets often reflect historical biases. AI technologies that hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious examples of possible areas for systematic algorithmic errors that result in unfair or unjust treatment. Because nearly all technology includes trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.","PeriodicalId":34368,"journal":{"name":"Temes de Disseny","volume":"43 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A framework for systematically applying humanistic ethics when using AI as a design material\",\"authors\":\"Kyle D. Dent, Richelle Dumond, Mike Kuniavsky\",\"doi\":\"10.46467/tdd35.2019.178-197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As machine learning and AI systems gain greater capabilities and are deployed more widely, we – as designers, developers, and researchers – must consider both the positive and negative implications of their use. In light of this, PARC’s researchers recognize the need to be vigilant against the potential for harm caused by artificial intelligence through intentional or inadvertent discrimination, unjust treatment, or physical danger that might occur against individuals or groups of people. Because AI-supported and autonomous decision making has the potential for widespread negative personal, social, and environmental effects, we aim to take a proactive stance to uphold human rights, respect individuals’ privacy, protect personal data, and enable freedom of expression and equality. \\nTechnology is not inherently neutral and reflects decisions and trade-offs made by the designers, researchers, and engineers developing it and using it in their work. Datasets often reflect historical biases. AI technologies that hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious examples of possible areas for systematic algorithmic errors that result in unfair or unjust treatment. 
Because nearly all technology includes trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.\",\"PeriodicalId\":34368,\"journal\":{\"name\":\"Temes de Disseny\",\"volume\":\"43 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Temes de Disseny\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.46467/tdd35.2019.178-197\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Temes de Disseny","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.46467/tdd35.2019.178-197","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
A framework for systematically applying humanistic ethics when using AI as a design material
As machine learning and AI systems gain greater capabilities and are deployed more widely, we – as designers, developers, and researchers – must consider both the positive and negative implications of their use. In light of this, PARC’s researchers recognize the need to be vigilant against the potential for artificial intelligence to harm individuals or groups of people through intentional or inadvertent discrimination, unjust treatment, or physical danger. Because AI-supported and autonomous decision making has the potential for widespread negative personal, social, and environmental effects, we aim to take a proactive stance to uphold human rights, respect individuals’ privacy, protect personal data, and enable freedom of expression and equality.
Technology is not inherently neutral; it reflects the decisions and trade-offs made by the designers, researchers, and engineers who develop it and use it in their work. Datasets often reflect historical biases. AI technologies used to hire people, evaluate their job performance, deliver their healthcare, and mete out penalties are obvious areas where systematic algorithmic errors can result in unfair or unjust treatment. Because nearly all technology involves trade-offs and embodies the values and judgments of the people creating it, it is imperative that researchers are aware of the value judgments they make and are transparent about them with all stakeholders involved.
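One concrete way to act on this concern is to audit a model’s decisions for disparities across groups before deployment. The sketch below is a minimal illustration of that idea, not part of the paper’s framework; the group labels, the logged decisions, and the choice of demographic parity as the metric are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical audit records: (protected group, model's hire decision).
# In practice these would come from a logged evaluation set.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Return the fraction of positive (hire) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Demographic parity gap: difference between the most- and
# least-selected groups' positive-decision rates.
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")   # parity gap: 0.50
```

A gap of this size would not by itself prove unjust treatment, but it surfaces a value judgment that should be made explicit and discussed with stakeholders rather than left implicit in the model.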