Ethics of Artificial Intelligence in Society
Emma Johnson, Eloy Parrilla, Austin Burg
American Journal of Undergraduate Research, published 2023-03-31. DOI: 10.33697/ajur.2023.070

ABSTRACT: Artificial intelligence (AI) is becoming more prevalent every day as new technologies are presented to the public with the intent of integrating them into society. However, these systems are not perfect and are known to cause failures that affect a multitude of people. The purpose of this study is to explore how ethical guidelines are followed when AI is designed and implemented in society. Three ethics theories, nine ethical principles of AI, and the Agent-Deed-Consequence (ADC) model were used to analyze failures involving AI. When a system fails to follow the models listed, a set of refined ethical principles is created. By analyzing the failures, an understanding of how similar incidents may be prevented was gained. Additionally, the importance of making ethics part of AI programming was demonstrated, followed by recommendations for the future incorporation of ethics into AI. The term "failure" is used deliberately throughout the paper because of the nature of the events involving AI: the events are not necessarily "accidents," since the AI was intended to act in certain ways, but they are also not "malfunctions," because the AI in these examples was not internally compromised. For these reasons, the much broader term "failure" is used.

KEYWORDS: Ethics; Artificial Intelligence; Agent-Deed-Consequence (ADC) Model; Principles of Artificial Intelligence; Virtue Ethics; Deontology; Consequentialism; AI Systems