{"title":"Ethics dumping in artificial intelligence.","authors":"Jean-Christophe Bélisle-Pipon, Gavin Victor","doi":"10.3389/frai.2024.1426761","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial Intelligence (AI) systems encode not just statistical models and complex algorithms designed to process and analyze data, but also significant normative baggage. This ethical dimension, derived from the underlying code and training data, shapes the recommendations given, behaviors exhibited, and perceptions had by AI. These factors influence how AI is regulated, used, misused, and impacts end-users. The multifaceted nature of AI's influence has sparked extensive discussions across disciplines like Science and Technology Studies (STS), Ethical, Legal and Social Implications (ELSI) studies, public policy analysis, and responsible innovation-underscoring the need to examine AI's ethical ramifications. While the initial wave of AI ethics focused on articulating principles and guidelines, recent scholarship increasingly emphasizes the practical implementation of ethical principles, regulatory oversight, and mitigating unforeseen negative consequences. Drawing from the concept of \"ethics dumping\" in research ethics, this paper argues that practices surrounding AI development and deployment can, unduly and in a very concerning way, offload ethical responsibilities from developers and regulators to ill-equipped users and host environments. Four key trends illustrating such ethics dumping are identified: (1) AI developers embedding ethics through coded value assumptions, (2) AI ethics guidelines promoting broad or unactionable principles disconnected from local contexts, (3) institutions implementing AI systems without evaluating ethical implications, and (4) decision-makers enacting ethical governance frameworks disconnected from practice. Mitigating AI ethics dumping requires empowering users, fostering stakeholder engagement in norm-setting, harmonizing ethical guidelines while allowing flexibility for local variation, and establishing clear accountability mechanisms across the AI ecosystem.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1426761"},"PeriodicalIF":3.0000,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582056/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2024.1426761","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Artificial Intelligence (AI) systems encode not just statistical models and complex algorithms designed to process and analyze data, but also significant normative baggage. This ethical dimension, derived from the underlying code and training data, shapes the recommendations AI gives, the behaviors it exhibits, and the perceptions it forms. These factors influence how AI is regulated, used, and misused, and how it impacts end-users. The multifaceted nature of AI's influence has sparked extensive discussions across disciplines such as Science and Technology Studies (STS), Ethical, Legal and Social Implications (ELSI) studies, public policy analysis, and responsible innovation, underscoring the need to examine AI's ethical ramifications. While the initial wave of AI ethics focused on articulating principles and guidelines, recent scholarship increasingly emphasizes the practical implementation of ethical principles, regulatory oversight, and the mitigation of unforeseen negative consequences. Drawing on the concept of "ethics dumping" in research ethics, this paper argues that practices surrounding AI development and deployment can, unduly and in a deeply concerning way, offload ethical responsibilities from developers and regulators onto ill-equipped users and host environments. Four key trends illustrating such ethics dumping are identified: (1) AI developers embedding ethics through coded value assumptions, (2) AI ethics guidelines promoting broad or unactionable principles disconnected from local contexts, (3) institutions implementing AI systems without evaluating their ethical implications, and (4) decision-makers enacting ethical governance frameworks disconnected from practice. Mitigating AI ethics dumping requires empowering users, fostering stakeholder engagement in norm-setting, harmonizing ethical guidelines while allowing flexibility for local variation, and establishing clear accountability mechanisms across the AI ecosystem.