Moral Agency and Responsibility in AI Systems

Luiz Saraiva
International Journal of Philosophy, Vol. 134, No. 2. Published 2024-05-03. DOI: 10.47941/ijp.1867. Citations: 0.

Abstract

Purpose: The general objective of this study was to explore moral agency and responsibility in AI systems.

Methodology: The study adopted a desk research methodology. Desk research refers to the collection of secondary data, that is, data that can be gathered without fieldwork. Because it draws on existing resources, it is generally considered a low-cost technique compared with field research, the main costs being researchers' time and access to sources. The study therefore relied on already published studies, reports, and statistics, which were readily accessed through online journals and libraries.

Findings: The findings reveal a contextual and methodological gap in the literature on moral agency and responsibility in AI systems. A preliminary empirical review indicated that AI systems possess a form of moral agency, albeit one different from that of human agents, and that promoting transparency and accountability is crucial to ensuring ethical decision-making. Interdisciplinary collaboration and stakeholder engagement were emphasized as means of addressing ethical challenges. Ultimately, the study highlighted the importance of upholding ethical principles to ensure that AI systems contribute positively to society.

Unique Contribution to Theory, Practice and Policy: Utilitarianism, Kantianism, and Aristotelian virtue ethics may be used to anchor future studies on moral agency and responsibility in AI systems. The study provided a nuanced analysis of moral agency in AI systems and offered practical recommendations for developers, policymakers, and other stakeholders. It emphasized the importance of integrating ethical considerations into AI development and deployment, advocating transparency, accountability, and regulatory frameworks to address ethical challenges. Its insights inform interdisciplinary collaboration and ethical reflection, shaping the discourse on responsible AI innovation and governance.
Keywords: Moral Agency, Responsibility, AI Systems, Ethics, Decision-Making, Framework, Analysis, Regulation, Governance, Transparency, Accountability, Interdisciplinary, Innovation, Deployment, Stakeholders