Désirée Martin, Michael W. Schmidt, Rafaela Hillerbrand
{"title":"实现人工智能伦理:VPCIO模型","authors":"Désirée Martin, Michael W. Schmidt, Rafaela Hillerbrand","doi":"10.1007/s43681-025-00723-7","DOIUrl":null,"url":null,"abstract":"<div><p>Due to the development and use of artificial intelligence (AI) systems, there is a need for normative guidance on AI technology. Building on reasonably shared and systematized ethical values and principles (Martin et al. in Comparing AI ethics and AI regulation: ethical values and principles and the case of well-being, beneficence and sustainability, In: Müller, Dung, Dewey, Löhr (Eds.) Philosophy of artificial intelligence: the state of art, synthese library, Springer, Berlin, forthcoming), we aim to provide a framework for implementing ethics in AI systems. The research question in this paper is how to transfer values and principles to an AI system in a way that is understandable and evaluable for users, stakeholders, or an oversight body. Therefore, we work out how to translate values and principles into more concrete norms that can be implemented by the developer and monitored by the executive. Based on our systematization, we extend the so-called VCIO model, where VCIO stands for values, criteria, indicators and observables, as presented by Hallensleben et al. (From principles to practice—an interdisciplinary framework to operationalise AI ethics. VDE, Bertelsmann Stiftung, Frankfurt a. M./Gütersloh. https://www.ai-ethics-impact.org, 2020). Our contribution includes modifications to the model and, most importantly, the addition of principles. Building on this methodology, we present a model that is highly acceptable, the VPCIO model. We developed and evaluated the VPCIO for two case studies. The main case study is an AI-assisted robot used for reconnaissance of radiological hazards (based on a BMBF funded project, entitled KIARA (https://www.itas.kit.edu/english/projects_hill22_kiara.php)). The second case study is about an AI system in an entertaining context, namely to swap faces. 
Implementing the ethical aspects in these cases into the VPCIO model results in an indicator system that illustrates how ethical aspects can be transferred to an AI system in an understandable way.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4975 - 4992"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00723-7.pdf","citationCount":"0","resultStr":"{\"title\":\"Implementing AI ethics: the VPCIO model\",\"authors\":\"Désirée Martin, Michael W. Schmidt, Rafaela Hillerbrand\",\"doi\":\"10.1007/s43681-025-00723-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Due to the development and use of artificial intelligence (AI) systems, there is a need for normative guidance on AI technology. Building on reasonably shared and systematized ethical values and principles (Martin et al. in Comparing AI ethics and AI regulation: ethical values and principles and the case of well-being, beneficence and sustainability, In: Müller, Dung, Dewey, Löhr (Eds.) Philosophy of artificial intelligence: the state of art, synthese library, Springer, Berlin, forthcoming), we aim to provide a framework for implementing ethics in AI systems. The research question in this paper is how to transfer values and principles to an AI system in a way that is understandable and evaluable for users, stakeholders, or an oversight body. Therefore, we work out how to translate values and principles into more concrete norms that can be implemented by the developer and monitored by the executive. Based on our systematization, we extend the so-called VCIO model, where VCIO stands for values, criteria, indicators and observables, as presented by Hallensleben et al. (From principles to practice—an interdisciplinary framework to operationalise AI ethics. VDE, Bertelsmann Stiftung, Frankfurt a. M./Gütersloh. 
https://www.ai-ethics-impact.org, 2020). Our contribution includes modifications to the model and, most importantly, the addition of principles. Building on this methodology, we present a model that is highly acceptable, the VPCIO model. We developed and evaluated the VPCIO for two case studies. The main case study is an AI-assisted robot used for reconnaissance of radiological hazards (based on a BMBF funded project, entitled KIARA (https://www.itas.kit.edu/english/projects_hill22_kiara.php)). The second case study is about an AI system in an entertaining context, namely to swap faces. Implementing the ethical aspects in these cases into the VPCIO model results in an indicator system that illustrates how ethical aspects can be transferred to an AI system in an understandable way.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 5\",\"pages\":\"4975 - 4992\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s43681-025-00723-7.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-025-00723-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00723-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Due to the development and use of artificial intelligence (AI) systems, there is a need for normative guidance on AI technology. Building on reasonably shared and systematized ethical values and principles (Martin et al., "Comparing AI ethics and AI regulation: ethical values and principles and the case of well-being, beneficence and sustainability", in: Müller, Dung, Dewey, Löhr (Eds.), Philosophy of Artificial Intelligence: The State of the Art, Synthese Library, Springer, Berlin, forthcoming), we aim to provide a framework for implementing ethics in AI systems. The research question of this paper is how to transfer values and principles to an AI system in a way that is understandable and evaluable for users, stakeholders, or an oversight body. To that end, we work out how to translate values and principles into more concrete norms that can be implemented by the developer and monitored by the executive. Based on our systematization, we extend the so-called VCIO model, where VCIO stands for values, criteria, indicators, and observables, as presented by Hallensleben et al. (From Principles to Practice: An Interdisciplinary Framework to Operationalise AI Ethics. VDE, Bertelsmann Stiftung, Frankfurt a. M./Gütersloh, https://www.ai-ethics-impact.org, 2020). Our contribution includes modifications to that model and, most importantly, the addition of principles. Building on this methodology, we present a highly acceptable model, the VPCIO model. We developed and evaluated the VPCIO model in two case studies. The main case study is an AI-assisted robot used for reconnaissance of radiological hazards, based on the BMBF-funded project KIARA (https://www.itas.kit.edu/english/projects_hill22_kiara.php). The second case study concerns an AI system in an entertainment context, namely face swapping.
Implementing the ethical aspects of these cases in the VPCIO model yields an indicator system that illustrates how ethical aspects can be transferred to an AI system in an understandable way.
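To make the layered structure of such an indicator system concrete, the VPCIO hierarchy (values → principles → criteria → indicators → observables) could be represented in software roughly as nested records. The following is a minimal sketch only: the class design, the example value "transparency", and the all-or-nothing satisfaction check are illustrative assumptions, not the authors' actual operationalization.

```python
from dataclasses import dataclass, field

@dataclass
class Observable:
    """Bottom layer: a directly checkable property of the AI system."""
    description: str
    measured: bool = False  # has this observable been verified?

@dataclass
class Indicator:
    """An indicator aggregates observables; here it is satisfied
    only when all of its observables have been measured (an
    assumed, simplistic aggregation rule)."""
    name: str
    observables: list[Observable] = field(default_factory=list)

    def satisfied(self) -> bool:
        return all(o.measured for o in self.observables)

@dataclass
class Criterion:
    name: str
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class Principle:
    """The 'P' added to the original VCIO model."""
    name: str
    criteria: list[Criterion] = field(default_factory=list)

@dataclass
class Value:
    """Top of the hierarchy: value -> principles -> criteria
    -> indicators -> observables."""
    name: str
    principles: list[Principle] = field(default_factory=list)

# Hypothetical example: one chain through the hierarchy for the
# value "transparency" (example content invented for illustration).
obs = Observable("Model card published alongside the system", measured=True)
ind = Indicator("Documentation available", [obs])
crit = Criterion("Traceability of decisions", [ind])
prin = Principle("Explainability", [crit])
val = Value("Transparency", [prin])

print(ind.satisfied())  # True: the single observable has been measured
```

The point of such a structure is the one the abstract emphasizes: each abstract value is traceable, layer by layer, down to observables that a developer can implement and an oversight body can check.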