AI and Responsibility: No Gap, but Abundance
Maximilian Kiener
Journal of Applied Philosophy, vol. 9, no. 1 (published 2024-09-13)
DOI: 10.1111/japp.12765
Abstract. The best-performing AI systems, such as deep neural networks, tend to be the ones that are most difficult to control and understand. For this reason, scholars worry that the use of AI will lead to so-called responsibility gaps, that is, situations in which no one is morally responsible for the harm caused by AI, because no one satisfies the so-called control condition and epistemic condition of moral responsibility. In this article, I acknowledge that there is a significant challenge around responsibility and AI. Yet I don't think that this challenge is best captured in terms of a responsibility gap. Instead, I argue for the opposite view, namely that there is responsibility abundance, that is, a situation in which numerous agents are responsible for the harm caused by AI, and that the challenge comes from the difficulties of dealing with such abundance in practice. I conclude by arguing that reframing the challenge in this way offers distinct dialectical and theoretical advantages, promising to help overcome some obstacles in the current debate surrounding 'responsibility gaps'.