Deep learning models and the limits of explainable artificial intelligence
Jens Christian Bjerring, Jakob Mainz, Lauritz Munch
Asian Journal of Philosophy, 4(1), published 2025-01-30. DOI: 10.1007/s44204-024-00238-8. PDF: https://link.springer.com/content/pdf/10.1007/s44204-024-00238-8.pdf
It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models' decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on which kind of opacity we have in mind. If we focus on the standard notion of opacity, which tracks the internal complexities of deep learning models, we argue that existing explainable AI (XAI) techniques show us that the prospects look relatively good. But, as has recently been argued in the literature, there is another notion of opacity that concerns factors external to the model. We argue that there are at least two types of external opacity, link opacity and structure opacity, and that existing XAI techniques can to some extent help us reduce the former but not the latter.