It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models' decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising their accuracy? We argue that the answer depends on which kind of opacity we have in mind. If we focus on the standard notion of opacity, which tracks the internal complexity of deep learning models, we argue that existing explainable AI (XAI) techniques already make the prospects look relatively good. But, as has recently been argued in the literature, there is another notion of opacity that concerns factors external to the model. We argue that there are at least two types of external opacity, link opacity and structure opacity, and that existing XAI techniques can to some extent help us reduce the former but not the latter.