Towards the characterization of representations learned via capsule-based network architectures
Saja Tawalbeh, José Oramas
Neurocomputing, Volume 617, Article 129027
Published: 2024-11-28
DOI: 10.1016/j.neucom.2024.129027
URL: https://www.sciencedirect.com/science/article/pii/S0925231224017983
Impact Factor: 5.5 (JCR Q1, Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Capsule Neural Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks. While recent efforts have proved their compression capabilities, to date, their interpretability properties have not been fully assessed. Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks. We pay special attention to analyzing the level to which part-whole relationships are encoded within the learned representation. Our analysis of several capsule-based architectures on the MNIST, SVHN, CIFAR-10, and CelebA datasets suggests that the representations encoded in CapsNets might be neither as disentangled nor as strictly related to part-whole relationships as is commonly stated in the literature.
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.