SDDA: A progressive self-distillation with decoupled alignment for multimodal image–text classification
Xiaohao Chen, Qianjun Shuai, Feng Hu, Yongqiang Cheng
Neurocomputing, Volume 614, Article 128794 (published 2024-10-26)
DOI: 10.1016/j.neucom.2024.128794
URL: https://www.sciencedirect.com/science/article/pii/S0925231224015650
Citation count: 0
Abstract
Multimodal image–text classification aims to infer the correct category from the information contained in image–text pairs. Despite the strong performance of current image–text methods, intrinsic multimodal heterogeneity remains a challenge, as the contributions of the different modalities vary considerably. In this study, we address this issue by introducing a novel decoupled multimodal self-distillation approach, Self-Distillation with Decoupled Alignment (SDDA), which enables fine-grained alignment of the shared and private components of image–text features in a low-dimensional space, thereby reducing information redundancy. Specifically, each modality representation is decoupled in an autoregressive manner into two segments within a modality-irrelevant/exclusive space. SDDA imparts additional knowledge transfer to each decoupled segment via self-distillation, while also offering flexible, richer multimodal knowledge supervision for unimodal features. Multimodal classification experiments on two publicly available benchmark datasets verify the efficacy of the algorithm, demonstrating that SDDA surpasses state-of-the-art baselines.
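The decouple-align-distill idea described in the abstract can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's actual formulation: the halving split, the mean-squared alignment loss, the temperature value, and all function names are hypothetical stand-ins for SDDA's learned decoupling and distillation objectives.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def decouple(embedding):
    """Split one modality embedding into shared and private segments.
    (Hypothetical: the paper learns this decoupling; here we just halve.)"""
    mid = len(embedding) // 2
    return embedding[:mid], embedding[mid:]

def alignment_loss(shared_img, shared_txt):
    """Mean squared distance pulling the two shared segments together,
    standing in for the paper's fine-grained cross-modal alignment."""
    return sum((a - b) ** 2 for a, b in zip(shared_img, shared_txt)) / len(shared_img)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Soft-target self-distillation: the fused multimodal 'teacher'
    supervises a unimodal 'student' via temperature-scaled KL divergence."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return (temperature ** 2) * kl_divergence(p, q)

# Toy usage: decouple two modality embeddings, align their shared parts,
# and distill fused-model logits into an image-only classifier head.
img_shared, img_private = decouple([0.9, 0.1, 0.4, 0.7])
txt_shared, txt_private = decouple([0.8, 0.2, -0.3, 0.5])
loss = alignment_loss(img_shared, txt_shared) \
     + distillation_loss([2.0, 0.5, -1.0], [1.5, 0.8, -0.7])
```

In this toy setup the two loss terms play the roles the abstract describes: the alignment term reduces redundancy between the modalities' shared subspaces, while the distillation term gives the unimodal branch extra multimodal supervision.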
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Theory, practice, and applications of neurocomputing are the essential topics covered.