MSV: Contribution of Modalities based on the Shapley Value
Jangyeong Jeon, Jungeun Kim, Jinwoo Park, Junyeong Kim
2024 IEEE International Conference on Consumer Electronics (ICCE), pp. 1-6, published 2024-01-06.
DOI: 10.1109/ICCE59016.2024.10444313
Abstract
Recently, with the remarkable development of deep learning, increasingly complex tasks arising from real-world applications have driven a shift from single-modality learning to multi-modality comprehension. This also means that the need for models capable of processing comprehensive information from multi-modal datasets has increased. In multimodal tasks, proper interaction and fusion between different modalities, such as language, vision, sensory data, and text, play an important role in accurate prediction and identification. Therefore, detecting flaws introduced by individual modalities when all modalities are combined is of utmost importance. However, the complex, opaque, black-box nature of such models makes it challenging to understand how the model works and what impact individual modalities have, especially in complicated multimodal tasks. To address this issue, we directly employ the method presented in previous work and apply it to the Visual Commonsense Generation task to quantify the contribution of different modalities. In this paper, we introduce the Contribution of Modalities based on the Shapley Value (MSV) score, a metric designed to measure the marginal contribution of each modality. Drawing inspiration from previous studies that applied the Shapley value to modalities, we extend its application to the "Visual Commonsense Generation" task. In experiments conducted on three-modality tasks, our score offers enhanced interpretability for the multi-modal model.
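The metric described above is the Shapley-value marginal contribution of each modality. The sketch below is a minimal illustration of how such a contribution could be computed exactly by enumerating modality subsets; the function name `shapley_contributions`, the `value_fn` interface, the modality names, and the toy scores are assumptions made for illustration, not the authors' implementation.

```python
from itertools import combinations
from math import factorial

def shapley_contributions(modalities, value_fn):
    """Exact Shapley value of each modality.

    modalities: list of modality names, e.g. ["vision", "text"].
    value_fn: maps a frozenset of modalities to a scalar score
              (e.g. a validation metric obtained when only those
              modalities are provided and the rest are masked).
    """
    n = len(modalities)
    phi = {m: 0.0 for m in modalities}
    for m in modalities:
        others = [x for x in modalities if x != m]
        # Sum the weighted marginal gains of adding m to every coalition S
        # that does not already contain m.
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[m] += weight * (value_fn(s | {m}) - value_fn(s))
    return phi

# Toy usage with hypothetical per-coalition scores.
scores = {
    frozenset(): 0.0,
    frozenset({"vision"}): 0.40,
    frozenset({"text"}): 0.35,
    frozenset({"vision", "text"}): 0.60,
}
print(shapley_contributions(["vision", "text"], lambda s: scores[s]))
# {'vision': 0.325, 'text': 0.275} -- the two contributions sum to
# v(all modalities) - v(empty set), as the Shapley value guarantees.
```

Exact computation enumerates 2^(n-1) coalitions per modality, which is tractable for the two or three modalities considered here; larger modality sets would call for a sampling-based approximation.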