GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection

Yingdong Gou, Kexin Wang, Siwen Wei, Changxin Shi

International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 35(4), December 2023. DOI: 10.1142/s0218488523500435

Abstract: With the rapid expansion of social media as a means of fast communication, real-time disaster information is widely disseminated through these platforms. Determining which real-time, multi-modal disaster information can effectively support humanitarian aid has become a major challenge. In this paper, we propose a novel end-to-end model, GCN-based Multi-modal Domain Adaptation (GMDA), which consists of three essential modules: a GCN-based feature extraction module, an attention-based fusion module, and an MMD domain adaptation module. The GCN-based feature extraction module integrates text and image representations through graph convolutional networks (GCNs), and the attention-based fusion module then merges these multi-modal representations using an attention mechanism. Finally, the MMD domain adaptation module alleviates GMDA's dependence on source-domain events by computing the maximum mean discrepancy (MMD) across domains. Our proposed model has been extensively evaluated and has shown superior performance compared to state-of-the-art multi-modal domain adaptation models in terms of F1 score and variance stability.
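The abstract does not give the exact form of the paper's GCN layers, but the standard graph convolution they build on can be sketched as follows. This is a generic illustration, not the paper's implementation; the function name and the use of a ReLU nonlinearity are assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One standard GCN propagation step:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where A is the adjacency matrix, H the node features, W the weights.
    (Illustrative sketch; the paper's exact layer is not specified here.)"""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

With no edges, each node only sees its own (self-loop) features, so the layer reduces to a per-node linear map plus ReLU.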
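The attention-based fusion module merges the text and image representations with an attention mechanism. The abstract does not specify the scoring function, so the sketch below uses a simple learned scoring vector and a softmax-weighted sum over the two modalities; the names `attention_fuse` and `w` are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(text_vec, image_vec, w):
    """Fuse two modality vectors by softmax-weighted attention.
    (Illustrative sketch; the paper's exact scoring is not given here.)"""
    feats = np.stack([text_vec, image_vec])  # (2, d)
    scores = feats @ w                       # one scalar score per modality
    alpha = softmax(scores)                  # attention weights, sum to 1
    return alpha @ feats                     # (d,) fused representation
```

Because the weights sum to one, the fused vector is a convex combination of the modality representations, and the modality with the higher score dominates.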
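The MMD domain adaptation module computes the maximum mean discrepancy between source-domain and target-domain feature distributions. A common (biased) kernel estimate of squared MMD, shown below with an RBF kernel, illustrates the quantity involved; the paper's kernel choice and bandwidth are not stated in the abstract, so `gamma` and the function names are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=0.5):
    """Biased estimate of squared MMD between two feature batches.
    (Illustrative sketch; not the paper's exact formulation.)"""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

The estimate is zero when the two batches are identical and grows as the distributions drift apart, which is what makes it usable as a training penalty that discourages source-specific features.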