{"title":"Multimodal Co-Attention Mechanism for One-stage Visual Grounding","authors":"Zhihan Yu, Mingcong Lu, Ruifan Li","doi":"10.1109/CCIS57298.2022.10016352","DOIUrl":null,"url":null,"abstract":"Visual grounding aims to locate a specific region in a given image guided by a natural language query. It relies on the alignment of visual information and text semantics in a fine-grained fashion. We propose a one-stage visual grounding model based on cross-modal feature fusion, which regards the task as a coordinate regression problem and implement an end-to-end optimization. The coordinates of bounding box are directly predicted by the fusion features, but previous fusion methods such as element-wise product, summation, and concatenation are too simple to combine the deep information within feature vectors. In order to improve the quality of the fusion features, we incorporate co-attention mechanism to deeply transform the representations from two modalities. We evaluate our grounding model on publicly available datasets, including Flickr30k Entities, RefCOCO, RefCOCO+ and RefCOCOg. Quantitative evaluation results show that co-attention mechanism plays a positive role in multi-modal feature fusion for the task of visual grounding.","PeriodicalId":374660,"journal":{"name":"2022 IEEE 8th International Conference on Cloud Computing and Intelligent Systems (CCIS)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 8th International Conference on Cloud Computing and Intelligent Systems (CCIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCIS57298.2022.10016352","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Visual grounding aims to locate a specific region in a given image guided by a natural language query. It relies on fine-grained alignment between visual information and text semantics. We propose a one-stage visual grounding model based on cross-modal feature fusion, which regards the task as a coordinate regression problem and implements end-to-end optimization. The bounding-box coordinates are predicted directly from the fused features; however, previous fusion methods such as element-wise product, summation, and concatenation are too simple to exploit the deep information within the feature vectors. To improve the quality of the fused features, we incorporate a co-attention mechanism that deeply transforms the representations of the two modalities. We evaluate our grounding model on publicly available datasets, including Flickr30k Entities, RefCOCO, RefCOCO+ and RefCOCOg. Quantitative results show that the co-attention mechanism plays a positive role in multimodal feature fusion for visual grounding.
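
The kind of co-attention fusion the abstract describes can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the single cross-attention layer per direction, the feature dimension of 256, mean pooling, and the (cx, cy, w, h) output convention are all assumptions made for the example.

```python
# Minimal sketch of co-attention-based cross-modal fusion for bounding-box
# coordinate regression. Layer choices and dimensions are illustrative only.
import torch
import torch.nn as nn


class CoAttentionFusion(nn.Module):
    """Fuses visual and text tokens with bidirectional cross-attention,
    then regresses normalized box coordinates (cx, cy, w, h)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Vision attends to text, and text attends to vision (co-attention).
        self.vis2txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt2vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)
        # Regression head maps the pooled fused feature to 4 coordinates.
        self.box_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 4), nn.Sigmoid()
        )

    def forward(self, vis_tokens: torch.Tensor, txt_tokens: torch.Tensor):
        # vis_tokens: (B, Nv, dim) flattened visual feature map
        # txt_tokens: (B, Nt, dim) query word embeddings
        v_att, _ = self.vis2txt(vis_tokens, txt_tokens, txt_tokens)  # vision queries text
        t_att, _ = self.txt2vis(txt_tokens, vis_tokens, vis_tokens)  # text queries vision
        v = self.norm_v(vis_tokens + v_att).mean(dim=1)  # pooled fused visual feature
        t = self.norm_t(txt_tokens + t_att).mean(dim=1)  # pooled fused text feature
        return self.box_head(torch.cat([v, t], dim=-1))  # (B, 4) in [0, 1]


if __name__ == "__main__":
    model = CoAttentionFusion()
    vis = torch.randn(2, 400, 256)   # e.g. a 20x20 feature map, flattened
    txt = torch.randn(2, 12, 256)    # e.g. a 12-token language query
    print(model(vis, txt).shape)     # torch.Size([2, 4])
```

In contrast to element-wise product, summation, or concatenation, each modality here re-weights the other's tokens before pooling, which is the intuition behind using co-attention for the fusion step.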