{"title":"DSAMT: Dual-Source Aligned Multimodal Transformers for TextCaps","authors":"Chenyang Liao, Ruifang Liu, Sheng Gao","doi":"10.1109/IC-NIDC54101.2021.9660575","DOIUrl":null,"url":null,"abstract":"When generating captions for images, previous caption methods tend to consider the visual features of the image but ignore the Optical Character Recognition (OCR) in it, which makes the generated caption lack text information in the image. By integrating OCR modal as well as visual modal into caption prediction, TextCaps task is aimed at producing concise sentences recapitulating the image and the text information. We propose Dual-Source Aligned Multimodal Transformers (DSAMT), which utilize words from two sources (object tags and OCR tokens) as the supplement to vocabulary. These extra words are applied to align caption embedding and visual embedding through randomly masking some tokens in caption and calculating the masked token loss. A new object detection module is used in DSAMT to extract image visual features and object tags on TextCaps. We additionally use BERTSCORE to evaluate our predictions. We demonstrate our approach achieves superior results compared to state-of-the-art models on TextCaps dataset.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
When generating captions for images, previous captioning methods tend to consider only the visual features of the image and ignore the Optical Character Recognition (OCR) text it contains, so the generated captions lack the textual information present in the image. By integrating the OCR modality as well as the visual modality into caption prediction, the TextCaps task aims to produce concise sentences that recapitulate both the image and its text. We propose Dual-Source Aligned Multimodal Transformers (DSAMT), which utilize words from two sources (object tags and OCR tokens) as a supplement to the vocabulary. These extra words are used to align the caption embedding with the visual embedding by randomly masking some tokens in the caption and computing a masked-token loss. A new object detection module is used in DSAMT to extract image visual features and object tags on TextCaps. We additionally use BERTScore to evaluate our predictions. We demonstrate that our approach achieves superior results compared with state-of-the-art models on the TextCaps dataset.
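To illustrate the masked-token alignment idea described above, the following is a minimal sketch, not the authors' implementation: caption tokens are concatenated with OCR-token and object-tag ids and with projected visual region features, some caption tokens are randomly masked, and a cross-entropy loss is computed only on the masked positions. The vocabulary size, [MASK] id, hidden size, masking probability, and the tiny transformer encoder are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the DSAMT code) of a masked-token loss that
# aligns caption embeddings with OCR tokens, object tags, and visual features.

import torch
import torch.nn as nn

VOCAB_SIZE = 30522      # assumed BERT-style vocabulary size
MASK_ID = 103           # assumed [MASK] token id
HIDDEN = 256            # illustrative hidden size
MASK_PROB = 0.15        # assumed masking probability


class DualSourceMaskedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.visual_proj = nn.Linear(2048, HIDDEN)   # project region features
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, caption_ids, ocr_ids, tag_ids, visual_feats):
        # Randomly mask some caption tokens; only these positions contribute
        # to the masked-token loss.
        mask = torch.rand(caption_ids.shape) < MASK_PROB
        labels = caption_ids.masked_fill(~mask, -100)   # ignore unmasked slots
        masked_caption = caption_ids.masked_fill(mask, MASK_ID)

        # Embed the three word sources and the visual regions, then
        # concatenate them into a single multimodal sequence.
        tokens = torch.cat([masked_caption, ocr_ids, tag_ids], dim=1)
        text_emb = self.word_emb(tokens)
        vis_emb = self.visual_proj(visual_feats)
        seq = torch.cat([text_emb, vis_emb], dim=1)

        hidden = self.encoder(seq)
        # Predict only over the caption positions.
        logits = self.lm_head(hidden[:, :caption_ids.size(1)])
        return nn.functional.cross_entropy(
            logits.reshape(-1, VOCAB_SIZE), labels.reshape(-1),
            ignore_index=-100)


# Toy usage with random tensors standing in for real tokenized inputs
# and detector region features.
model = DualSourceMaskedLM()
caption = torch.randint(0, VOCAB_SIZE, (2, 20))
ocr = torch.randint(0, VOCAB_SIZE, (2, 10))
tags = torch.randint(0, VOCAB_SIZE, (2, 5))
regions = torch.randn(2, 36, 2048)
print(model(caption, ocr, tags, regions))
```

The key design point this sketch tries to capture is that the loss is computed only on masked caption positions, while the OCR tokens, object tags, and visual features remain visible to the encoder, so the model must recover the masked words by attending to the other modalities.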