DSAMT: Dual-Source Aligned Multimodal Transformers for TextCaps

Chenyang Liao, Ruifang Liu, Sheng Gao
{"title":"DSAMT: Dual-Source Aligned Multimodal Transformers for TextCaps","authors":"Chenyang Liao, Ruifang Liu, Sheng Gao","doi":"10.1109/IC-NIDC54101.2021.9660575","DOIUrl":null,"url":null,"abstract":"When generating captions for images, previous caption methods tend to consider the visual features of the image but ignore the Optical Character Recognition (OCR) in it, which makes the generated caption lack text information in the image. By integrating OCR modal as well as visual modal into caption prediction, TextCaps task is aimed at producing concise sentences recapitulating the image and the text information. We propose Dual-Source Aligned Multimodal Transformers (DSAMT), which utilize words from two sources (object tags and OCR tokens) as the supplement to vocabulary. These extra words are applied to align caption embedding and visual embedding through randomly masking some tokens in caption and calculating the masked token loss. A new object detection module is used in DSAMT to extract image visual features and object tags on TextCaps. We additionally use BERTSCORE to evaluate our predictions. We demonstrate our approach achieves superior results compared to state-of-the-art models on TextCaps dataset.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

When generating captions for images, previous captioning methods tend to consider only the visual features of the image and ignore the Optical Character Recognition (OCR) text in it, so the generated captions lack the textual information present in the image. By integrating the OCR modality as well as the visual modality into caption prediction, the TextCaps task aims to produce concise sentences that recapitulate both the image and its text. We propose Dual-Source Aligned Multimodal Transformers (DSAMT), which use words from two sources (object tags and OCR tokens) as a supplement to the vocabulary. These extra words are used to align the caption embedding with the visual embedding by randomly masking some tokens in the caption and computing a masked token loss. A new object detection module in DSAMT extracts image visual features and object tags on TextCaps. We additionally use BERTScore to evaluate our predictions. We demonstrate that our approach achieves superior results compared to state-of-the-art models on the TextCaps dataset.
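
The central mechanism the abstract describes is a masked token loss computed over a vocabulary extended with object tags and OCR tokens. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the masking ratio, the [MASK] id, and the function names are assumptions.

# Minimal sketch (assumption, not the authors' released code) of the masked-token-loss
# alignment described in the abstract: some caption tokens are randomly replaced with
# [MASK] and predicted over a vocabulary extended with object tags and OCR tokens.
import torch
import torch.nn.functional as F

MASK_ID = 103     # assumed id of the [MASK] token in the extended vocabulary
MASK_PROB = 0.15  # assumed masking ratio

def apply_random_mask(caption_ids, pad_id=0):
    """Randomly mask caption tokens; return masked inputs and labels
    (-100 marks positions that do not contribute to the loss)."""
    rand = torch.rand(caption_ids.shape, device=caption_ids.device)
    mask = (rand < MASK_PROB) & (caption_ids != pad_id)
    inputs = caption_ids.clone()
    inputs[mask] = MASK_ID
    labels = caption_ids.clone()
    labels[~mask] = -100
    return inputs, labels

def masked_token_loss(logits, labels):
    """logits: (batch, seq_len, extended_vocab) scores over the base vocabulary
    plus object tags and OCR tokens; labels come from apply_random_mask."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           labels.view(-1), ignore_index=-100)

In training, the masked caption would be fed through the multimodal transformer together with the visual features and OCR tokens, and this loss would be added to the captioning objective so that the caption embedding stays aligned with the visual embedding.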