Image Captioning with multi-level similarity-guided semantic matching

IF 3.8 · CAS Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
Jiesi Li, Ning Xu, Weizhi Nie, Shenyuan Zhang
DOI: 10.1016/j.visinf.2021.11.003
Journal: Visual Informatics, published 2021-12-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2468502X21000590
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2468502X21000590/pdfft?md5=f944bc3d86f6d64595ece2bbaa4a94c8&pid=1-s2.0-S2468502X21000590-main.pdf
Citations: 6

Abstract

Image captioning is a cross-modal task that requires automatically generating coherent natural sentences to describe image contents. Due to the large gap between the vision and language modalities, most existing methods suffer from inaccurate semantic matching between images and generated captions. To address this problem, this paper proposes a novel multi-level similarity-guided semantic matching method for image captioning, which fuses local and global semantic similarities to learn the latent semantic correlation between images and generated captions. Specifically, we extract semantic units containing fine-grained semantic information from the images and the generated captions, respectively. Based on a comparison of these semantic units, we design a local semantic similarity evaluation mechanism. Meanwhile, we employ the CIDEr score to characterize global semantic similarity. The local and global similarities are finally fused using reinforcement learning to guide model optimization toward better semantic matching. Quantitative and qualitative experiments on the large-scale MSCOCO dataset illustrate the superiority of the proposed method, which achieves fine-grained semantic matching between images and generated captions.
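The reward fusion described above can be illustrated with a minimal sketch. This is not the authors' implementation: the F1-style overlap for the local similarity, the convex-combination weight `alpha`, and the self-critical (SCST-style) baseline are all assumptions standing in for the paper's unspecified details.

```python
def local_semantic_similarity(caption_units: set, image_units: set) -> float:
    """F1-style overlap between semantic-unit sets -- a hedged stand-in
    for the paper's local semantic similarity evaluation mechanism."""
    if not caption_units or not image_units:
        return 0.0
    overlap = len(caption_units & image_units)
    precision = overlap / len(caption_units)
    recall = overlap / len(image_units)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def fused_reward(local_sim: float, cider: float, alpha: float = 0.5) -> float:
    """Fuse local and global (CIDEr) similarity into one RL reward.
    The trade-off weight alpha is hypothetical, not from the paper."""
    return alpha * local_sim + (1 - alpha) * cider


def scst_advantage(sampled_reward: float, greedy_reward: float) -> float:
    """Self-critical advantage: a sampled caption's fused reward minus
    the greedy-decoded baseline's, used to scale the policy gradient."""
    return sampled_reward - greedy_reward


# Example: a sampled caption shares 2 of 3 semantic units with the image.
sim = local_semantic_similarity({"dog", "run", "grass"},
                                {"dog", "grass", "park"})
reward = fused_reward(sim, cider=1.0)
advantage = scst_advantage(reward, greedy_reward=0.7)
```

A positive advantage increases the likelihood of the sampled caption; a negative one suppresses it, so training is steered by both unit-level and sentence-level agreement.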

Source journal: Visual Informatics (Computer Science — Computer Graphics and Computer-Aided Design)
CiteScore: 6.70 · Self-citation rate: 3.30% · Articles per year: 33 · Review time: 79 days