Recorrect Net: Visual Guidance for Image Captioning

Qilin Guo, Yajing Xu, Sheng Gao
DOI: 10.1109/IC-NIDC54101.2021.9660494
Published in: 2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)
Publication date: 2021-11-17
Citations: 0

Abstract

Most image captioning methods directly learn the mapping from image to text. In practice, however, attending to both sentence structure and visual content at the same time is difficult. In this paper, we propose a model, called Re-correct Net, which uses caption information already produced by other captioners to guide the visual content when generating a new caption. In addition, to obtain a more accurate caption, our method uses existing textual entities as additional prior knowledge. Experiments show that our model can serve as a re-correct block after any captioner has been trained, which improves caption quality and is also flexible.
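This record contains only the abstract, so the exact architecture of Re-correct Net is not specified here. As a purely illustrative sketch of the general idea the abstract describes, the snippet below shows one plausible fusion step: tokens of a previously generated caption attend over image-region features via cross-attention, producing visually grounded token features that a downstream decoder could use to re-correct the caption. All names, shapes, and the use of cross-attention are assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guided_fusion(image_feats, prior_caption_emb):
    """Hypothetical caption-guided visual attention.

    image_feats:       (R, D) array of R image-region features.
    prior_caption_emb: (T, D) array of T token embeddings from an
                       existing caption produced by another captioner.
    Returns (T, D): each prior-caption token re-expressed as a
    weighted mix of the image regions it attends to.
    """
    d = image_feats.shape[-1]
    scores = prior_caption_emb @ image_feats.T / np.sqrt(d)  # (T, R)
    attn = softmax(scores, axis=-1)                          # rows sum to 1
    return attn @ image_feats                                # (T, D)

# Toy usage: 4 image regions, 3 caption tokens, feature dim 8
rng = np.random.default_rng(0)
regions = rng.standard_normal((4, 8))
tokens = rng.standard_normal((3, 8))
fused = guided_fusion(regions, tokens)
```

In this reading, the prior caption supplies the sentence structure while the attention step re-grounds each token in the visual content, matching the abstract's claim that existing captions can guide generation of a corrected one.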