{"title":"Recorrect Net: Visual Guidance for Image Captioning","authors":"Qilin Guo, Yajing Xu, Sheng Gao","doi":"10.1109/IC-NIDC54101.2021.9660494","DOIUrl":null,"url":null,"abstract":"Most image caption methods directly learn the mapping relationship from image to text. In practice, however, paying attention to both sentence structure and visual content at the same time can be difficult. In this paper, we propose a model, called Re-correct Net, which aims to use the existing caption information by other captioners, to guide the visual content in the generation of new caption. In addition, to obtain the more accurate caption, our method uses the existing textured entity as additional prior knowledge. Experiments show that our model can be used as re-correct block after all captioner training, which is beneficial to improve the quality of caption and is also flexible.","PeriodicalId":264468,"journal":{"name":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC-NIDC54101.2021.9660494","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Most image captioning methods directly learn a mapping from image to text. In practice, however, it is difficult for a single model to attend to both sentence structure and visual content at the same time. In this paper, we propose a model, called Re-correct Net, which uses the captions already produced by other captioners to guide attention to visual content when generating a new caption. In addition, to obtain a more accurate caption, our method uses existing textual entities as additional prior knowledge. Experiments show that our model can be attached as a re-correct block after any captioner has been trained, which improves caption quality and is also flexible.
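
The abstract describes the model only at a high level. The sketch below illustrates one way such a re-correct block could be wired: a draft caption from an existing captioner guides cross-attention over visual region features, and a decoder emits the corrected caption. The class name `RecorrectBlock`, the dimensions, and the LSTM/attention choices are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a "re-correct"-style block that
# refines a draft caption from an existing captioner using visual features.
# All module names, dimensions, and wiring here are illustrative assumptions.
import torch
import torch.nn as nn


class RecorrectBlock(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, vis_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encode the draft caption produced by a frozen, pre-trained captioner.
        self.draft_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Project visual region features to the decoder's hidden size.
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)
        # Cross-attention: the draft-caption states query the visual regions,
        # so the existing caption "guides" where the model looks in the image.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                                batch_first=True)
        # Decoder that emits the corrected caption token by token.
        self.decoder = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, draft_tokens, vis_feats, target_tokens):
        # draft_tokens:  (B, Ld) token ids from the existing captioner
        # vis_feats:     (B, R, vis_dim) region features from a visual backbone
        # target_tokens: (B, Lt) reference tokens for teacher forcing
        draft_states, _ = self.draft_encoder(self.embed(draft_tokens))  # (B, Ld, H)
        vis = self.vis_proj(vis_feats)                                  # (B, R, H)
        guided_vis, _ = self.cross_attn(draft_states, vis, vis)         # (B, Ld, H)
        # Pool the guided visual context and feed it alongside each target token.
        ctx = guided_vis.mean(dim=1, keepdim=True)                      # (B, 1, H)
        tgt = self.embed(target_tokens)                                 # (B, Lt, E)
        ctx = ctx.expand(-1, tgt.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([tgt, ctx], dim=-1))
        return self.out(dec_out)                                        # (B, Lt, V)


# Usage sketch: refine a draft caption from any frozen captioner.
if __name__ == "__main__":
    block = RecorrectBlock(vocab_size=10000)
    draft = torch.randint(0, 10000, (2, 12))   # draft caption token ids
    regions = torch.randn(2, 36, 2048)         # e.g. 36 detected region features
    target = torch.randint(0, 10000, (2, 15))  # reference caption (training)
    logits = block(draft, regions, target)
    print(logits.shape)                        # torch.Size([2, 15, 10000])
```

Because the block only consumes a draft caption and visual features, it can in principle be bolted onto any already-trained captioner without retraining it, which matches the flexibility the abstract claims; the specific pooling and fusion choices above are placeholders.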