Video Captioning based on Image Captioning as Subsidiary Content

J. Vaishnavi, V. Narmatha
{"title":"Video Captioning based on Image Captioning as Subsidiary Content","authors":"J. Vaishnavi, V. Narmatha","doi":"10.1109/ICAECT54875.2022.9807935","DOIUrl":null,"url":null,"abstract":"Video captioning is the more heuristic task of the combination of computer vision and Natural language processing while researchers are concentrated more in video related tasks. Dense video captioning is still considering the more challenging task as it needs to consider every event occurs in the video and provide optimal captions separately for all the events presents in the video with high diversity. Captioning process with less corpus leads to less performance. To avoid such issues, our proposed model constructed with the option of generating captions with high diversity. Image captions are taken as subsidiary content to enlarge the diversity for captioning the videos. Attention mechanism is utilized for the generation process. Generator and three different discriminators are utilized to contribute an appropriate caption which enriches the captioning process. ActivityNet caption dataset is used to demonstrate the proposed model. Microsoft coco image dataset is considered as subsidiary content for captioning. 
The benchmark metrics BLEU and METEOR are used to estimate the performance of the proposed model.","PeriodicalId":346658,"journal":{"name":"2022 Second International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Second International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAECT54875.2022.9807935","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Video captioning is a challenging task at the intersection of computer vision and natural language processing, and researchers have concentrated increasingly on video-related tasks. Dense video captioning is considered even more challenging, since it must account for every event occurring in the video and provide an optimal, highly diverse caption for each of those events separately. A captioning process with a small corpus leads to poor performance. To avoid this issue, the proposed model is constructed to generate captions with high diversity: image captions are taken as subsidiary content to enlarge the diversity available for captioning videos. An attention mechanism is utilized in the generation process, and a generator together with three different discriminators is used to contribute an appropriate caption, enriching the captioning process. The ActivityNet Captions dataset is used to demonstrate the proposed model, and the Microsoft COCO image dataset is used as subsidiary content for captioning. The benchmark metrics BLEU and METEOR are used to estimate the performance of the proposed model.
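The abstract reports BLEU as one of its evaluation metrics. As an illustration only (the paper does not give its scoring setup, and the caption strings below are hypothetical), a minimal sentence-level BLEU can be sketched as clipped n-gram precision combined by geometric mean with a brevity penalty:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions (1..max_n),
    geometric mean, and a brevity penalty for short candidates.
    A tiny floor on each precision avoids log(0) for zero overlap."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

# Hypothetical reference and generated captions for a cooking clip.
ref = "a man is slicing a tomato in the kitchen".split()
cand = "a man is cutting a tomato in the kitchen".split()
print(round(bleu(ref, cand), 3))
```

Published results would use a reference implementation (with smoothing and multiple references) rather than this sketch, but the clipping and brevity-penalty mechanics are the same.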