Scene-Edge GRU for Video Caption

Xin Hao, F. Zhou, Xiaoyong Li
DOI: 10.1109/ITNEC48623.2020.9084781
Published in: 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
Publication date: 2020-06-01
Citations: 5

Abstract

Recurrent neural networks for video captioning have recently attracted widespread attention. The recurrent network is essential to the captioning task, as it is involved in both the video encoding phase and the text description generation phase. However, the traditional encoder-decoder method ignores scene switching within the video during the encoding phase. In this paper, we propose a video encoding scheme that can discover the scene structure of a video, achieving flexible, variable-length encoding. Unlike the classic encoder-decoder scheme, we propose a new GRU unit that recognizes discontinuities between video frames and enables end-to-end training without additional annotation. We evaluated our approach on two large datasets: the MPII movie description dataset and the MSVD dataset. Experiments show that our method can find an appropriate hierarchical representation of the video and improves on the best results on the movie description dataset.
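The core idea in the abstract, a GRU unit that detects discontinuities between consecutive frames and resets its state at scene edges, can be sketched as follows. This is a minimal illustrative sketch only: the class name, the soft-reset gating form, and the boundary detector (a learned comparison of consecutive frame features) are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SceneEdgeGRUCell:
    """Sketch of a GRU cell with a scene-boundary gate (illustrative only).

    A learned boundary signal b_t in (0, 1) measures discontinuity between
    consecutive frame features; when b_t is high (a likely scene edge), the
    previous hidden state is softly reset before the standard GRU update.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        w = lambda rows, cols: rng.normal(0.0, 0.1, (rows, cols))
        # standard GRU parameters: update gate z, reset gate r, candidate h~
        self.Wz, self.Uz = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wr, self.Ur = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wh, self.Uh = w(hidden_size, input_size), w(hidden_size, hidden_size)
        # boundary detector compares the current and previous frame features
        self.Wb = w(1, 2 * input_size)
        self.hidden_size = hidden_size

    def step(self, x_t, x_prev, h_prev):
        # b_t near 1 when frames look discontinuous (scene edge), near 0 otherwise
        b_t = sigmoid(self.Wb @ np.concatenate([x_t, x_prev]))[0]
        # soft reset: at a scene edge, mostly forget the previous scene's state
        h_in = (1.0 - b_t) * h_prev
        z = sigmoid(self.Wz @ x_t + self.Uz @ h_in)
        r = sigmoid(self.Wr @ x_t + self.Ur @ h_in)
        h_cand = np.tanh(self.Wh @ x_t + self.Uh @ (r * h_in))
        h_t = (1.0 - z) * h_in + z * h_cand
        return h_t, b_t

    def encode(self, frames):
        """Encode a sequence of frame features; return the final hidden
        state and the per-step boundary scores."""
        h = np.zeros(self.hidden_size)
        x_prev = np.zeros_like(frames[0])
        boundaries = []
        for x_t in frames:
            h, b = self.step(x_t, x_prev, h)
            boundaries.append(b)
            x_prev = x_t
        return h, boundaries

# toy usage: three 4-dim "frame features" with an abrupt change at frame 3
frames = [np.ones(4), np.ones(4), -np.ones(4)]
cell = SceneEdgeGRUCell(input_size=4, hidden_size=8)
h, bs = cell.encode(frames)
```

Because the boundary gate is differentiable, the whole cell can be trained end-to-end from the captioning loss alone, which matches the abstract's claim that no additional scene-boundary annotation is required.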