CMVCG: Non-autoregressive Conditional Masked Live Video Comments Generation Model

Zehua Zeng, Chenyang Tu, Neng Gao, Cong Xue, Cunqing Ma, Yiwei Shan
{"title":"CMVCG: Non-autoregressive Conditional Masked Live Video Comments Generation Model","authors":"Zehua Zeng, Chenyang Tu, Neng Gao, Cong Xue, Cunqing Ma, Yiwei Shan","doi":"10.1109/IJCNN52387.2021.9533460","DOIUrl":null,"url":null,"abstract":"The blooming of live comment videos leads to the need of automatic live video comment generating task. Previous works focus on autoregressive live video comments generation and can only generate comments by giving the first word of the target comment. However, in some scenes, users need to generate comments by their given prompt keywords, which can't be solved by the traditional live video comment generation methods. In this paper, we propose a Transformer based non-autoregressive conditional masked live video comments generation model called CMVCG model. Our model considers not only the visual and textual context of the comments, but also time and color information. To predict the position of the given prompt keywords, we also introduce a keywords position predicting module. By leveraging the conditional masked language model, our model achieves non-autoregressive live video comment generation. Furthermore, we collect and introduce a large-scale real-world live video comment dataset called Bili-22 dataset. 
We evaluate our model in two live comment datasets and the experiment results present that our model outperforms the state-of-the-art models in most of the metrics.","PeriodicalId":396583,"journal":{"name":"2021 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN52387.2021.9533460","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The growing popularity of live-comment videos creates a need for automatic live video comment generation. Previous work focuses on autoregressive live video comment generation and can only generate a comment given the first word of the target comment. However, in some scenarios users want to generate comments from given prompt keywords, which traditional live video comment generation methods cannot handle. In this paper, we propose a Transformer-based non-autoregressive conditional masked live video comment generation model called CMVCG. Our model considers not only the visual and textual context of the comments, but also time and color information. To predict the positions of the given prompt keywords, we also introduce a keyword position prediction module. By leveraging a conditional masked language model, our model achieves non-autoregressive live video comment generation. Furthermore, we collect and introduce Bili-22, a large-scale real-world live video comment dataset. We evaluate our model on two live comment datasets, and the experimental results show that it outperforms state-of-the-art models on most of the metrics.
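The decoding scheme the abstract describes follows the conditional masked language model ("mask-predict") family: start from a fully masked sequence with the prompt keywords pinned at their predicted positions, then iteratively fill in and re-mask low-confidence tokens. The sketch below is an illustrative toy under assumed details, not the paper's model: `toy_scores`, the tiny vocabulary, and the confidence values are invented placeholders, whereas a real CMVCG model would score each position with a Transformer conditioned on video frames, surrounding comments, time, and color.

```python
# Toy sketch of conditional-masked (mask-predict) decoding.
# toy_scores and VOCAB are placeholders, not the paper's model.
import random

MASK = "<mask>"
VOCAB = ["nice", "video", "great", "scene", "music", "wow"]

def toy_scores(tokens):
    """Stand-in for the Transformer scorer: returns a (token, confidence)
    pair per position. A real model would condition on visual/textual
    context, time, and color as described in the abstract."""
    rng = random.Random(0)
    preds = []
    for i, tok in enumerate(tokens):
        if tok == MASK:
            preds.append((VOCAB[i % len(VOCAB)], rng.uniform(0.5, 1.0)))
        else:
            preds.append((tok, 1.0))  # already-decided tokens keep full confidence
    return preds

def mask_predict(keywords, positions, length, iterations=3):
    """Fill a fixed-length comment around prompt keywords placed at the
    given positions, refining low-confidence tokens on each pass."""
    tokens = [MASK] * length
    for kw, pos in zip(keywords, positions):
        tokens[pos] = kw                      # prompt keywords stay fixed
    fixed = set(positions)
    for it in range(iterations):
        preds = toy_scores(tokens)
        tokens = [tok for tok, _ in preds]
        # Re-mask the least confident non-keyword tokens; fewer each pass.
        n_mask = max(0, int(length * (1 - (it + 1) / iterations)))
        free = sorted((i for i in range(length) if i not in fixed),
                      key=lambda i: preds[i][1])
        for i in free[:n_mask]:
            tokens[i] = MASK
    return tokens

# Generate a 5-token comment with the keyword "video" pinned at position 1.
print(mask_predict(["video"], [1], 5))
```

Because every position is predicted in parallel within each pass, the number of decoding steps is a small constant (here 3) rather than the comment length, which is the speed advantage of non-autoregressive generation.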