Memory-enhanced hierarchical transformer for video paragraph captioning

IF 5.5 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Benhui Zhang, Junyu Gao, Yuan Yuan
Journal: Neurocomputing, Volume 615, Article 128835
DOI: 10.1016/j.neucom.2024.128835
Published: 2024-11-09 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0925231224016060
Citations: 0

Abstract

Video paragraph captioning aims to describe a video that contains multiple events with a paragraph of generated coherent sentences. Such a captioning task is full of challenges since the high requirements for visual–textual relevance and semantic coherence across the captioning paragraph of a video. In this work, we introduce a memory-enhanced hierarchical transformer for video paragraph captioning. Our model adopts a hierarchical structure, where the outer layer transformer extracts visual information from a global perspective and captures the relevancy between event segments throughout the entire video, while the inner layer transformer further mines local details within each event segment. By thoroughly exploring both global and local visual information at the video and event levels, our model can provide comprehensive visual feature cues for promising paragraph caption generation. Additionally, we design a memory module to capture similar patterns among event segments within a video, which preserves contextual information across event segments and updates its memory state accordingly. Experimental results on two popular datasets, ActivityNet Captions and YouCook2, demonstrate that our proposed model can achieve superior performance, generating higher quality caption while maintaining consistency in the content of video.
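The two-level encoding the abstract describes (an inner transformer over frames within each event segment, an outer transformer over segment summaries, and a memory state carried across segments) can be illustrated with a minimal numpy sketch. This is a toy illustration of the control flow only, not the authors' model: the single-head attention without learned projections, the mean-pooling, and the gated `mem_gate` blend are all simplifying assumptions made here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention,
    # with no learned projections (toy simplification).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def encode_video(segments, mem_gate=0.5):
    """Hierarchical encoding sketch: inner attention within each
    event segment, a running memory state blended across segments,
    and outer attention over the segment summaries."""
    memory = np.zeros(segments[0].shape[-1])
    summaries = []
    for seg in segments:                        # inner level: local details
        local = self_attention(seg)
        summary = local.mean(axis=0)            # pool frames -> segment summary
        # Memory module (assumed gated update): blend the current
        # summary with context accumulated from earlier segments.
        memory = mem_gate * memory + (1 - mem_gate) * summary
        summaries.append(summary + memory)      # context-enriched summary
    # Outer level: relate segments across the whole video.
    return self_attention(np.stack(summaries))

rng = np.random.default_rng(0)
video = [rng.standard_normal((5, 8)) for _ in range(3)]  # 3 segments, 5 frames, dim 8
feats = encode_video(video)
print(feats.shape)  # (3, 8): one context-aware feature per event segment
```

The resulting per-segment features would then condition sentence generation, one sentence per event, which is where the global/local separation pays off: each sentence sees its segment's details plus video-level context.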
Source journal: Neurocomputing (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10 · Self-citation rate: 10.00% · Articles per year: 1382 · Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.