Personalized Video Fragment Recommendation

Jiaqi Wang, Yu-Kwong Kwok, Edith C. H. Ngai
{"title":"Personalized Video Fragment Recommendation","authors":"Jiaqi Wang, Yu-Kwong Kwok, Edith C. H. Ngai","doi":"10.1109/WI-IAT55865.2022.00036","DOIUrl":null,"url":null,"abstract":"In the mass market, users’ attention span over video contents is agonizingly short (e.g., 15 seconds for music/entertainment videos, 6 minutes for lecture videos, etc.), from a video producer’s or platform provider’s point of view. Given the huge amounts of existing and new videos that are significantly longer than such attention spans, a formidable research challenge is to design and implement a system for recommending just the specific fragments within a long video to match the profiles of the users.In this paper, we propose to meet this challenge based on three major insights. First, we propose to apply Self-Attention Blocks in our deep-learning framework to capture the fragment-level contextual effect. Second, we design a Video-Level Representation Module to take video-level preference into consideration when generating recommendations. Third, we propose a simple yet effective loss function for the video fragment recommendation task. Extensive experiments are conducted to evaluate the effectiveness of the proposed method. Experiment results show that our proposed framework outperforms state-of-the-art approaches in both NDCG@K and Recall@K, demonstrating judicious exploitation of fragment-level contextual effect and video-level preference. Moreover, empirical experiments are also conducted to analyze the key components and parameters in the proposed framework.","PeriodicalId":345445,"journal":{"name":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WI-IAT55865.2022.00036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In the mass market, users’ attention span over video content is agonizingly short (e.g., 15 seconds for music/entertainment videos, 6 minutes for lecture videos, etc.) from a video producer’s or platform provider’s point of view. Given the huge number of existing and new videos that are significantly longer than such attention spans, a formidable research challenge is to design and implement a system for recommending just the specific fragments within a long video that match the profiles of the users. In this paper, we propose to meet this challenge based on three major insights. First, we propose to apply Self-Attention Blocks in our deep-learning framework to capture the fragment-level contextual effect. Second, we design a Video-Level Representation Module to take video-level preference into consideration when generating recommendations. Third, we propose a simple yet effective loss function for the video fragment recommendation task. Extensive experiments are conducted to evaluate the effectiveness of the proposed method. Experimental results show that our proposed framework outperforms state-of-the-art approaches in both NDCG@K and Recall@K, demonstrating judicious exploitation of the fragment-level contextual effect and video-level preference. Moreover, empirical experiments are also conducted to analyze the key components and parameters of the proposed framework.
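To make the described architecture concrete, below is a minimal, hypothetical sketch (not the authors' code) of a fragment recommender in PyTorch: self-attention over a video's fragments models fragment-level context, a pooled video-level representation captures video-level preference, and a simple pairwise ranking surrogate stands in for the paper's loss. All module names, dimensions, and the loss choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FragmentRecommender(nn.Module):
    """Hypothetical sketch: score each fragment of a video for a given user."""
    def __init__(self, n_users, frag_dim=64, n_heads=4):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, frag_dim)            # user profile embedding
        self.frag_proj = nn.Linear(frag_dim, frag_dim)             # fragment features -> embedding
        self.self_attn = nn.MultiheadAttention(frag_dim, n_heads,
                                               batch_first=True)   # fragment-level context
        self.score = nn.Linear(3 * frag_dim, 1)                    # user + fragment + video -> score

    def forward(self, user_ids, frag_feats):
        # user_ids: (B,), frag_feats: (B, T, frag_dim) for T fragments per video
        u = self.user_emb(user_ids)                                 # (B, D)
        f = self.frag_proj(frag_feats)                              # (B, T, D)
        ctx, _ = self.self_attn(f, f, f)                            # contextualized fragments (B, T, D)
        video = ctx.mean(dim=1, keepdim=True)                       # video-level representation (B, 1, D)
        u = u.unsqueeze(1).expand(-1, ctx.size(1), -1)              # broadcast user to every fragment
        video = video.expand(-1, ctx.size(1), -1)
        return self.score(torch.cat([u, ctx, video], dim=-1)).squeeze(-1)  # (B, T) fragment scores

def pairwise_ranking_loss(scores, pos_mask):
    """BPR-style surrogate (an assumption, not the paper's loss): within each video,
    the best-scoring watched fragment should outscore the best-scoring unwatched one."""
    pos = scores.masked_fill(~pos_mask, float('-inf')).max(dim=1).values
    neg = scores.masked_fill(pos_mask, float('-inf')).max(dim=1).values
    return F.softplus(neg - pos).mean()
```

At inference time, the per-fragment scores would simply be sorted per video and the top-K fragments recommended, which is what ranking metrics such as NDCG@K and Recall@K then evaluate.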