Viewport-Driven Adaptive 360° Live Streaming Optimization Framework

Shuai Peng, Jialu Hu, Han Xiao, Shujie Yang, Changqiao Xu
{"title":"Viewport-Driven Adaptive 360◦ Live Streaming Optimization Framework","authors":"Shuai Peng, Jialu Hu, Han Xiao, Shujie Yang, Changqiao Xu","doi":"10.33969/j-nana.2021.010401","DOIUrl":null,"url":null,"abstract":"Virtual reality (VR) video streaming and 360◦ panoramic video have received extensive attention in recent years, which can bring users an immersive experience. However, the ultra-high bandwidth and ultra-low latency requirements of virtual reality video or 360◦ panoramic video also put tremendous pressure on the carrying capacity of the current network. In fact, since the user’s field of view (a.k.a viewport) is limited when watching a panoramic video and users can only watch about 20%∼30% of the video content, it is not necessary to directly transmit all high-resolution content to the user. Therefore, predicting the user’s future viewing viewport can be crucial for selective streaming and further bitrate decisions. Combined with the tile-based adaptive bitrate (ABR) algorithm for panoramic video, video content within the user’s viewport can be transmitted at a higher resolution, while areas outside the viewport can be transmitted at a lower resolution. This paper mainly proposes a viewport-driven adaptive 360◦ live streaming optimization framework, which combines viewport prediction and ABR algorithm to optimize the transmission of live 360◦ panoramic video. However, existing viewport prediction always suffers from low prediction accuracy and does not support real-time performance. With the advantage of convolutional network (CNN) in image processing and long short-term memory (LSTM) in temporal series processing, we propose an online-updated viewport prediction model called LiveCL which mainly utilizes CNN to extract the spatial characteristics of video frames and LSTM to learn the temporal characteristics of the user’s viewport trajectories. With the help of the viewport prediction and ABR algorithm, unnecessary bandwidth consumption can be effectively reduced. The main contributions of this work include: (1) a framework for 360◦ video transmission is proposed; (2) an online real-time viewport prediction model called LiveCL is proposed to optimize 360◦ video transmission combined with a novel ABR algorithm, which outperforms the existing model. Based on the public 360◦ video dataset, the tile accuracy, recall, precision, and frame accuracy of LiveCL are better than those of the latest model. Combined with related adaptive bitrate algorithms, the proposed viewport prediction model can reduce the transmission bandwidth by about 50%.","PeriodicalId":384373,"journal":{"name":"Journal of Networking and Network Applications","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Networking and Network Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33969/j-nana.2021.010401","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Virtual reality (VR) video streaming and 360° panoramic video have received extensive attention in recent years, as they can bring users an immersive experience. However, the ultra-high bandwidth and ultra-low latency requirements of VR and 360° panoramic video also put tremendous pressure on the carrying capacity of current networks. In fact, since the user's field of view (a.k.a. viewport) is limited when watching a panoramic video, and users see only about 20%~30% of the video content, it is not necessary to transmit all content to the user at high resolution. Therefore, predicting the user's future viewport is crucial for selective streaming and subsequent bitrate decisions. Combined with a tile-based adaptive bitrate (ABR) algorithm for panoramic video, video content within the user's viewport can be transmitted at a higher resolution, while areas outside the viewport are transmitted at a lower resolution. This paper proposes a viewport-driven adaptive 360° live streaming optimization framework that combines viewport prediction with an ABR algorithm to optimize the transmission of live 360° panoramic video. However, existing viewport prediction methods often suffer from low prediction accuracy and do not support real-time operation. Leveraging the strengths of convolutional neural networks (CNNs) in image processing and long short-term memory (LSTM) networks in time-series processing, we propose an online-updated viewport prediction model called LiveCL, which uses a CNN to extract the spatial characteristics of video frames and an LSTM to learn the temporal characteristics of the user's viewport trajectories. With the help of viewport prediction and the ABR algorithm, unnecessary bandwidth consumption can be effectively reduced. The main contributions of this work are: (1) a framework for 360° video transmission; (2) an online real-time viewport prediction model called LiveCL that, combined with a novel ABR algorithm, optimizes 360° video transmission and outperforms existing models. On a public 360° video dataset, the tile accuracy, recall, precision, and frame accuracy of LiveCL are better than those of the latest models. Combined with the related adaptive bitrate algorithms, the proposed viewport prediction model can reduce transmission bandwidth by about 50%.
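
The abstract does not include implementation details, but the LiveCL architecture it describes (a CNN for spatial features of video frames, an LSTM for the temporal pattern of viewport trajectories, and a per-tile decision) can be sketched as below. This is a minimal illustrative sketch, not the authors' code: the layer sizes, the 8x16 tile grid, the (yaw, pitch) trajectory encoding, and the late-fusion head are all assumptions introduced here for illustration.

    # A minimal sketch (assumptions, not the authors' code) of a CNN+LSTM
    # viewport predictor in the spirit of LiveCL. A small CNN extracts
    # spatial features from a downscaled equirectangular frame, an LSTM
    # encodes the recent viewport trajectory, and a linear head scores
    # each tile of an assumed 8x16 tile grid.
    import torch
    import torch.nn as nn

    TILE_ROWS, TILE_COLS = 8, 16       # assumed tiling of the 360° frame
    NUM_TILES = TILE_ROWS * TILE_COLS

    class ViewportPredictor(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            # CNN branch: spatial features from the current frame.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 8)),
                nn.Flatten(),          # -> 32 * 4 * 8 = 1024 features
            )
            # LSTM branch: temporal dynamics of past viewport centers.
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                                batch_first=True)
            # Fusion head: probability that each tile falls in the viewport.
            self.head = nn.Linear(1024 + hidden, NUM_TILES)

        def forward(self, frames, trajectory):
            # frames:     (B, 3, H, W) current video frame
            # trajectory: (B, T, 2)    past (yaw, pitch), normalized to [0, 1]
            spatial = self.cnn(frames)
            _, (h_n, _) = self.lstm(trajectory)
            fused = torch.cat([spatial, h_n[-1]], dim=1)
            return torch.sigmoid(self.head(fused))

    # Toy usage: stream the top ~25% most likely tiles (roughly the share
    # of content a user actually sees) at high bitrate, the rest low.
    model = ViewportPredictor()
    probs = model(torch.rand(1, 3, 128, 256), torch.rand(1, 30, 2))
    high_res_tiles = probs.topk(NUM_TILES // 4, dim=1).indices

In the paper's framework, per-tile viewing probabilities of this kind would drive the tile-based ABR decision, allocating higher bitrates to the tiles most likely to fall inside the predicted viewport and lower bitrates elsewhere.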