Redefine the A in ABR for 360-degree Videos: A Flexible ABR Framework

Kuan-Ying Lee, Andrew Yoo, Jounsup Park, K. Nahrstedt
{"title":"Redefine the A in ABR for 360-degree Videos: A Flexible ABR Framework","authors":"Kuan-Ying Lee, Andrew Yoo, Jounsup Park, K. Nahrstedt","doi":"10.1109/ISM.2020.00020","DOIUrl":null,"url":null,"abstract":"360-degree video has been popular due to the immersive experience it provides to the viewer. While watching, viewer can control the field of view (FoV)11In this paper, we use viewport interchangeably with FoV in the range of 360° by 180°. As this trend continues, adaptive bitrate (ABR) streaming is becoming a prevalent issue. Most existing ABR algorithms for 360 videos (360 ABR algorithms) require real-time head traces and certain computation resource from the client for streaming, which largely constrains the range of audience. Also, while more 360 ABR algorithms rely upon machine learning (ML) for viewport prediction, ML and ABR are research topics that grow mostly independently. In this paper, we propose a two-fold ABR algorithm for 360 video streaming that utilizes 1) an off-the-shelf ABR algorithm for ordinary videos, and 2) an off-the-shelf viewport prediction model. Our algorithm requires neither real-time head traces nor additional computation from the viewing device. In addition, it adapts easily to the newest developments in viewport prediction and ABR. As a consequence, the proposed method fits nicely to the existing streaming framework and any advancement in viewport prediction and ABR could enhance its performance. With the quantitative experiments, we demonstrate that the proposed method achieves twice the quality of experience (QoE) compared to the baseline.","PeriodicalId":120972,"journal":{"name":"2020 IEEE International Symposium on Multimedia (ISM)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Multimedia (ISM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISM.2020.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

360-degree video has become popular due to the immersive experience it provides to the viewer. While watching, the viewer can control the field of view (FoV, used interchangeably with viewport in this paper) within a range of 360° by 180°. As this trend continues, adaptive bitrate (ABR) streaming is becoming a prevalent issue. Most existing ABR algorithms for 360 videos (360 ABR algorithms) require real-time head traces and certain computation resources from the client for streaming, which largely constrains the range of audience. Also, while more 360 ABR algorithms rely on machine learning (ML) for viewport prediction, ML and ABR are research topics that have grown mostly independently. In this paper, we propose a two-fold ABR algorithm for 360 video streaming that utilizes 1) an off-the-shelf ABR algorithm for ordinary videos, and 2) an off-the-shelf viewport prediction model. Our algorithm requires neither real-time head traces nor additional computation from the viewing device. In addition, it adapts easily to the newest developments in viewport prediction and ABR. As a consequence, the proposed method fits nicely into the existing streaming framework, and any advancement in viewport prediction or ABR can enhance its performance. Through quantitative experiments, we demonstrate that the proposed method achieves twice the quality of experience (QoE) of the baseline.
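To make the two-fold idea concrete, the sketch below shows one plausible way the two off-the-shelf components could be combined: an ordinary (2D) ABR algorithm supplies a total bitrate budget for the next segment, a viewport predictor supplies per-tile viewing probabilities, and the budget is then spent greedily on the tiles most likely to be watched. This is a minimal illustration under assumed inputs; the tile layout, bitrate ladder, and both stub inputs are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of combining an off-the-shelf ABR budget with an
# off-the-shelf viewport prediction to pick per-tile bitrates.
# All numbers and names are illustrative assumptions, not the paper's code.

from typing import List

# Assumed per-tile bitrate ladder (kbps), lowest to highest quality.
TILE_LADDER = [200, 400, 800, 1600]


def allocate_tile_bitrates(budget_kbps: float,
                           view_probs: List[float]) -> List[int]:
    """Greedily upgrade the most-likely-viewed tiles within the budget."""
    n = len(view_probs)
    levels = [0] * n                        # start every tile at the lowest level
    spent = float(n * TILE_LADDER[0])

    # Visit tiles from most to least likely to fall inside the viewport.
    for tile in sorted(range(n), key=lambda t: view_probs[t], reverse=True):
        while levels[tile] + 1 < len(TILE_LADDER):
            upgrade_cost = (TILE_LADDER[levels[tile] + 1]
                            - TILE_LADDER[levels[tile]])
            if spent + upgrade_cost > budget_kbps:
                break
            levels[tile] += 1
            spent += upgrade_cost

    return [TILE_LADDER[level] for level in levels]


if __name__ == "__main__":
    # Stand-ins for the two off-the-shelf components the paper builds on.
    budget = 6000.0                         # e.g. output of a 2D ABR algorithm
    probs = [0.30, 0.25, 0.15, 0.10,        # e.g. output of a viewport predictor
             0.08, 0.06, 0.04, 0.02]        # over an assumed 8-tile layout
    print(allocate_tile_bitrates(budget, probs))
```

Because the budget and the probabilities arrive as plain numbers, either component can be swapped for a newer ABR algorithm or viewport predictor without touching the allocation step, which is the flexibility the abstract emphasizes.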