Buffer Awareness Neural Adaptive Video Streaming for Avoiding Extra Buffer Consumption

Tianchi Huang, Chao Zhou, Ruixiao Zhang, Chen Wu, Lifeng Sun
{"title":"缓冲感知神经自适应视频流避免额外的缓冲消耗","authors":"Tianchi Huang, Chao Zhou, Ruixiao Zhang, Chen Wu, Lifeng Sun","doi":"10.1109/INFOCOM53939.2023.10229002","DOIUrl":null,"url":null,"abstract":"Adaptive video streaming has already been a major scheme to transmit videos with high quality of experience (QoE). However, the improvement of network traffics and the high compression efficiency of videos enable clients to accumulate too much buffer, which might cause colossal data waste if users close the session early before the session ends. In this paper, we consider buffer-aware adaptive bitrate (ABR) mechanisms to overcome the above concerns. Formulating the buffer-aware rate adaptation problem as multi-objective optimization, we propose DeepBuffer, a deep reinforcement learning-based approach that jointly takes proper bitrate and controls the maximum buffer. To deal with the challenges of learning-based buffer-aware ABR composition, such as infinite possible plans, multiple bitrate levels, and complex action space, we design adequate preference-driven inputs, separate action outputs, and invent high sample-efficiency training methodologies. We train DeepBuffer with a broad set of real-world network traces and provide a comprehensive evaluation in terms of various network scenarios and different video types. Experimental results indicate that DeepBuffer rivals or outperforms recent heuristics and learning-based ABR schemes in terms of QoE while heavily reducing the average buffer consumption by up to 90%. Extensive real-world experiments further demonstrate the substantial superiority of DeepBuffer.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"96 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Buffer Awareness Neural Adaptive Video Streaming for Avoiding Extra Buffer Consumption\",\"authors\":\"Tianchi Huang, Chao Zhou, Ruixiao Zhang, Chen Wu, Lifeng Sun\",\"doi\":\"10.1109/INFOCOM53939.2023.10229002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adaptive video streaming has already been a major scheme to transmit videos with high quality of experience (QoE). However, the improvement of network traffics and the high compression efficiency of videos enable clients to accumulate too much buffer, which might cause colossal data waste if users close the session early before the session ends. In this paper, we consider buffer-aware adaptive bitrate (ABR) mechanisms to overcome the above concerns. Formulating the buffer-aware rate adaptation problem as multi-objective optimization, we propose DeepBuffer, a deep reinforcement learning-based approach that jointly takes proper bitrate and controls the maximum buffer. To deal with the challenges of learning-based buffer-aware ABR composition, such as infinite possible plans, multiple bitrate levels, and complex action space, we design adequate preference-driven inputs, separate action outputs, and invent high sample-efficiency training methodologies. We train DeepBuffer with a broad set of real-world network traces and provide a comprehensive evaluation in terms of various network scenarios and different video types. Experimental results indicate that DeepBuffer rivals or outperforms recent heuristics and learning-based ABR schemes in terms of QoE while heavily reducing the average buffer consumption by up to 90%. 
Extensive real-world experiments further demonstrate the substantial superiority of DeepBuffer.\",\"PeriodicalId\":387707,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications\",\"volume\":\"96 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOM53939.2023.10229002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM53939.2023.10229002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Adaptive video streaming has become the dominant scheme for delivering video with high quality of experience (QoE). However, improvements in network throughput and highly efficient video compression allow clients to accumulate excessive buffer, which can cause substantial data waste if users close the session before it ends. In this paper, we consider buffer-aware adaptive bitrate (ABR) mechanisms to address this concern. Formulating buffer-aware rate adaptation as a multi-objective optimization problem, we propose DeepBuffer, a deep reinforcement learning-based approach that jointly selects an appropriate bitrate and controls the maximum buffer. To handle the challenges of learning-based buffer-aware ABR, such as the infinite space of possible plans, multiple bitrate levels, and a complex action space, we design preference-driven inputs and separate action outputs, and develop training methodologies with high sample efficiency. We train DeepBuffer on a broad set of real-world network traces and evaluate it comprehensively across diverse network scenarios and video types. Experimental results indicate that DeepBuffer matches or outperforms recent heuristic and learning-based ABR schemes in terms of QoE while reducing average buffer consumption by up to 90%. Extensive real-world experiments further demonstrate the substantial superiority of DeepBuffer.
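
The abstract highlights two design points that are easy to picture in code: feeding a preference signal into the policy so one network can trade QoE against buffer (data) consumption, and splitting the output into separate action heads for the bitrate level and the maximum-buffer level. The sketch below is only an illustration of that structure; the class names, observation layout, dimensions, and reward coefficients are assumptions for exposition, not details taken from the paper or the DeepBuffer implementation.

```python
# Minimal sketch (not the authors' code): a policy with preference-driven inputs
# and separate action heads for bitrate and maximum buffer. All dimensions and
# reward weights are illustrative assumptions.
import torch
import torch.nn as nn


class BufferAwarePolicy(nn.Module):
    def __init__(self, history_len=8, num_bitrates=6, num_buffer_caps=5, pref_dim=2):
        super().__init__()
        # Observation: throughput history, download-time history, current buffer,
        # last bitrate index, chunks remaining, plus the preference vector.
        obs_dim = 2 * history_len + 3 + pref_dim
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Separate action heads: one over bitrate levels, one over max-buffer levels.
        self.bitrate_head = nn.Linear(128, num_bitrates)
        self.buffer_head = nn.Linear(128, num_buffer_caps)

    def forward(self, obs):
        h = self.backbone(obs)
        # Two independent categorical distributions, one per sub-action.
        return (torch.distributions.Categorical(logits=self.bitrate_head(h)),
                torch.distributions.Categorical(logits=self.buffer_head(h)))


def reward(quality, rebuffer_s, quality_change, wasted_mb, pref):
    """Preference-weighted multi-objective reward (illustrative coefficients)."""
    qoe = quality - 4.3 * rebuffer_s - abs(quality_change)
    return pref[0] * qoe - pref[1] * wasted_mb


# Usage: sample a joint action for one chunk decision.
policy = BufferAwarePolicy()
obs = torch.zeros(1, 2 * 8 + 3 + 2)          # placeholder observation
obs[0, -2:] = torch.tensor([1.0, 0.5])       # preference: QoE weight, waste weight
bitrate_dist, buffer_dist = policy(obs)
bitrate_idx = bitrate_dist.sample()          # which bitrate to fetch next
buffer_cap_idx = buffer_dist.sample()        # which maximum-buffer cap to enforce
```

Keeping the two heads separate, rather than enumerating every (bitrate, buffer-cap) pair in a single output, keeps the action space compact as the number of buffer caps grows, which is one way to read the abstract's concern about a "complex action space".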