Dynamic Push for HTTP Adaptive Streaming with Deep Reinforcement Learning
Haipeng Du, Danfu Yuan, Weizhan Zhang, Q. Zheng
DOI: 10.1109/ICPADS53394.2021.00112
Published in: 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS), December 2021
Citations: 0
Abstract
HTTP adaptive streaming (HAS) has revolutionized video distribution over the Internet thanks to its prominent benefit of outstanding quality of experience (QoE). Due to the pull-based nature of HTTP/1.1, however, the client must issue a separate request for each segment. This usually causes high request overhead and low bandwidth utilization, ultimately reducing QoE. Current research on HAS adaptive bitrate (ABR) algorithms typically focuses on the server-push feature introduced in the new HTTP standard, which enables the client to receive multiple segments with a single request. Every time a request is sent, the client must simultaneously decide how many segments the server should push and at what bitrate those future segments should be encoded. As the complexity of this decision space grows, existing rule-based strategies inevitably fail to achieve optimal performance. In this paper, we present D-Push, an HAS framework based on deep reinforcement learning (DRL). Instead of relying on inaccurate assumptions about the environment or on models of network capacity variation, D-Push trains a DRL model that learns from the QoE outcomes of past decisions and adapts to a wide range of highly dynamic environments. The experimental results show that D-Push outperforms the existing state-of-the-art algorithm by 12%-24% in terms of average QoE.
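To make the abstract's point about decision-space growth concrete, the sketch below enumerates the joint action space a push-aware ABR agent must search: the Cartesian product of the push count and the bitrate ladder. The bitrate ladder and the cap on pushed segments are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's implementation): the joint action
# space of a server-push ABR decision, versus a pull-based one.
from itertools import product

BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]  # assumed bitrate ladder
MAX_PUSH = 4  # assumed cap on segments pushed per request

def joint_action_space(bitrates=BITRATES_KBPS, max_push=MAX_PUSH):
    """Enumerate every (push_count, bitrate) pair the agent can choose."""
    return list(product(range(1, max_push + 1), bitrates))

actions = joint_action_space()
# A pull-based ABR picks only a bitrate (6 options here); the push-aware
# agent faces len(actions) = 4 * 6 = 24 joint options per request, which
# is why rule-based heuristics struggle as the space grows.
```

A DRL policy, as the paper proposes, outputs a choice over this joint space directly rather than applying a hand-tuned rule to each dimension separately.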