{"title":"LiveClip:通过深度强化学习实现智能移动短视频流","authors":"Jian-Qian He, Miao Hu, Yipeng Zhou, Di Wu","doi":"10.1145/3386290.3396937","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed great success of mobile short-form video apps. However, most current video streaming strategies are designed for long-form videos, which cannot be directly applied to short-form videos. Especially, short-form videos differ in many aspects, such as shorter video length, mobile friendliness, sharp popularity dynamics, and so on. Facing these challenges, in this paper, we perform an in-depth measurement study on Douyin, one of the most popular mobile short-form video platforms in China. The measurement study reveals that Douyin adopts a rather simple strategy (called Next-One strategy) based on HTTP progressive download, which uses a sliding window with stop-and-wait protocol. Such a strategy performs poorly when network connection is slow and user scrolling is fast. The results motivate us to design an intelligent adaptive streaming scheme for mobile short-form videos. We formulate the short-form video streaming problem and propose an adaptive short-form video streaming strategy called LiveClip using a deep reinforcement learning (DRL) approach. 
Trace-driven experimental results prove that LiveClip outperforms existing state-of-the-art approaches by around 10%-40% under various scenarios.","PeriodicalId":402166,"journal":{"name":"Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video","volume":"38 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"LiveClip: towards intelligent mobile short-form video streaming with deep reinforcement learning\",\"authors\":\"Jian-Qian He, Miao Hu, Yipeng Zhou, Di Wu\",\"doi\":\"10.1145/3386290.3396937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent years have witnessed great success of mobile short-form video apps. However, most current video streaming strategies are designed for long-form videos, which cannot be directly applied to short-form videos. Especially, short-form videos differ in many aspects, such as shorter video length, mobile friendliness, sharp popularity dynamics, and so on. Facing these challenges, in this paper, we perform an in-depth measurement study on Douyin, one of the most popular mobile short-form video platforms in China. The measurement study reveals that Douyin adopts a rather simple strategy (called Next-One strategy) based on HTTP progressive download, which uses a sliding window with stop-and-wait protocol. Such a strategy performs poorly when network connection is slow and user scrolling is fast. The results motivate us to design an intelligent adaptive streaming scheme for mobile short-form videos. We formulate the short-form video streaming problem and propose an adaptive short-form video streaming strategy called LiveClip using a deep reinforcement learning (DRL) approach. 
Trace-driven experimental results prove that LiveClip outperforms existing state-of-the-art approaches by around 10%-40% under various scenarios.\",\"PeriodicalId\":402166,\"journal\":{\"name\":\"Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video\",\"volume\":\"38 9\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3386290.3396937\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386290.3396937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recent years have witnessed the great success of mobile short-form video apps. However, most current video streaming strategies are designed for long-form videos and cannot be directly applied to short-form videos. In particular, short-form videos differ from long-form ones in many aspects, such as shorter length, mobile-first viewing, and sharp popularity dynamics. Facing these challenges, in this paper, we perform an in-depth measurement study on Douyin, one of the most popular mobile short-form video platforms in China. The measurement study reveals that Douyin adopts a rather simple strategy (called the Next-One strategy) based on HTTP progressive download, which uses a sliding window with a stop-and-wait protocol. Such a strategy performs poorly when the network connection is slow and the user scrolls quickly. These results motivate us to design an intelligent adaptive streaming scheme for mobile short-form videos. We formulate the short-form video streaming problem and propose an adaptive short-form video streaming strategy called LiveClip using a deep reinforcement learning (DRL) approach. Trace-driven experimental results show that LiveClip outperforms existing state-of-the-art approaches by around 10%-40% under various scenarios.
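To see why a Next-One-style strategy degrades under slow networks and fast scrolling, the behavior the abstract describes can be captured in a toy simulation. The sketch below models a sliding window of one: while clip i plays, only clip i+1 is prefetched, and any bytes still missing when the user scrolls become startup delay. This simplified model (a clip must be fully fetched before playback starts) and all function and parameter names are illustrative assumptions, not the paper's implementation.

```python
def simulate_next_one(sizes_mb, dwell_s, bw_mbps):
    """Toy model of a Next-One (stop-and-wait) prefetch strategy.

    sizes_mb : list of clip sizes in megabytes
    dwell_s  : time in seconds the user watches each clip before scrolling
               (one entry per clip except the last)
    bw_mbps  : constant download bandwidth in megabits per second

    Returns the per-clip startup delay in seconds under the simplifying
    assumption that a clip plays only once fully downloaded.
    """
    bw_mb_per_s = bw_mbps / 8.0          # convert megabits/s to megabytes/s
    delays = [sizes_mb[0] / bw_mb_per_s]  # first clip: full cold fetch
    for i in range(len(sizes_mb) - 1):
        # during the dwell on clip i, prefetch at most the next clip
        prefetched = min(sizes_mb[i + 1], dwell_s[i] * bw_mb_per_s)
        missing = sizes_mb[i + 1] - prefetched
        # whatever is missing when the user scrolls becomes startup delay
        delays.append(missing / bw_mb_per_s)
    return delays

# fast scrolling (1 s dwell) on a slow 8 Mbps link: every scroll stalls
print(simulate_next_one([4, 4, 4], dwell_s=[1, 1], bw_mbps=8))
# slow browsing (10 s dwell) fully hides the prefetch of the next clip
print(simulate_next_one([4, 4], dwell_s=[10], bw_mbps=8))
```

Under the toy model, the first call yields multi-second delays on every scroll, while the second yields zero delay after the cold start; this is the gap between dwell time and fetch time that an adaptive scheme like LiveClip is designed to manage.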