DeepQoE: Real-time Measurement of Video QoE from Encrypted Traffic with Deep Learning

Meng Shen, Jinpeng Zhang, Ke Xu, Liehuang Zhu, Jiangchuan Liu, Xiaojiang Du

2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS), June 2020. DOI: 10.1109/IWQoS49365.2020.9212897
With the dramatic increase of video traffic on the Internet, measuring video quality of experience (QoE) is becoming increasingly important, as it gives network operators insight into the quality of their video delivery services. The widespread adoption of end-to-end encryption protocols such as SSL/TLS, however, sets a barrier to QoE monitoring: the most valuable indicators available in cleartext traffic are no longer observable after encryption. Existing studies on video QoE measurement in encrypted traffic support only coarse-grained QoE metrics or suffer from low accuracy. In this paper, we propose DeepQoE, a new approach that enables real-time video QoE measurement from encrypted traffic. We summarize critical fine-grained QoE metrics, including startup delay, rebuffering, and video resolution. To infer these metrics accurately and in real time, we build DeepQoE using Convolutional Neural Networks (CNNs) with a carefully designed input representation and architecture. More specifically, DeepQoE leverages only packet Round-Trip Time (RTT) in upstream traffic as its input. Evaluation results on real-world datasets collected from two popular content providers (i.e., YouTube and Bilibili) show that DeepQoE improves QoE measurement accuracy by up to 22% over state-of-the-art methods.
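To make the described setup concrete, the sketch below shows one plausible shape such a model could take: a small 1D CNN that consumes a fixed-length window of upstream packet RTT samples and emits three outputs (startup delay, a rebuffering indicator, and a resolution class). This is an illustrative assumption only; the QoECnnSketch module, its layer sizes, and the 256-sample window are hypothetical and are not taken from the paper.

# Illustrative sketch only (not the architecture from the paper): a minimal
# 1D CNN mapping a window of upstream packet RTT samples to three QoE
# outputs -- startup delay (regression), rebuffering (binary logit), and
# video resolution (class logits). Layer sizes and the 256-sample window
# are assumptions made for this example.
import torch
import torch.nn as nn

class QoECnnSketch(nn.Module):
    def __init__(self, num_resolutions: int = 5):
        super().__init__()
        # Convolutions over the RTT time series (single input channel).
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (batch, 32, 1)
        )
        # One output head per QoE metric.
        self.startup_delay = nn.Linear(32, 1)             # seconds
        self.rebuffering = nn.Linear(32, 1)               # binary logit
        self.resolution = nn.Linear(32, num_resolutions)  # class logits

    def forward(self, rtt_window: torch.Tensor):
        # rtt_window: (batch, window_len) upstream RTT samples in seconds.
        x = self.features(rtt_window.unsqueeze(1)).squeeze(-1)  # (batch, 32)
        return self.startup_delay(x), self.rebuffering(x), self.resolution(x)

if __name__ == "__main__":
    model = QoECnnSketch()
    fake_rtts = torch.rand(8, 256) * 0.2   # 8 flows, 256 RTT samples each
    delay, rebuf_logit, res_logits = model(fake_rtts)
    print(delay.shape, rebuf_logit.shape, res_logits.shape)

In a real pipeline, the startup-delay head would typically be trained with a regression loss and the other two heads with classification losses; how RTTs are extracted from encrypted flows and how windows are formed are beyond what the abstract specifies.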