An optimal client buffer model for multiplexing HTTP streams
Saayan Mitra, Viswanathan Swaminathan
2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), published 2012-11-12
DOI: 10.1109/MMSP.2012.6343455 (https://doi.org/10.1109/MMSP.2012.6343455)
Citations: 5
Abstract
The basic tenet of HTTP streaming is to deliver fragments of video and audio, individually addressable chunks of content, over HTTP. Some media players consume incoming video and audio data only in a time-ordered multiplexed format. If alternate tracks need to be added after the media has been packaged, the media must be repackaged, which duplicates content and results in multiple multiplexed files. Additionally, for adaptive streaming, a full set of such files must be created for each bitrate. Alternatively, it is more efficient to store the component tracks separately, fetch only the required tracks, and multiplex audio and video in the client before sending the data to the decoder. To deliver an optimal viewing experience, the client has to balance seemingly conflicting constraints: handling network jitter, minimizing the time to switch to an alternate track, and minimizing live latency. For instance, to absorb more network jitter, more data should be available in the buffers, but this increases the switching latency. We introduce a formal buffer model for a client that gathers video and audio fragments and multiplexes them on the fly. This model uses separate video and audio buffers, a multiplexed buffer in the application, and a decoding buffer associated with the decoder. We model the buffer sizes, the thresholds at which they request data from the network, and the rates at which data is transferred between buffers. We show that these buffers can be designed, by varying these parameters, to optimize for the above constraints. The buffer model can also be leveraged to decide when to switch in adaptive bitrate streaming. We further validate these results with experiments from our implementation.
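To make the buffer pipeline the abstract describes more concrete (separate video and audio buffers feeding an application-level multiplexed buffer, which in turn feeds the decoder's buffer, with per-buffer fill thresholds triggering network requests), here is a minimal Python sketch. All names and numbers in it (Buffer, MuxClient, the capacities, thresholds, and rates) are hypothetical choices for illustration, not values or interfaces from the paper.

```python
from dataclasses import dataclass


@dataclass
class Buffer:
    """A FIFO buffer measured in seconds of media (illustrative model)."""
    capacity: float           # maximum seconds of media it can hold
    request_threshold: float  # below this level, more data should be fetched
    level: float = 0.0        # current fill, in seconds

    def needs_data(self) -> bool:
        return self.level < self.request_threshold

    def push(self, seconds: float) -> float:
        """Add media; return the amount actually accepted."""
        accepted = min(seconds, self.capacity - self.level)
        self.level += accepted
        return accepted

    def pull(self, seconds: float) -> float:
        """Drain media; return the amount actually drained."""
        drained = min(seconds, self.level)
        self.level -= drained
        return drained


class MuxClient:
    """Fragments flow: video/audio buffers -> mux buffer -> decode buffer."""

    def __init__(self) -> None:
        # Capacities and thresholds are arbitrary example parameters.
        self.video = Buffer(capacity=10.0, request_threshold=4.0)
        self.audio = Buffer(capacity=10.0, request_threshold=4.0)
        self.mux = Buffer(capacity=4.0, request_threshold=2.0)
        self.decode = Buffer(capacity=2.0, request_threshold=1.0)

    def tick(self, dt: float, mux_rate: float = 2.0) -> list:
        """Advance the model by dt seconds of wall-clock playback."""
        # Multiplexing consumes video and audio in lockstep (time order),
        # limited by the mux rate and by free space in the mux buffer.
        want = min(mux_rate * dt,
                   self.mux.capacity - self.mux.level,
                   self.video.level,
                   self.audio.level)
        self.video.pull(want)
        self.audio.pull(want)
        self.mux.push(want)
        # Feed the decoder's buffer from the mux buffer.
        feed = self.mux.pull(min(dt, self.decode.capacity - self.decode.level))
        self.decode.push(feed)
        # The decoder plays out dt seconds in real time.
        self.decode.pull(dt)
        # Report which component buffers have fallen below their thresholds,
        # i.e. which tracks should be requested from the network next.
        return [name for name, b in (("video", self.video),
                                     ("audio", self.audio))
                if b.needs_data()]
```

In this toy model, widening the video/audio buffers (or lowering their thresholds later) absorbs more network jitter, but any media already pulled past the mux stage is committed to the current track, which is the tension with switching latency that the abstract points out.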