{"title":"Chaining: a generalized batching technique for video-on-demand systems","authors":"S. Sheu, K. Hua, Wallapak Tavanapong","doi":"10.1109/MMCS.1997.609583","DOIUrl":null,"url":null,"abstract":"Although the bandwidth of the storage I/O typically dictates the performance of a conventional DBMS, network-I/O bandwidth limitation is the main operating constraint of most multimedia database systems. In spite of the fact that the throughput of a public network (e.g. ATM) can be huge, the network-I/O bottleneck limits the number of client stations a media server can support simultaneously. A possible solution to this problem is to batch requests for the same video and multicast the data to these requesters to save the network I/O bandwidth. A disadvantage of this scheme is that it unfairly forces requests arriving early in a batch to wait for the latecomers. As a result, the reneging rate can be high in a system which employs this technique. To reduce the long access latency, we examine in this paper a new batching mechanism called chaining. This approach allows the server to serve a \"chain\" of client stations using a single data stream. The idea is to pipeline the data stream through the chain of stations. Requests arriving early in a chain (virtual batch), therefore, do not have to experience long delays as in conventional batching. 
Our simulation results based on an ATM networking environment indicate that very significant performance improvement over batching can be obtained.","PeriodicalId":302885,"journal":{"name":"Proceedings of IEEE International Conference on Multimedia Computing and Systems","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"217","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of IEEE International Conference on Multimedia Computing and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMCS.1997.609583","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 217
Abstract
Although storage-I/O bandwidth typically dictates the performance of a conventional DBMS, network-I/O bandwidth is the main operating constraint of most multimedia database systems. Even though the throughput of a public network (e.g., ATM) can be very high, the network-I/O bottleneck limits the number of client stations a media server can support simultaneously. A possible solution to this problem is to batch requests for the same video and multicast the data to the requesters, saving network-I/O bandwidth. A disadvantage of this scheme is that it unfairly forces requests arriving early in a batch to wait for the latecomers; as a result, the reneging rate can be high in a system that employs this technique. To reduce this long access latency, we examine in this paper a new batching mechanism called chaining. This approach allows the server to serve a "chain" of client stations using a single data stream. The idea is to pipeline the data stream through the chain of stations, so requests arriving early in a chain (a virtual batch) do not have to experience the long delays of conventional batching. Our simulation results, based on an ATM networking environment, indicate that chaining obtains a very significant performance improvement over batching.
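The latency difference the abstract describes can be illustrated with a toy model (a hypothetical sketch, not the paper's ATM simulator; the function names, arrival times, and parameters below are invented for illustration). Under conventional batching, every request waits until its batch window closes before the multicast begins; under chaining, each new client starts playback immediately, receiving the stream from the buffer of the station ahead of it, and the server only opens a fresh stream when the inter-arrival gap exceeds what a client's buffer can cover.

```python
def batching_latency(arrivals, window):
    """Mean start-up delay under batching: each request waits
    until the end of its batch window, when the multicast fires."""
    waits = []
    for t in arrivals:
        batch_end = ((t // window) + 1) * window  # window close time
        waits.append(batch_end - t)
    return sum(waits) / len(waits)

def chaining_latency(arrivals, buffer_len):
    """Under chaining, playback starts immediately (latency ~0).
    A new arrival is fed from the previous station's buffer if the
    gap fits; otherwise the server opens a fresh stream, starting
    a new chain. Returns (mean latency, server streams used)."""
    streams = 0
    prev = None
    for t in arrivals:
        if prev is None or t - prev > buffer_len:
            streams += 1  # gap too large: server sends a new stream
        prev = t
    return 0.0, streams

arrivals = [0, 1, 3, 7, 8, 15, 16, 20]
print(batching_latency(arrivals, window=10))      # mean wait 6.25
print(chaining_latency(arrivals, buffer_len=5))   # (0.0, 2 streams)
```

In this toy run, batching makes requests wait 6.25 time units on average, while chaining serves all eight clients instantly using only two server streams, which mirrors the trade-off the abstract claims: near-zero access latency at the cost of client-side buffering and forwarding.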