{"title":"Spout: a transparent distributed execution engine for Java applets","authors":"T. Chiueh, Harish Sankaran, A. Neogi","doi":"10.1109/ICDCS.2000.840951","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840951","url":null,"abstract":"The advent of executable contents such as Java applets exposes WWW users to a new class of attacks that were not possible before. Serious security breach incidents due to implementation bugs have arisen repeatedly in the past several years. Without a provably correct implementation of Java's security architecture specification, it is difficult to make any conclusive statements about the security characteristics of current Java virtual machines. The Spout project takes an alternative approach to addressing Java's security problems. Rather than attempt a provably secure implementation, we aim to confine the damage of malicious Java applets to selected machines, thus protecting resources behind an organization's firewall from attacks by malicious or buggy applets. Spout is essentially a distributed Java execution engine that transparently decouples the processing of an incoming applet's application logic from that of the graphical user interface (GUI), such that the only part of an applet that actually runs on the requesting user's host is the harmless GUI code. A unique feature of the Spout architecture that does not exist in other similar systems is that it is completely transparent to, and does not require any modifications to, WWW browsers or class libraries on the end hosts. This paper describes the design, implementation, and performance measurements of the first Spout prototype, which also incorporates run-time resource monitoring mechanisms to counter denial-of-service attacks.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121397307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction-capable data compression algorithms for improving transmission efficiency on distributed systems","authors":"H. Chiou, A. I. Lai, C. Lei","doi":"10.1109/ICDCS.2000.840982","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840982","url":null,"abstract":"Network bandwidth is a limited and precious resource in distributed computing environments. Insufficient bandwidth will severely degrade the performance of a distributed computing task that exchanges massive amounts of data among the networked hosts. A feasible way to save bandwidth is to incorporate data compression during transmission. However, blind (or unconditional) compression may simply waste CPU power and even slow down the overall network transfer rate if the data to be transmitted are hard to compress. We present a prediction-capable lossless data compression algorithm to address this problem. By adapting to the compression speed of a host CPU, the current system load, and the network speed, our algorithm can accurately estimate the compression time of each given data block and decide whether it should be compressed or not. Experimental results indicate that our prediction mechanism is both efficient and effective, achieving 93% prediction accuracy at a cost of only 3.2% of the execution time of unconditional compression.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127301305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive, perception-driven error spreading scheme in continuous media streaming","authors":"S. Varadarajan, H. Q. Ngo, J. Srivastava","doi":"10.1109/ICDCS.2000.840960","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840960","url":null,"abstract":"For transmission of continuous media (CM) streams such as audio and video over the Internet, a critical issue is that periodic network overloads cause bursty packet losses. Studies on human perception of audio and video streams have shown that bursty losses have the most annoying effects. Hence, addressing this issue is critical for multimedia applications such as Internet telephony, videoconferencing, distance learning, etc. Classical error handling schemes like retransmission and forward error recovery have the undesirable effects of (a) introducing timing variations, which is unacceptable for isochronous traffic, and (b) using up valuable bandwidth, potentially exacerbating the problem. This paper introduces a new concept called error spreading, a transformation technique that takes an input sequence of packets (from an audio or video stream) and permutes them before transmission. The packets are then un-permuted at the receiver before delivery to the application. The purpose is to spread out bursty network errors, in order to achieve better perceptual quality of the transmitted stream. Analysis has been done to determine the provable lower bound on bursty errors tolerable by this class of protocols. An algorithm to generate the optimal permutation for a given network loss rate is presented. While our previous work had focused on streams with no inter-frame dependencies, e.g. MJPEG encoded video, in this paper the technique is generalized to streams with inter-frame dependencies, e.g. MPEG encoded video.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131230146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient RMI: dynamic specialization of object serialization","authors":"K. Kono, T. Masuda","doi":"10.1109/ICDCS.2000.840943","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840943","url":null,"abstract":"This paper describes a novel approach to object serialization in remote method invocation (RMI). Object serialization transforms objects' representations between heterogeneous platforms. Efficient serialization is a primary concern in RMI because the conventional approaches incur large runtime overheads. The approach described specializes a serializing routine dynamically according to a receiver's platform, and this routine converts the sender's in-memory representations of objects directly into the receiver's in-memory representations. This approach simplifies the process of RMI: the receiver can access the passed objects immediately, without any data copies or data conversions. A new platform can join the existing community of senders and receivers because a specialized routine for the platform is generated as needed. Experimental results show that significant performance gains are obtained by this approach. The prototype implementation of this approach was 1.9-3.0 times faster than Sun XDR, and the time needed for generating a specialized routine was only 0.6 msec.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134352920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scheduling with global information in distributed systems","authors":"F. Petrini, Wu-chun Feng","doi":"10.1109/ICDCS.2000.840933","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840933","url":null,"abstract":"Buffered coscheduling is a distributed scheduling methodology for time-sharing communicating processes in a distributed system, e.g., a PC cluster. The principal mechanisms involved in this methodology are communication buffering and strobing. With communication buffering, communication generated by each processor is buffered and performed at the end of regular intervals (or time slices) to amortize communication and scheduling overhead. This regular communication structure is then leveraged by introducing a strobing mechanism which performs a total exchange of information at the end of each time slice. Thus, a distributed system can rely on this global information to more efficiently schedule communicating processes, rather than relying on isolated or implicit information gathered from local events between processors. We describe how buffered coscheduling is implemented in the context of our SMART simulator. We then present performance measurements for two synthetic workloads and demonstrate the effectiveness of buffered coscheduling under different computational granularities, context-switch times, and time-slice granularities.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129372519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A content placement and management system for distributed Web-server systems","authors":"Chu-Sing Yang, Mon-Yen Luo","doi":"10.1109/ICDCS.2000.840986","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840986","url":null,"abstract":"Clusters of commodity computers are becoming an increasingly popular approach for building cost-effective high-performance Internet servers. However, how to place and manage content in such a distributed and complex system is becoming a challenging problem. In particular, such distributed servers tend to be more heterogeneous, and this heterogeneity will further increase the management burden. This paper describes the motivation, design, implementation and performance of a content placement and management system for a heterogeneous distributed Web server.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125629612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On request forwarding for dynamic Web caching hierarchies","authors":"C. Chiang, Yingjie Li, Ming T. Liu, M. E. Muller","doi":"10.1109/ICDCS.2000.840937","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840937","url":null,"abstract":"We propose a Web caching scheme, based on the caching neighborhood protocol, featuring dynamic caching hierarchies as its underlying infrastructure. Dynamic Web caching hierarchies consist of proxy servers that build hierarchies on a per-request basis, in contrast to static Web caching hierarchies, which comprise proxy servers preconfigured into hierarchies. Concerns about the overhead and efficiency of forwarding requests individually have driven conventional Web caching schemes to use static Web caching hierarchies. Nevertheless, we show that a Web caching scheme featuring dynamic caching hierarchies can be both efficient and effective in request forwarding.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126091758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"System mechanisms for partial rollback of mobile agent execution","authors":"Markus Straßer, K. Rothermel","doi":"10.1109/ICDCS.2000.840903","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840903","url":null,"abstract":"Mobile agent technology has been proposed for various fault-sensitive application areas, including electronic commerce, systems management and active messaging. Recently proposed protocols providing the exactly-once execution of mobile agents allow the usage of mobile agents in these application areas. Based on these protocols, a mechanism for the application-initiated partial rollback of the agent execution is presented. The rollback mechanism uses compensation operations to roll back the effects of the agent execution on the resources and uses a mixture of physical logging and compensation operations to roll back the state of the agent. The introduction of different types of compensation operations allows performance improvements during the agent rollback.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124733137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On low-cost error containment and recovery methods for guarded software upgrading","authors":"A. Tai, K. Tso, L. Alkalai, S. Chau, W. Sanders","doi":"10.1109/ICDCS.2000.840969","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840969","url":null,"abstract":"To assure dependable onboard evolution, we have developed a methodology called guarded software upgrading (GSU). We focus on a low-cost approach to error containment and recovery for GSU. To ensure low development cost, we exploit inherent system resource redundancies as the fault tolerance means. In order to mitigate the effect of residual software faults at low performance cost, we take a crucial step in devising error containment and recovery methods by introducing the confidence-driven notion. This notion complements the message-driven (or communication-induced) approach employed by a number of existing checkpointing protocols for tolerating hardware faults. In particular, we discriminate between the individual software components with respect to our confidence in their reliability and keep track of changes of our confidence (due to knowledge about potential process state contamination) in particular processes. This, in turn, enables the individual processes in the spaceborne distributed system to make decisions locally at run-time, on whether to establish a checkpoint upon message passing and whether to roll back or roll forward during error recovery. The resulting message-driven, confidence-driven approach enables cost-effective checkpointing and cascading-rollback free recovery.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128118553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving distributed workload performance by sharing both CPU and memory resources","authors":"Xiaodong Zhang, Yanxia Qu, Li Xiao","doi":"10.1109/ICDCS.2000.840934","DOIUrl":"https://doi.org/10.1109/ICDCS.2000.840934","url":null,"abstract":"We develop and examine job migration policies by considering effective usage of global memory in addition to CPU load sharing in distributed systems. When a node is identified as lacking sufficient memory space to serve jobs, one or more jobs of the node will be migrated to remote nodes with low memory allocations. If the memory space is sufficiently large, the jobs will be scheduled by a CPU-based load sharing policy. Following the principle of sharing both CPU and memory resources, we present several load sharing alternatives. Our objective is to reduce the number of page faults caused by unbalanced memory allocations for jobs among distributed nodes, so that the overall performance of a distributed system can be significantly improved. We have conducted trace-driven simulations to compare CPU-based load sharing policies with our policies. We show that our load sharing policies not only improve performance of memory-bound jobs, but also maintain the same load sharing quality as the CPU-based policies for CPU-bound jobs. Regarding remote execution and preemptive migration strategies, our experiments indicate that strategy selection in load sharing depends on the memory demand of jobs: remote execution is more effective for memory-bound jobs, and preemptive migration is more effective for CPU-bound jobs. Our CPU-memory-based policy, using either the high-performance or the high-throughput approach together with the remote execution strategy, performs the best for both CPU-bound and memory-bound jobs.","PeriodicalId":284992,"journal":{"name":"Proceedings 20th IEEE International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130158494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}