{"title":"An examination of the transition of the Arjuna distributed transaction processing software from research to products","authors":"M. Little, S. Shrivastava","doi":"10.5555/1251516.1251520","DOIUrl":"https://doi.org/10.5555/1251516.1251520","url":null,"abstract":"The Arjuna transaction system began life in the mid 1980s as an academic project to examine the use of object-oriented techniques in the development of fault-tolerant systems; over 15 years later it is now a Hewlett-Packard product in its own right and is also embedded in several other offerings from HP. In addition, many of the original developers of Arjuna have accompanied the system on its journey and had first hand experience in taking this academic research vehicle into a commercial environment. At times the transition has been neither easy nor smooth but it has been interesting from many different perspectives. In this paper we shall attempt to give an overview of how this occurred and illustrate some of the lessons we have learned over the years.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117284030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tree houses and real houses: research and commercial software","authors":"Susan J. LoVerso, M. Seltzer","doi":"10.5555/1251516.1251521","DOIUrl":"https://doi.org/10.5555/1251516.1251521","url":null,"abstract":"Sleepycat Software develops and supports the Open Source software product Berkeley DB, the most widely deployed embedded database software in the world. Berkeley DB originated at the University of California, Berkeley, and in this paper, we discuss the differences between research software and a quality commercial product. Over the past years we have acquired an education in configuration, portability, and testing. The key message is that code quality, a willingness to rewrite or discard code when necessary, rigorous adherence to internal standards, and constant policing of ourselves are the key requirements of quality software.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"368 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122399804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental linking on HP-UX","authors":"D. Mikulin, Murali Vijayasundaram, Loreena Wong","doi":"10.5555/1251503.1251509","DOIUrl":"https://doi.org/10.5555/1251503.1251509","url":null,"abstract":"The linker is often a time bottleneck in the development of large applications. Traditional linkers process all input files, even if only one or two objects have changed since the previous link. To shorten link time, we have developed an incremental linker for HP-UX which only processes modified files. Users can take advantage of the performance gains without modifying their usage patterns of the existing HP-UX linker since the incremental linker is implemented on top of the regular 64-bit linker. In addition to the tasks of the normal linker, the incremental linker must save extra information about input files, symbols and relocations, allow for the expansion of existing files and addition of new ones by allocating padding spaces in the output file and use this information to perform in-place updates. The results of several different design considerations and tradeoffs are materialized in link-time performance gains of up to thirteen times that of a normal link for large applications.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116147149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic memory management with garbage collection for embedded applications","authors":"R. Brega, G. Rivera","doi":"10.5555/1251503.1251514","DOIUrl":"https://doi.org/10.5555/1251503.1251514","url":null,"abstract":"A software system can be called a safe-system with respect to memory, when it supports only strong-typing and it does not allow for the manual disposal of dynamic memory [2]. The first aspect guarantees that untyped, potentially dangerous operations are caught by the compiler or by run-time checks. The second issue is solved by the utilisation of an automatic memory reclamation scheme, i.e. a garbage collector.\u0000 In this paper we argue that the careful choice of the programming language, along with an automatic memory reclamation scheme can optimise memory usage, while ensuring that many of the logical errors related to memory can be avoided.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122951462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stub-code performance is becoming important","authors":"Andreas Haeberlen, J. Liedtke, Yoonho Park, Lars Reuther, Volkmar Uhlig","doi":"10.5555/1251503.1251507","DOIUrl":"https://doi.org/10.5555/1251503.1251507","url":null,"abstract":"As IPC mechanisms become faster, stub-code efficiency becomes a performance issue for local client/server RPCs and inter-component communication. Inefficient and unnecessary complex marshalling code can almost double communication costs. We have developed an experimental new IDL compiler that produces near-optimal stub code for gcc and the L4 microkernel. The current experimental IDL4 compiler cooperates with the gcc compiler and its x86 code generator. Other compilers or target machines would require different optimizations. In most cases, the generated stub code is approximately 3 times faster (and shorter) than the code generated by a commonly used portable IDL compiler. Benchmarks have shown that efficient stubs can increase application performance by more than 10 percent. The results are applied within IBM's SawMill project that aims at technology for constructing multi-server operating systems.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127322200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast implementation of DES and triple-DES on PA-RISC 2.0","authors":"F. Corella","doi":"10.5555/1251503.1251515","DOIUrl":"https://doi.org/10.5555/1251503.1251515","url":null,"abstract":"Encryption, however, is computationally expensive. A computer server that must encrypt data for thousands of clients before sending it over the network can easily become cryptobound. The capacity of the server is then determined by the speed at which it can perform encryption. This is especially the case when slow encryption protocols such as the Digital Encryption Standard (DES) or Triple-DES are employed. Since DES and Triple-DES are very widely used, it is important to optimize the performance of these algorithms.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117128263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HP caliper: an architecture for performance analysis tools","authors":"R. Hundt","doi":"10.5555/1251503.1251508","DOIUrl":"https://doi.org/10.5555/1251503.1251508","url":null,"abstract":"HP Caliper is an architecture for software developer tools that deal with executable (binary) programs. It provides a common framework that allows building of a wide variety of tools for doing performance analysis, profiling, coverage analysis, correctness checking, and testing. HP Caliper uses a technology known as dynamic instrumentation, which allows program instructions to be changed on-the-fly with instrumentation probes. Dynamic instrumentation makes HP Caliper easy to use: It requires no special preparation of an application, supports shared libraries, collects data for multiple threads, and has low intrusion and overhead. This paper describes HP Caliper for HP-UX, running on the IA-64 (Itanium) processor. It describes Caliper's architecture, dynamic instrumentation algorithm, and the experiences gathered during its implementation.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132008610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meeting performance goals with the HP-UX workload manager","authors":"Cliff McCarthy, Michael Murphy, Indira Subramanian","doi":"10.5555/1251503.1251513","DOIUrl":"https://doi.org/10.5555/1251503.1251513","url":null,"abstract":"The HP-UX Workload Manager helps workloads meet user-specified performance goals by dynamically adjusting their access to resources such as CPU. We implemented this workload manager as a part of a feedback control system, using existing resource control and performance instrumentation infrastructure.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"02 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124474273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Operational information systems: an example from the airline industry","authors":"Van Oleson, K. Schwan, G. Eisenhauer, Beth Plale, C. Pu, Dick Amin","doi":"10.5555/1251503.1251504","DOIUrl":"https://doi.org/10.5555/1251503.1251504","url":null,"abstract":"Our research is motivated by the scaleability, availability, and extensibility challenges in deploying open systems based, enterprise operational applications. We present Delta's mid-tier Operational Information Systems (OIS) as an approach for leveraging its legacy operational OLTP infrastructure, to participate in the emerging world of electronic commerce, as well as enable new applications. The approach is to place minimally intrusive 'taps' into the legacy OLTP systems to capture transactions as they occur for consistent replay in the mid-tier OIS. One important issue addressed by our work is the processing, and dissemination of information in the mid-tier system itself, potentially serving hundreds of thousands of access and display points, distributed across a highly geographically distributed system (e.g. airports world wide), and also involving large 'working sets' of operational data, used by applications that require rapid response and also rapid recovery from failures. To address the scaleability, availability, and cost of this OIS infrastructure, we are researching cluster computing techniques, as well as, devising replication and failover techniques. To address the communications scaleability requirements, we are experimenting with novel event-based implementations of information transport and processing, that include reliable multicast variations.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125248364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiences in measuring the reliability of a cache-based storage system","authors":"Dan Lambright","doi":"10.5555/1251503.1251505","DOIUrl":"https://doi.org/10.5555/1251503.1251505","url":null,"abstract":"We present our experiences in benchmarking the reliability of the cache component of a storage system in a development environment. The reliability metrics we measured are availability from the standpoint of the host and maintainability from the standpoint of the system operator. We created errors using software fault injection, and measured their impact using a combination of performance measurement techniques and the rehearsal of maintenance procedures. This paper gives three case studies. The first two describe experiments that recreate very specific breakdowns in the software logic, and the third describes an experiment simulating a memory hardware failure that creates unpredictable effects. We found that, taken together, these various techniques gave us a useful picture of how well our cache management software tolerated faults.","PeriodicalId":171901,"journal":{"name":"USENIX Workshop on Industrial Experiences with Systems Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128756363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}