{"title":"Using reversible computing to achieve fail-safety","authors":"P. Bishop","doi":"10.1109/ISSRE.1997.630863","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630863","url":null,"abstract":"This paper describes a fail-safe design approach that can be used to achieve a high level of fail-safety with conventional computing equipment which may contain design flaws. The method is based on the well-established concept of reversible computing. Conventional programs destroy information and hence cannot be reversed. However it is easy to define a virtual machine that preserves sufficient intermediate information to permit reversal. Any program implemented on this virtual machine is inherently reversible. The integrity of a calculation can therefore be checked by reversing back from the output values and checking for the equivalence of intermediate values and original input values. By using different machine instructions on the forward and reverse paths, errors in any single instruction execution can be revealed. Random corruptions in data values are also detected. An assessment of the performance of the reversible computer design for a simple reactor trip application indicates that it runs about ten times slower than a conventional software implementation and requires about 20 kilobytes of additional storage. The trials also show a fail-safe bias of better than 99.998% for random data corruptions, and it is argued that failures due to systematic flaws could achieve similar levels of fail-safe bias. Potential extensions and applications of the technique are discussed.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126438216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic test generation using checkpoint encoding and antirandom testing","authors":"Huifang Yin, Zemen Lebne-Dengel, Y. Malaiya","doi":"10.1109/ISSRE.1997.630850","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630850","url":null,"abstract":"The implementation of an efficient automatic test generation scheme for black box testing is discussed. It uses checkpoint encoding and antirandom testing schemes. Checkpoint encoding converts test generation to a binary problem. The checkpoints are selected as the boundary and illegal cases in addition to valid cases to probe the input space. Antirandom testing selects each test case such that it is as different as possible from all the previous tests. The implementation is illustrated using benchmark examples that have been used in the literature. Use of random testing both with checkpoint encoding and without is also reported. Comparison and evaluation of the effectiveness of these methods is also presented. Implications of the observations for larger software systems are noted. Overall, antirandom testing gives higher code coverage than random testing with checkpoint encoding, which in turn gives higher code coverage than pure random testing.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130279340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability analysis of systems based on software and human resources","authors":"A. Pasquini, G. Pistolesi, A. Rizzo","doi":"10.1109/ISSRE.1997.630883","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630883","url":null,"abstract":"Safety critical systems require an assessment activity to verify that they are able to perform their functions in specified use environments. This activity would benefit from evaluation methodologies that consider these systems as a whole and not as the simple sum of their parts. Indeed, analysis of accidents involving such systems has shown that they are rarely due to the simple failure of one of their components. Accidents are the outcome of a composite causal scenario where human, software and hardware failures combine in a complex pattern. On the contrary, dependability analysis and evaluation of safety critical systems are based on techniques and methodologies that concern human and computer separately, and whose results can hardly be integrated. The analogies between the processes of: (1) software reliability growth due to testing and the related fault removal; (2) improvement of man machine interface due to preliminary operative feedback; (3) improvement of the operator performances due to his learning activity; suggest an effort for a common evaluation approach. Only the first one of these processes is currently modelled by using mathematical methods. The paper considers extending these methods to study the reliability growth process of other system components, i.e. the operator and the man machine interface. To study the feasibility of the approach, the paper analyses the results of an experiment in which the reliability of a system is evaluated using trend analysis. The evaluation concerns the graphic man machine interface and the operators, and could easily be extended to the software control system.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125069365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Confidence-based reliability and statistical coverage estimation","authors":"W. Howden","doi":"10.1109/ISSRE.1997.630877","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630877","url":null,"abstract":"In confidence-based reliability measurement, we determine that we are at least C confident that the probability of a program failing is less than or equal to a bound B. The basic results of this approach are reviewed and several additional results are introduced, including the adaptive sampling theorem which shows how confidence can be computed when faults are corrected as they appear in the testing process. Another result shows how to carry out testing in parallel. Some of the problems of statistical testing are discussed and an alternative method for establishing reliability called statistical coverage is introduced. At the cost of making reliability estimates that are relative to a fault model, statistical coverage eliminates the need for output validation during reliability estimation and allows the incorporation of non-statistical testing results into the statistical reliability estimation process. Statistical testing and statistical coverage are compared, and their relationship with traditional reliability growth modeling approaches is briefly discussed.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121096249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software reliability in weapon systems","authors":"P. Carnes","doi":"10.1109/ISSRE.1997.630855","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630855","url":null,"abstract":"Summary form only given, as follows. The Air Force Operational Test and Evaluation Center (AFOTEC) is responsible for the operational testing of application software used in weapon systems. In order to determine if a system is ready for operational testing, raw data from unit and integration testing is analyzed to determine if the software is exhibiting maturity and reliability growth. A graphical tool containing a library of widely accepted software reliability growth models then selects the optimum model by first subjecting the data to goodness of fit criterion and then prequential likelihood as a measure of predictive strength. The intent is to use this data with operational profiles to determine the expected software reliability for a given system operational mission.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131659462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An operational profile for the Cartridge Support Software","authors":"Ken Chruscielski, J. Tian","doi":"10.1109/ISSRE.1997.630865","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630865","url":null,"abstract":"This paper describes our experience and findings in constructing an operation profile for the Lockheed Martin Tactical Aircraft System's (LMTAS) Cartridge Support Software (CSS). The process is an adaptation of Musa's (1993) 5-step approach. The resulting operational profile was reviewed and evaluated by LMTAS's software product manager, system engineers, and software test engineers. An account of the findings and conclusions from the independent review and evaluation is discussed. This operational profile allowed the LMTAS software engineering team to derive some clear insights about the usage rate of the CSS functions from the customer's perspective.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125329488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Qualifying The Reliability Of COTS Software Components","authors":"J. Voas, Charles Howell, B. Everett","doi":"10.1109/ISSRE.1997.630859","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630859","url":null,"abstract":"With demands for increased software reuse, a market for buying and selling generic software functionality is emerging. This generic software functionality is usually referred to as Commercial-off-the-shelf (COTS) software. Generic functionality has greater commercial viability than specialized functionality, and can be used in diverse application domains, including systems with high-assurance requirements. A problem arises, however, in that the quality of COTS software cannot be guaranteed past what the developing organization certifies, which is usually little. This puts the onus on the buyer to determine whether the quality of the COTS software is sufficient for their application. This panel will discuss various means for qualifying the reliability of COTS and third-party software. This panel may also discuss some of the legal and certification issues that are associated with trying to standardize on a fixed set of assessment techniques to reduce this concern.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132087815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using proxy failure times with the Jelinski-Moranda software reliability model","authors":"M. A. Qureshi, D. Jeske","doi":"10.1109/ISSRE.1997.630884","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630884","url":null,"abstract":"We introduce the concept of proxy failure times for situations where system test data only consists of the fraction of test cases that fail for a set of execution scenarios. We show how proxy failure times can be simulated if external information about the user frequency of the test cases is available. We develop statistical inference procedures for fitting the Jelinski-Moranda model (Z. Jelinski and P. Moranda, 1972). In particular, we present a graphical diagnostic for testing goodness of fit and show how it suggests appropriate transformations of the failure times that would improve the fit. Influential observations are also identified by the diagnostic and moreover, it provides regression estimators of the model parameters as a quick alternative to the maximum likelihood estimators. Formulas for likelihood based confidence intervals for the model parameters are provided. The simulation of proxy failure times and the statistical inference procedures for the Jelinski-Moranda model are illustrated with an example.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129234666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software reliability growth models: assumptions vs. reality","authors":"A. Wood","doi":"10.1109/ISSRE.1997.630858","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630858","url":null,"abstract":"Software reliability growth models are often differentiated by assumptions regarding testing and defect repair. In this paper, these model assumptions are compared to Tandem's software development and test environment. The key differences between our environment and the standard model assumptions are that (1) the total number of defects can increase due to new code being introduced during system test, but the models normally assume a constant total number of defects, and (2) the defect-finding efficiency of tests can vary but is assumed constant by the models. In spite of the model assumption violations, we (and other practitioners) continue to use the models because they are easy to apply and because the results seem reasonable. However, we are concerned about the potential inaccuracy of the models and would like to determine the effect of the assumption violations. This paper contains suggestions for research to quantify the model inaccuracy and help practitioners make accuracy vs. model complexity tradeoffs.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122505729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"STL: a tool for on-line software update and rejuvenation","authors":"S. Yajnik, Yennun Huang","doi":"10.1109/ISSRE.1997.630872","DOIUrl":"https://doi.org/10.1109/ISSRE.1997.630872","url":null,"abstract":"Summary form only given, as follows. A large number of tools and techniques have been developed in the past to achieve a 24×7 system availability (24 hours a day and 7 days a week) by reducing unscheduled system downtime due to failures. However, a highly available or fault-tolerant system may still have to be taken off-line for software and hardware updates, maintenance and rejuvenation. Therefore, the scheduled downtime for maintenance could become the major source of system unavailability. One big challenge in a highly available system is to keep the system running while it is undergoing software updates or bug fixes. In this paper, we describe a tool that can be used to perform an online update of software in a cluster environment. The tool consists of a protocol compiler (stgen) and a library (libst) for marshaling and unmarshaling data during a software update. The tool has the ability to transfer complex data structures between two processes even if the data definitions in the two processes are different. The data transfer format is machine-independent. Hence, the tool can transfer data across processes running on different machine types. The paper describes some real-life applications of the tool and presents performance measurements of the tool for these applications.","PeriodicalId":170184,"journal":{"name":"Proceedings The Eighth International Symposium on Software Reliability Engineering","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125101763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}