{"title":"Detection of software modules with high debug code churn in a very large legacy system","authors":"T. Khoshgoftaar, E. B. Allen, N. Goel, A. Nandi, John McMullan","doi":"10.1109/ISSRE.1996.558896","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558896","url":null,"abstract":"Society has become so dependent on reliable telecommunications, that failures can risk loss of emergency service, business disruptions, or isolation from friends. Consequently, telecommunications software is required to have high reliability. Many previous studies define the classification fault prone in terms of fault counts. This study defines fault prone as exceeding a threshold of debug code churn, defined as the number of lines added or changed due to bug fixes. Previous studies have characterized reuse history with simple categories. This study quantified new functionality with lines of code. The paper analyzes two consecutive releases of a large legacy software system for telecommunications. We applied discriminant analysis to identify fault prone modules based on 16 static software product metrics and the amount of code changed during development. Modules from one release were used as a fit data set and modules from the subsequent release were used as a test data set. In contrast, comparable prior studies of legacy systems split the data to simulate two releases. We validated the model with a realistic simulation of utilization of the fitted model with the test data set. Model results could be used to give extra attention to fault prone modules and thus, reduce the risk of unexpected problems.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114571787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of reliable software via general combination of N-version programming and acceptance testing","authors":"B. Parhami","doi":"10.1109/ISSRE.1996.558714","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558714","url":null,"abstract":"N-version programming (NVP) and acceptance testing (AT) are techniques for ensuring reliable computation results from imperfect software. Various symmetric combinations of NVP and AT have also been suggested. We take the view that one can insert an AT at virtually any point in a suitably constructed multi-channel computation graph and that judicious placement of ATs will lead to cost-effective reliability improvement. Hence, as a general framework for the creation, representation, and analysis of combined NVP-AT schemes, we introduce MTV graphs, and their simplified data-driven version called DD-MTV graphs, composed of computation module (M), acceptance test (T), and voter (V) building blocks. Previous NVP-AT schemes, such as consensus recovery blocks, recoverable N-version blocks, and N-self-checking programs can be viewed as special cases of our general combining scheme. Results on the design and analysis of new NVP-AT schemes are presented and the reliability improvements are quantified. We show, e.g., that certain, somewhat asymmetric, combinations of M, T, and V building blocks can lead to higher reliabilities than previously proposed symmetric arrangements having comparable or higher complexities.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128697068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical strategy for testing pair-wise coverage of network interfaces","authors":"A. Williams, R. Probert","doi":"10.1109/ISSRE.1996.558835","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558835","url":null,"abstract":"Distributed systems consist of a number of network elements that interact with each other. As the number of network elements and interchangeable components for each network element increases, the trade-off that the system tester faces is the thoroughness of test configuration coverage vs. limited resources of time and expense that are available. An approach to resolving this trade-off is to determine a set of test configurations that test each pair-wise combination of network components. This goal gives a well-defined level of test coverage, with a reduced number of system configurations. To select such a set of test configurations, we show how to apply the method of orthogonal Latin squares, from the design of balanced statistical experiments. Since the theoretical treatment assumes constraints that may not be satisfied in practice, we then show how to adapt this approach to realistic application constraints.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128719250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policy-driven fault management in distributed systems","authors":"M. Katchabaw, H. Lutfiyya, A. D. Marshall, M. Bauer","doi":"10.1109/ISSRE.1996.558833","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558833","url":null,"abstract":"Management policies can be used to specify requirements about the desired behaviour of distributed systems. Violations of policies (faults) can then be detected, isolated, located and corrected using a policy-driven fault management system. Other work in this area to date has focused on network-level faults. We believe that in a distributed system it is more appropriate to focus on faults at the application level. Furthermore, this work has been largely domain-specific-a generic, structured approach to this problem is needed. Our work has focused on policy-driven fault management in distributed systems at the application level. In this paper, we define a generic architecture for policy-driven fault management and present a prototype system based on this architecture. We also discuss experience to date using and experimenting with our prototype system.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128875008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software reliability models: an approach to early reliability prediction","authors":"Carol S. Smidts, R. Stoddard, M. Stutzke","doi":"10.1109/ISSRE.1996.558733","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558733","url":null,"abstract":"Software reliability prediction models are of paramount importance since they provide early identification of cost overruns, software development process issues, optimal development strategies, etc. Existing prediction models were developed mostly during the past 5 to 10 years and, hence, have become obsolete. Furthermore, they are not based on a deep knowledge and understanding of the software development process. This limits their predictive power. This paper presents an approach to the prediction of software reliability based on a systematic identification of software process failure modes and their likelihoods. A direct consequence of the approach and its supporting data collection efforts is the identification of weak areas in the software development process. A Bayesian framework for the quantification of software process failure mode probabilities is recommended since it allows usage of historical data that are only partially relevant to the software at hand. The approach is applied to the requirements analysis phase.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122879403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fault exposure ratio estimation and applications","authors":"Michael Naixin Li, Y. Malaiya","doi":"10.1109/ISSRE.1996.558897","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558897","url":null,"abstract":"One of the most important parameters that control reliability growth is the fault exposure ratio (FER) identified by J.D. Musa et al. (1991). It represents the average detectability of the faults in software. Other parameters that control reliability growth are software size and execution speed of the processor which are both easily evaluated. The fault exposure ratio thus presents a key challenge in our quest towards understanding the software testing process and characterizing it analytically. It has been suggested that the fault exposure ratio may depend on the program structure, however the structuredness as measured by decision density may average out and may not vary with program size. In addition FER should be independent of program size. The available data sets suggest that FER varies as testing progresses. This has been attributed partly to the non-randomness of testing. We relate defect density to FER and present a model that can be used to estimate FER. Implications of the model are discussed. This model has three applications. First, it offers the possibility of estimating parameters of reliability growth models even before testing begins. Secondly, it can assist in stabilizing projections during the early phases of testing when the failure intensity may have large short term swings. Finally, since it allows analytical characterization of the testing process, it can be used in expressions describing processes like software test coverage growth.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"271 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124387630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient allocation of testing resources for software module testing based on the hyper-geometric distribution software reliability growth model","authors":"R. Hou, S. Kuo, Yi-Ping Chang","doi":"10.1109/ISSRE.1996.558884","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558884","url":null,"abstract":"A considerable amount of testing resources is required during software module testing. In this paper, based on the HGDM (Hyper-Geometric Distribution Model) software reliability growth model, we investigate the following optimal resource allocation problems in software module testing: (1) minimization of the number of software faults still undetected in the system after testing given a total amount of testing resources, and (2) minimization of the total amount of testing resources repaired, given the number of software faults still undetected in the system after testing. Furthermore, based on the concepts of \"average allocation\" and \"proportional allocation\", two simple allocation methods are also introduced. Experimental results show that the optimal allocation method can improve the quality and reliability of the software system much more significantly than these simple allocation methods can. Therefore, the optimal allocation method is very efficient for solving the testing resource allocation problem.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121092183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Avionics software problem occurrence rates","authors":"M. Shooman","doi":"10.1109/ISSRE.1996.558695","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558695","url":null,"abstract":"This paper discusses a feasibility study which developed estimates of the occurrence rate for significant avionics software problems. Two FAA databases, airworthiness directives (ADs) and service difficulty reports (SDRs) were used. A study of the AD database for large aircraft (1984-1994) revealed 33 avionics ADs, 13 of which were software related. Estimates were made of the operational hours for the fleet of commercial aircraft with computer avionics and the number of problem occurrences. Minimum, maximum and average occurrence rates were established. The average occurrence rate for the 6 resulting data sets was 0.15 per million operating hours. The nonoccurrence of ADs for the remaining avionics was \"bounded on the average\"; yielding less than 0.02 occurrences/per million hrs. The significant problem occurrence rate for the TCAS system (collision avoidance) software has motivated others to apply proof of correctness techniques to the specification and design of this software (Craigen et al., 1994).","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards automation of checklist-based code-reviews","authors":"F. Belli, R. Crisan","doi":"10.1109/ISSRE.1996.558687","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558687","url":null,"abstract":"Different types of code-reviews (Fagan-style code-inspections, Parnas-like active design reviews and walkthroughs) have been found to be very useful in improving the quality of software. In many cases reviewers use checklists to guide their analysis during review sessions. However, valuable, checklist-based code-reviews have the principal shortcoming of their high costs due to lack of supporting tools enabling at least partial automation of typical multiple appearing rules. This paper describes an approach towards semi-automation of some steps of individual review processes based on checklists. The method proposed is interactive, i.e. reviewers will be enabled to actualize, extend, and check the consistency and redundancy of their checklists. The basic idea underlying the approach is the usage of a rule-based system, adapting concepts of the compiler theory and knowledge engineering, for acquisition and representation of knowledge about the program. Redundant and conflicting knowledge about the program under study is recognized and solved by means of an embedded truth maintenance system. As a result of fault diagnosis, rules for fault classification are used. Software reliability models are applied to validate the results of each review session. The approach has shown promising preliminary results in analyses of conventional C-programs developed in the automotive industry.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114651605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyze-NOW-an environment for collection and analysis of failures in a network of workstations","authors":"Anshuman Thakur, R. Iyer","doi":"10.1109/ISSRE.1996.558682","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558682","url":null,"abstract":"This paper describes Analyze-NOW an environment for collection and analysis of failures/errors in a network of workstations. Descriptions cover the data collection methodology and the tool implemented to facilitate this process. Software tools used for analysis are described, with emphasis on the details of the implementation of the Analyzer, the primary analysis tool. Application of the tools is demonstrated by using them to collect and analyze failure data (for 32 week period) from a network of 69 SunOS-based workstations. Classification based on the source and the effect of faults is used to identify problem areas. Different types of failures encountered on the machines and the network are highlighted to develop a proper understanding of failures in a network environment. Lastly, a case is made for using the results from the analysis tool to pinpoint the problem areas in the network.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123038004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}