{"title":"Task decomposition testing and metrics for concurrent programs","authors":"C. Chung, T. Shih, Ying-Hong Wang, Wei-Chuan Lin, Ying-Feng Kuo","doi":"10.1109/ISSRE.1996.558726","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558726","url":null,"abstract":"Software testing and metrics are two important approaches to assuring the reliability and quality of software. The emergence of concurrent programming in recent years introduces new testing problems and difficulties that cannot be solved by testing techniques for traditional sequential programs. One of the difficulties is that concurrent programs can have many instances of execution for the same set of input data. Many concurrent-program testing methodologies address controlled execution and determinism, but there are few discussions of concurrent software testing from an inter-task viewpoint. Yet the common characteristics of concurrent programming are the explicit identification of large-grain parallel computation units (tasks) and explicit inter-task communication via rendezvous-style mechanisms. In this paper, we focus on testing concurrent programs through task decomposition. We propose four criteria for testing a concurrent program; the programmer can choose an appropriate testing strategy depending on the properties of the concurrent program. Associated with the strategies, four equations are provided to measure the complexity of concurrent programs.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132996470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A replication technique based on a functional and attribute grammar computation model","authors":"A. Cherif, Masato Suzuki, T. Katayama","doi":"10.1109/ISSRE.1996.558862","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558862","url":null,"abstract":"Presents a replication technique based on the FTAG (fault-tolerant attribute grammar) computation model, where instances of a replicated application are active on different groups of processors called replicas. FTAG is a functional and attribute-based model. The developed replication technique implements \"active parallel replication\", i.e. all replicas are active and concurrently compute a different piece of the application's parallel code. In our model, replicas cooperate not only to detect and mask failures but also to perform parallel computation. The replication mechanisms are supported by the FTAG run-time system and are fully application-transparent. Different novel mechanisms for checkpointing and recovery are developed. Rollback is achieved only if the system experiences multiple failures, otherwise forward recovery is performed. The replication technique takes full advantage of parallel computation to reduce the computation time.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133147694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using the genetic algorithm to build optimal neural networks for fault-prone module detection","authors":"R. Hochman, T. Khoshgoftaar, E. B. Allen, J. Hudepohl","doi":"10.1109/ISSRE.1996.558759","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558759","url":null,"abstract":"The genetic algorithm is applied to developing optimal or near-optimal backpropagation neural networks for fault-prone/not-fault-prone classification of software modules. The algorithm considers each network in a population of neural networks as a potential solution to the optimal classification problem. Variables governing learning and other parameters, together with the network architecture, are represented as substrings (genes) in a machine-level bit string (chromosome). When the population undergoes simulated evolution using genetic operators (selection based on a fitness function, crossover, and mutation), the average performance increases in successive generations. We found that, on the same data, compared with the best manually developed networks, evolved networks produced improved classifications in considerably less time, with no human effort, and with greater confidence in their optimality or near-optimality. Strategies for devising a fitness function specific to the problem are explored and discussed.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116489766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical evaluation of maximum likelihood voting in failure correlation conditions","authors":"K. Kim, M. Vouk, D. McAllister","doi":"10.1109/ISSRE.1996.558890","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558890","url":null,"abstract":"The maximum likelihood voting (MLV) strategy was recently proposed as one of the most reliable voting methods. The strategy determines the most likely correct result based on the reliability history of each software version. However, the theoretical results were obtained under the assumption that inter-version failures are not correlated by common-cause faults. We first discuss the issues that arise in practical implementation of MLV, and present an extended version of the MLV algorithm that uses component reliability estimates to break voting ties. We then empirically evaluate the implemented MLV strategy in a situation where the inter-version failures are highly correlated. Our results show that, although in real situations MLV carries no reliability guarantees, it tends to be statistically more reliable, even under high inter-version correlation conditions, than the other voting strategies that we have examined. We also compare implemented MLV performance with that of the Recovery Block and hybrid Consensus Recovery Block approaches. Our results show that MLV often outperforms Recovery Block and that it can successfully compete with the more elaborate Consensus Recovery Block. To the best of our knowledge, this is the first empirical evaluation of the MLV strategy.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"87 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133881186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A conservative theory for long term reliability growth prediction","authors":"P. Bishop, R. Bloomfield","doi":"10.1109/ISSRE.1996.558887","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558887","url":null,"abstract":"The paper describes a different approach to software reliability growth modelling which should enable conservative long term predictions to be made. Using relatively standard assumptions it is shown that the expected value of the failure rate after a usage time t is bounded by λ̃_t ≤ N/(et), where N is the initial number of faults and e is the exponential constant. This is conservative since it places a worst case bound on the reliability rather than making a best estimate. We also show that the predictions might be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long term software reliability growth predictions based solely on prior estimates of the number of residual faults. The predicted bound appears to agree with a wide range of industrial and experimental reliability data. It is shown that less pessimistic results can be obtained if additional assumptions are made about the failure rate distribution of faults.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"734 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133209795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Residual fault density prediction using regression methods","authors":"J. A. Morgan, G. Knafl","doi":"10.1109/ISSRE.1996.558706","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558706","url":null,"abstract":"Regression methods are used to model residual fault density in terms of several product and testing process measures. Process measures considered include discovered fault density, test set size and various coverage measures such as block, decision and all-uses coverage. Product measures considered include lines of code as well as block, decision and all-uses counts. The relative importance of these product/process measures for predicting residual fault density is assessed for a specific data set. Only selected testing process measures, in particular discovered fault density and decision coverage, are important predictors in this case while all product measures considered are important. These results are based on consideration of a substantial family of models, specifically, the family of quadratic response surface models with two-way interaction. Model selection is based on \"leave one out at a time\" cross-validation using the predicted residual sum of squares (PRESS) criterion.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122133804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Primary-shadow consistency issues in the DRB scheme and the recovery time bound","authors":"K. Kim, L. Bacellar, C. Subbaraman","doi":"10.1109/ISSRE.1996.558888","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558888","url":null,"abstract":"The distributed recovery block (DRB) scheme is an approach for realizing both hardware and software fault tolerance in real time distributed and parallel computer systems. We point out that in order for the DRB scheme to yield a high fault coverage and a low recovery time bound, some important consistency requirements must be satisfied by the replicated application tasks in a DRB computing station. Newly identified approaches for meeting the consistency requirements, which involve, among other things, integration of network surveillance and reconfiguration (NSR) techniques with the DRB scheme, are presented. The paper then presents an analysis of the recovery time bound of the DRB scheme. The analysis is based on a modular structured concrete implementation model of the DRB scheme for local area network (LAN) based distributed computer systems, which is called the DRB/T LAN scheme and incorporates an NSR scheme and the newly identified consistency ensuring mechanisms. Finally, we consider approaches for applying the DRB scheme to new types of application computation segments that were not considered before and then discuss approaches for meeting the consistency requirements in such DRB stations. These approaches broaden the application range of the DRB scheme significantly.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122162245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing reliable systems from reliable components using the context-dependent constraint concept","authors":"Peter Molin","doi":"10.1109/ISSRE.1996.558738","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558738","url":null,"abstract":"The problem of composing a system from well-behaving components is discussed. Specifically, necessary conditions for preserving the behaviour in a system context are analysed in this paper. Such conditions are defined as Context-Dependent Constraints (CDC). A non-formal approach is taken based on common system integration errors. It is suggested that the identification and verification of CDCs should be part of any development method based on component verification. The CDCs can also serve as an aid for designing reliable and maintainable systems, where the goal of the design process is to reduce the number of CDCs.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"80 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125890117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software reliability engineering for client-server systems","authors":"N. Schneidewind","doi":"10.1109/ISSRE.1996.558829","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558829","url":null,"abstract":"Too often, when doing software reliability modeling and prediction, the assumption is made that the software involves either a single module or a single node. The reality in today's increasing use of multi-node client-server systems is that there are multiple software entities that execute on multiple nodes that must be modeled in a system context, if realistic reliability predictions and assessments are to be made. For example, if there are N_c clients and N_x servers in a client-server system, it is not necessarily the case that a software failure in any of the N_c clients or N_x servers will cause the system to fail. Thus, if such a system were to be modeled as a single entity, the predicted reliability would be much lower than the true reliability, because the prediction would not account for criticality and redundancy. The first factor accounts for the possibility that the survivability of some clients and servers will be more critical to continued system operation than others, while the second factor accounts for the possibility of using redundant nodes to allow for system recovery should a critical node fail. To address this problem, we must identify which nodes (clients and servers) are critical and which are not, as defined by whether these nodes are used for critical or non-critical functions, respectively.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129810363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability and availability of a wide area network-based education system","authors":"P. Dixit, M. Vouk, D. Bitzer, Christopher Alix","doi":"10.1109/ISSRE.1996.558815","DOIUrl":"https://doi.org/10.1109/ISSRE.1996.558815","url":null,"abstract":"An important class of quality of service (QoS)-dependent network-based applications are computer-based education systems. A successful network-based education (NBE) system needs to provide appropriate QoS at the user level. This includes adequate end-to-end response delay and adequate system reliability and availability. This paper presents results from a reliability and availability evaluation of NovaNET. NovaNET is a successful low-overhead multimedia education system which serves thousands of users on a daily basis. We analyze eight years of failure data and examine correlations among system failure events. The NovaNET data are used to discuss practical bounds on the reliability and availability of an NBE system.","PeriodicalId":441362,"journal":{"name":"Proceedings of ISSRE '96: 7th International Symposium on Software Reliability Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1996-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117328341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}