{"title":"Reliability analysis of large fault trees using the Vesely failure rate","authors":"S. Amari, J. Akers","doi":"10.1109/RAMS.2004.1285481","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285481","url":null,"abstract":"Fault trees provide a compact, graphical, intuitive method to analyze system reliability. However, combinatorial fault tree analysis methods, such as binary decision diagrams, cannot be used to find the reliability of systems with repairable components. In such cases, the analyst should use either Markov models explicitly or generate Markov models from fault trees using automatic conversion algorithms. This process is tedious and generates huge Markov models even for moderately sized fault trees. In this paper, the use of the Vesely failure rate as an approximation to the actual failure rate of the system to find the reliability-based measures of large fault trees is demonstrated. The main advantage of this method is that it calculates the reliability of a repairable system using combinatorial methods such as binary decision diagrams. The efficiency of this approximation is demonstrated by comparing it with several other approximations and provide various bounds for system reliability. The usefulness of this method in finding the other reliability measures such as MTBF, MTTR, MTTF, and MTTFF is shown. Finally, extending this method to analyze complex fault trees containing static and dynamic modules as well as events represented by other modeling tools.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122206866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling reliability growth in the product design process","authors":"M. Krasich, J. Quigley, L. Walls","doi":"10.1109/RAMS.2004.1285486","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285486","url":null,"abstract":"Relying on reliability growth testing to improve system design is not always cost effective and certainly not efficient. Instead, it is important to design in reliability. This requires models to estimate reliability growth in the design and to assess whether goal reliability will be achieved within the target timescale. While many models have been developed for analysis of reliability growth in test, there has been less attention given to reliability growth in design. This paper proposes and compares two models - one motivated by the practical engineering process (the modified power law) and the other by extending the reasoning of statistical reliability growth modeling (the modified IBM). The commonalities and differences between these models are explored through an assessment of their logic and an application. We conclude that the choice of model depends on the growth process being modeled. Key drivers are the type of system design and the project management of the growth process. When the design activities are well understood and project workloads can be managed evenly, leading to predictable and equally spaced modifications each of which having a similar effect on the reliability of the item, then the modified power law is a more appropriate model. On the other hand, the modified IBM is more appropriate for more uncertain situations, where the reliability improvement of a design is driven by the removal of faults, which are yet unknown and only through further investigation of the design, these can be identified. These situations have less predictable workloads and fewer modifications are likely later on in the project.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122215227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized decomposition method for complex systems","authors":"Wendai Wang, Mingxiao Jiang","doi":"10.1109/RAMS.2004.1285416","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285416","url":null,"abstract":"Many practical systems have complex structures that are not purely series, parallel, or series-parallel, such as communication networks, power plant control systems, electric power systems, computing systems, and etc. The emergence of those network redundant systems has been seen increasingly due to very high reliability requirements. The complex redundant techniques greatly improve system reliability, but also increase the difficulty in system reliability analysis to a great extent as well. For most of these practical complex systems, the existing methods give no way to obtain the expression of the system reliability, which is desire for design purpose. To deal with this problem, this paper proposes an approach generalized from the decomposition method, in which the conditional probability equation is extended from one key component to several keystones at a time. It has been found that the reliability of many complex systems becomes obtainable in this way, especially for the complex networks with certain structural patterns. The general reliability expression of the proposed method is given in this paper. The method is illustrated by examples, and the power of the method is demonstrated by applying it to a real complex system.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122469888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bayesian belief network model and tool to evaluate risk and impact in software development projects","authors":"A. Hui, D. Liu","doi":"10.1109/RAMS.2004.1285464","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285464","url":null,"abstract":"A recent survey indicates that 53 % of software development projects are over budget, behind schedule, or deliver fewer features than originally specified. Statistics also show that 31 % of development projects end up being cancelled or terminated prematurely. Among those completed projects, only 61 % of them satisfy originally specified features and functions. In today's environment, one of the greatest challenges a project manager constantly face with is to keep the projects under control in terms of budget and development time frame. A successful software development project relies on many factors; it is not that easy to control all of them and continually keeping those entire factors all going well together. The goal of this paper is to introduce a mathematical model and prove that a software development team can rely on it to accurately predict, calculate the risks and their impacts on the success of the project. Our objective is to conceptualize the model into a scientific tool that can be used to understand and calculate the risks of a development project. Subsequently, the software development team can take appropriate actions to mitigate the risks, and as a result, the project manager have a better control of the budget and development time frame of the project. It is the author's believe that if we can identify and control problems at early stages, we can significantly increase the chance of success of the development project. The model and the software tool written by the author in this paper to calculate the risks and weight their impacts on a project can be used to identify problems and their potentials risk at early stage. The tool also allows a project manager to apply the model and obtain results without getting involved in too many mathematical calculations. Although the model introduced in this paper can generally provide an accurate picture of what, how, and when things may go wrong at the beginning of a typical software development project, there are areas need further fine tuned, especially when it is used for a particular industry or at later stages of software development cycle.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"65 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132070522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring cost avoidance in the face of messy data","authors":"J. Romeu, J. Ciccimaro, J. Trinkle","doi":"10.1109/RAMS.2004.1285440","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285440","url":null,"abstract":"This paper presents alternative methods to forecast or predict failure trends when the data violates the assumptions associated with least squares linear regression. Simulations based on actual case studies validated that least squares linear regression may provide a biased model in the presence of messy data. Non-parametric regression methods provide robust forecasting models less sensitive to non-constant variability, outliers, and small data sets.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132358179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inventory optimization techniques, system vs. item level inventory analysis","authors":"C. Adams","doi":"10.1109/RAMS.2004.1285423","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285423","url":null,"abstract":"Inventory optimization attempts to find the best distribution of inventory that meets specified cost and availability goals. An abundance of commercial off the shelf spares models have recently implemented spares optimization techniques to determine least cost spares solutions. This paper examines two different spares analysis methods and optimization approaches to determine the best method to predict initial stock requirements and quantify the risk and/or benefits in using spares optimization techniques. Spares Analysis methods discussed in this paper include the traditional item level availability method and system level availability method. Spares optimization methods include applying marginal analysis and a technique known as genetic algorithms to perform optimization using the system level availability method. It is concluded that while inventory optimization methods may find low cost inventory distributions, it is important to quantify the risk of selecting these distributions if there is a reasonable amount of uncertainty in the inventory model input parameters.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134208679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Steady-state availability estimation using field failure data","authors":"R. M. Fricks, M. Ketcham","doi":"10.1109/RAMS.2004.1285427","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285427","url":null,"abstract":"This paper introduces a novel technique for computing confidence limits associated with steady-state availability estimation using field failure data. The proposed cumulative downtime distribution (CDD) method implements a simple, though powerful, availability inference procedure based on the statistical properties of the distribution of sample means of the cumulative system outage time. Another advantage of this new approach over more traditional estimation methods is that it makes no assumptions regarding the lifetime or time to repair distributions of the system under observation. A simulation model was developed to compare the coverage probability of the confidence limits computed using the CDD method and the more traditional two-state equivalent (TSE) method. Simulation runs are used to support that confidence intervals determined with the CDD method seem to be exact. On the other hand, confidence intervals determined using the TSE method seem to be only approximated. Additionally, the CDD method was shown to provide an excellent framework for the application of other statistical inference procedures such as hypothesis testing. Our future research intends to verify the quality of the CDD method using more complex system models and more exhaustive simulation experiments. We also want to verify the algorithm behavior applied to deployed systems with different maturity levels.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122817143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pioneers of the reliability theories of the past 50 years","authors":"Alice Rueda, M. Pawlak","doi":"10.1109/RAMS.2004.1285431","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285431","url":null,"abstract":"This paper is dedicated to all the researchers for their contributions in reliability theories in the past 50 years. The paper provides a summary on the pioneers of reliability theories, and how their works placed a great influence on our reliability analysis today. This is also a survey paper on reliability theories and methods. The information provided in this paper is mostly based on literatures found first hand to provide as much a neutral view as possible. However, some of the information is adopted from Refs. 1-4. Area of interest in the reliability analysis included representation of reliability parameters, renewal theory, coherent structure, diagram-based models, theoretical methods, and other miscellaneous techniques. Diagram based models included block diagrams, fault tree analysis (FTA), event tree analysis, and flowgraphs. Theoretical methods included queueing theory, asymptotic analysis, Boolean algebra, Bayesian method, Monte Carlo simulation, optimization techniques. Miscellaneous methods that cannot be classified in any of the categories are also provided. Looking back in the last century, a lot of the contributions to reliability research were done in the last 50 years. Weibull, Epstein and Sobel had made a significant influence on the distribution functions we used today. Lotka, Campbell, Feller, Cox, Smith, Barlow, Proschan, Hunter, Marshall, Esary, Gnedenko, Belyaev, and Solov'yev had advanced the theories for reliability. Takacs' paper in sojourn time provided an initiative to the asymptotic studies. Birnbaum started a whole family on component importance measure for coherent structure.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130507649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software safety analysis: using the entire risk analysis toolkit","authors":"V. Guthrie, P. Parikh","doi":"10.1109/RAMS.2004.1285460","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285460","url":null,"abstract":"When an accident occurs, it is common to attribute the accident to a failure in the system. Therefore, precautions must be taken to design the system to provide safeguards that supports the system even when failures occur. The problem, however, is that accident occur where there is no failure in the system (i.e., the software, hardware, and humans \"work\" as they are supposed to). The flaw is in the design oversight for specific high-risk situations. It is up to the decision maker to: (a) ensure that adequate design and safety checks have been performed before the system is put into operation (b) ensure that a comprehensive risk analysis is conducted to examine both the design element malfunctions and the design oversights to determine the loss sequences (c) be satisfied that the loss sequences are understood with adequate confidence that the system risk is at or below the risk acceptance criteria.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131442419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerated stress testing to detect probabilistic software failures","authors":"L. Gullo, R. J. Davis","doi":"10.1109/RAMS.2004.1285456","DOIUrl":"https://doi.org/10.1109/RAMS.2004.1285456","url":null,"abstract":"This paper describes the advantages of accelerated stress testing and addresses the value of accelerated stressing to find probabilistic software failure modes. The value has been widely reported in discovery of traditional hardware-related failure modes, but has not been widely accepted as valuable in finding software errors. This paper provides a business case study in accelerated stressing to find both the traditional hardware failure mechanisms and the elusive probabilistic software failures. This case is presented with a cost benefit analysis including warranty cost avoidance. This business case quantifies the value of accelerated stress testing by demonstrating that in the development phase, accelerated stress testing finds software errors that would have been found later in the product's life cycle after the product is launched and manufactured in high volume, which would have made a detrimental impact to the company's bottom-line.","PeriodicalId":270494,"journal":{"name":"Annual Symposium Reliability and Maintainability, 2004 - RAMS","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115825479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}