{"title":"Contracting for system availability under fleet expansion: Redundancy allocation or spares inventory?","authors":"T. Jin, Yisha Xiang, H. Taboada","doi":"10.1109/RAM.2017.7889766","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889766","url":null,"abstract":"Operational availability is a fundamental measure to assess the system performance after the installation. To achieve the desired availability goals, various strategies have been discussed, ranging from preventive maintenance, reliability-redundancy allocation (RRA), to spare parts logistics. RRA aims to extend the system uptime while spare parts logistics can reduce the downtime. These methods become difficult to choose if the fleet size changes over time. This situation often occurs in the new product introduction stage. This paper develops new cost model and analyzes the trade-off between redundancy allocation and spare parts stocking. Our model is built upon an integrated product-service mechanism where the firm manufactures the products and also provides after-sales support. We show that component redundancy is preferred over spare part inventory under long-term, performance-based contract. Examples from semiconductor equipment industry are used to demonstrate the application of the proposed method.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126219268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Risk modeling of variable probability external initiating events","authors":"Jose Dempere, N. Papakonstantinou, B. O’Halloran, Douglas L. Van Bossuyt","doi":"10.1109/RAM.2017.7889704","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889704","url":null,"abstract":"As components engineering has progressively advanced over the past 20 years to encompass a robust element of reliability, a paradigm shift has occurred in how complex systems fail. While failures used to be dominated by ‘component failures,’ failures are now governed by other factors such as environmental factors, integration capability, design quality, system complexity, built in testability, etc. Of these factors, environmental factors are difficult to predict and assess. While test regimes typically encompass environmental factors, significant design changes to the system to mitigate any failures found is not likely to occur based on the cost. The early stages of the engineering design process offer a significant opportunity to evaluate and mitigate risks due to environmental factors. Systems that are expected to operate in a dynamic and changing environment have significant challenges for assessing environmental factors. For example, external failure initiating event probabilities will change with respect to time and new types of external initiating events can be expect with respect to time. While some of the well exercised methods such as Probabilistic Risk Assessment (PRA) [Error! Reference source not found.] and Failure Modes and Effects Analysis (FMEA) [Error! Reference source not found.] can partially address a time-dependent external initiating event probability, current methods of analyzing system failure risk during conceptual system design cannot. As a result, we present our efforts at developing a Time Based Failure Flow Evaluator (TBFFE). This method builds upon the Function Based Engineering Design (FBED) [Error! Reference source not found.] method of functional modeling and the Function Failure Identification and Propagation (FFIP) [Error! Reference source not found.] failure analysis method that is compatible with FBED. Through the development of TBFFE, we have found that it can provide significant insights into a design that is to be used in an environment with variable probability external initiating events and unique external initiating events. We present a case study of the conceptual design of a nuclear power plant's spent fuel pool undergoing a variety of external initiating events that vary in probability based upon the time of year. The case study illustrates the capability of TBFFE by identifying how seasonally variable initiating event occurrences can impact the probability of failure on a month timescale that otherwise would not be seen on a yearly timescale. Changing the design helps to reduce the impact that time-varying initiating events have on the monthly risk of system failure.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130153411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability study on high-k bi-layer dielectrics","authors":"Faranak Fathi Aghdam, H. Liao","doi":"10.1109/RAM.2017.7889746","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889746","url":null,"abstract":"As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure mechanisms. This necessitates the development of new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor, and it is subject to various failure mechanisms. For such devices, dielectric breakdown is the most critical failure mode and has become a major barrier for reliable circuit design in nanoscale. Due to aggressive needs for the downscaling of transistors, dielectric films are made extremely thin. This has led to adopting high permittivity (k) dielectrics as an alternative to previously widely used SiO2, in recent years. Since most time-dependent dielectric breakdown test data on high-k bi-layer stacks significantly deviate from the Weibull trend, we propose a new approach to modeling the corresponding time-to-breakdown in this paper. A marked space-time self-exciting point process is employed in modeling defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is developed to minimize the Kullback-Leibler divergence between the empirical distributions of real and simulated data to find the best set of the parameters and predict the total time-to-failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectrics that is a function of the times, locations and sizes of previous defects.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116798697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human reliability assessments: Using the past (Shuttle) to predict the future (Orion)","authors":"D. DeMott, M. Bigler","doi":"10.1109/RAM.2017.7889780","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889780","url":null,"abstract":"NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment needs to consider more than the time available, this would include factors such as: the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than that based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a “reasonable” assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, the experience from individuals who had relevant experience and were familiar with the system and process previously implemented by NASA was used to provide the “best” available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127234931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uniform analysis of fault trees through model transformations","authors":"Enno Ruijters, S. Schivo, M. Stoelinga, A. Rensink","doi":"10.1109/RAM.2017.7889759","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889759","url":null,"abstract":"As the critical systems we rely on every day, such as nuclear power plants and airplanes, become ever more complex, the need to rigorously verify the safety and dependability of these systems is becoming very clear. Furthermore, deliberate attacks have become a prominent cause of concern for safety and reliability. One of the most prominent techniques for analyzing such systems is fault tree analysis (FTA), and a whole forest of variants, extensions, and analysis tools have been developed. In the security field, FTA was the inspiration for attack trees, used to analyze systems for vulnerability to malicious attacks. These formalisms are rarely compatible, making it difficult to exploit their different strengths in analyzing the same system. The key contribution of this paper is a meta-model describing many varieties of fault and attack trees, and well as combined attack-fault trees. We provide translations to and from different formalisms, as well as our own analysis engine for combined models. We demonstrate this framework on three case studies.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129354655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A non-parametric control chart for high frequency multivariate data","authors":"Deovrat Kakde, Sergiy Peredriy, A. Chaudhuri, Anya McGuirk","doi":"10.1109/RAM.2017.7889786","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889786","url":null,"abstract":"Support Vector Data Description (SVDD) is a machine learning technique used for single class classification and outlier detection. A SVDD based K-chart was first introduced by Sun and Tsung [4]. K-chart provides an attractive alternative to the traditional control charts such as the Hotelling's T2 charts when the distribution of the underlying multivariate data is either non-normal or is unknown. But there are challenges when the K-chart is deployed in practice. The K-chart requires calculating the kernel distance of each new observation but there are no guidelines on how to interpret the kernel distance plot and draw inferences about shifts in process mean or changes in process variation. This limits the application of K-charts in big-data applications such as equipment health monitoring, where observations are generated at a very high frequency. In this scenario, the analyst using the K-chart is inundated with kernel distance results at a very high frequency, generally without any recourse for detecting presence of any assignable causes of variation. We propose a new SVDD based control chart, called a kT chart, which addresses the challenges encountered when using a K-chart for big-data applications. The kT charts can be used to track simultaneously process variation and central tendency.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"80 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121003477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability and availability measure and assessment of multistage production systems","authors":"Jian Guo, Z. Li, Wendai Wang","doi":"10.1109/RAM.2017.7889708","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889708","url":null,"abstract":"This paper investigates an interesting research topic of defining and measuring reliability and availability of multistage production systems. Most current production systems include multiple stations or stages with possible varying buffer capacity in each station. The configurations of buffer resources/equipment and their reliability performance in one station are interdependent with adjacent stations, which makes it challenging to define and measure the reliability and availability of the overall system. Stochastic processes such as Markov process is introduced to model the reliability and availability performance of multistage production systems. The relationship of reliability/availability and the traditional performance metrics such as cycle times and throughputs in modeling production systems are investigated. Simulation models are introduced to verify performance of the proposed methods under complex and varying multistage production settings.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123142989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New FIDES models for emerging technologies","authors":"Patrick Carton, M. Giraudeau, F. Davenel","doi":"10.1109/RAM.2017.7889686","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889686","url":null,"abstract":"The purpose of this paper is to describe the PISTIS project, mainly focused on the reliability of emerging technologies involved in electronic systems. PISTIS is a French acronym, meaning faith, trust and confidence, from the Greek origin. Managing the reliability risk is a big challenge in rugged environments. PISTIS is linked to FIDES, a guide allowing reliability prediction of electronic systems. Results from in-service study presented in this paper show the accordance between FIDES predictions and reliability observed. This confirmed the interest to complete FIDES models by taking into account intrinsic wear-out effects limiting the operating lifetime. The PISTIS project started in 2015. Depending on the technologies and their main failure mechanisms, different long-term test processes are set up to evaluate the wear-out effects. To be able to construct reliability prediction models taking into account these effects, the stress level of reliability tests need to be close to the actual extreme use conditions and mission profiles in which electronic equipment are used.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123155108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing life cycle cost of wind turbine blades using predictive analytics in effective maintenance planning","authors":"Amith Nag Nichenametla, Srikanth Nandipati, Abhay Laxmanrao Waghmare","doi":"10.1109/RAM.2017.7889682","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889682","url":null,"abstract":"A wind turbine blade is capital equipment vital enough to be protected and maintained for inherent safety and reliability during lifetime due to its high impact on turbine availability in event of failure / repair. Unlike matured industries like aerospace, there are no specific guidelines for maintenance plans and mostly the repairs are reactive in nature. This leads to very high cost of maintenance owing to longer downtime of the turbine raising a need to derive an effective maintenance strategy demanding reliability centered maintenance, also facilitating business decisions on spares, service and maintenance requirements through use of available field information, supported by a predictive analytics and reliability models with an overall objective of reducing the operation cost and gaining higher levels of reliability. This paper is an attempt to make use of the widely practiced Predictive Analytics techniques in wind domain to address such challenges and remain competitive in the market. The model built was able to take inputs from different stages of the product life cycle providing a mathematical relationship with respect to failures and contributing factors, allowing addressing the blades that are in critical need of inspection and maintenance at any given point of time based on the rate of wear out. This further becomes a critical input for maintenance planning thereby reducing the operational cost and also attaining high levels of Reliability. Additionally, the model built also provides feedback to the different stages of blade life cycle in terms of setting targets that are required in order to maintain a certain level of Reliability in the field.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124897071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resilience and stakeholder need","authors":"R. Emanuel","doi":"10.1109/RAM.2017.7889705","DOIUrl":"https://doi.org/10.1109/RAM.2017.7889705","url":null,"abstract":"The current resilience literature lacks a thorough comparison of the behavior of resilience metrics using fundamental models of system performance. To close this gap, this study identifies three metrics that either encompass or can be easily amended to encompass resilience definition of resilience as proposed by Ayyub [1]. The three selected metrics are integral resilience [1], [2], quotient resilience [3], [4], and expected system degradation function [5]. While each of these metrics measures resilience in its own way, gaps exist that affect the metrics' decision-support potential. This study identifies gaps common to these metrics, which limit their decision support value. The gaps include: (1) Lack of consideration of stakeholder performance preferences. (2) Lack of consideration of different stakeholder time horizon. (3) Lack of performance substitution over time. The first step of the study is to modify the three selected metrics to satisfy the broad definition of resilience if necessary. The second step is to develop extended versions of the metric to close the three identified gaps. The third step is to compare the six metrics using a fundamental model of performance and need with known variables (failure time, robustness, recovery time, recovery performance level, etc.). The extended metrics demonstrate different values from the original metrics which are consistent with the spirit of the metrics and largely congruent with intuition.","PeriodicalId":138871,"journal":{"name":"2017 Annual Reliability and Maintainability Symposium (RAMS)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125134210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}