{"title":"Hyper-Erlang Software Reliability Model","authors":"H. Okamura, T. Dohi","doi":"10.1109/PRDC.2008.20","DOIUrl":"https://doi.org/10.1109/PRDC.2008.20","url":null,"abstract":"This paper proposes a hyper-Erlang software reliability model (HErSRM) in the framework of non-homogeneous Poisson process (NHPP) modeling. The proposed HErSRM is a generalized model which contains some existing NHPP-based SRMs like Goel-Okumoto SRM and Delayed S-shaped SRM, and can represent a variety of software fault-detection patterns. Such characteristics are useful to solve the model selection problem arising in the practical use of NHPP-based SRMs. More precisely, we discuss the statistical inference of HErSRM based on the EM (expectation-maximization) algorithm. In numerical experiments, we show that the HErSRM outperforms conventional NHPP-based SRMs with respect to fitting ability.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129937475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Schemes to Develop Dependable System Using COTS","authors":"K. Tomita, K. Fujiwara, H. Kawasaki, Naoki Miwa, S. Nagai","doi":"10.1109/PRDC.2008.18","DOIUrl":"https://doi.org/10.1109/PRDC.2008.18","url":null,"abstract":"This paper provides general design schemes for the dependable system especially long-term projects using commercial off the shelf (COTS) computers based on the concept we established for dependability. As computer technology has shown significantly rapid pace of evolution, it is necessary to take the evolution into consideration when we develop computer systems. Especially in a large-scale project, the evolutions in computer technology during its term are often seen; thus, it is necessary to establish simply applicable schemes as basic concepts to develop dependable system. Basic considerations are presented here by extruding its requirements for dependable systems from wide area railway systems requiring on-time traffic operation and safety under high-density traffic. Many dependable control systems have already been delivered based on autonomous decentralized concept using COTS systems and been showing their sufficient dependability.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121401549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conjoined Pipeline: Enhancing Hardware Reliability and Performance through Organized Pipeline Redundancy","authors":"Viswanathan Subramanian, Arun Kumar Somani","doi":"10.1109/PRDC.2008.54","DOIUrl":"https://doi.org/10.1109/PRDC.2008.54","url":null,"abstract":"Reliability has become a serious concern as systems embrace nanometer technologies. In this paper, we propose a novel approach for organizing redundancy that provides high degree of fault tolerance and enhances performance. We replicate both the pipeline registers and the pipeline stage combinational logic. The replicated logic receives its inputs from the primary pipeline registers while writing its output to the replicated pipeline registers. The organization of redundancy in the proposed conjoined pipeline system supports overclocking, provides concurrent error detection and recovery capability for soft errors, intermittent faults and timing errors, and flags permanent silicon defects. The fast recovery process requires no checkpointing and takes three cycles. Back annotated post-layout gate level timing simulations, using 45 nm technology, of a conjoined two stage arithmetic pipeline and a conjoined five stage DLX pipeline processor, with forwarding logic, show that our approach achieves near 100% fault coverage, under a severe fault injection campaign, while enhancing performance, on an average by about 20%, when dynamically overclocked and 35%, when maximally overclocked.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129999461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Inference Approach for Probabilistic Analogy Based Software Maintenance Effort Estimation","authors":"Y. F. Li, M. Xie, T. Goh","doi":"10.1109/PRDC.2008.21","DOIUrl":"https://doi.org/10.1109/PRDC.2008.21","url":null,"abstract":"Software maintenance effort estimation is essential for the success of software maintenance process. In the past decades, many methods have been proposed for maintenance effort estimation. However, most existing estimation methods only produce point predictions. Due to the inherent uncertainties and complexities in the maintenance process, the accurate point estimates are often obtained with great difficulties. Therefore some prior studies have been focusing on probabilistic predictions. Analogy Based Estimation (ABE) is one popular point estimation technique. This method is widely accepted due to its conceptual simplicity and empirical competitiveness. However, there is still a lack of probabilistic framework for ABE model. In this study, we first propose a probabilistic framework of ABE (PABE). The predictive PABE is obtained by integrating over its parameter k number of nearest neighbors via Bayesian inference. In addition, PABE is validated on four maintenance datasets with comparisons against other established effort estimation techniques. The promising results show that PABE could largely improve the point estimations of ABE and achieve quality probabilistic predictions.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120954748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Considering Fault Correction Lag in Software Reliability Modeling","authors":"Yanjun Shu, Zhibo Wu, Hongwei Liu, Xiaozong Yang","doi":"10.1109/PRDC.2008.47","DOIUrl":"https://doi.org/10.1109/PRDC.2008.47","url":null,"abstract":"The fault correction process is very important in software testing, and it has been considered into some software reliability growth models (SRGMs). In these models, the time-delay functions are often used to describe the dependency of the fault detection and correction processes. In this paper, a more direct variable \"correction lag\", which is defined as the difference between the detected and corrected fault numbers, is addressed to characterize the dependency of the two processes. We investigate the correction lag and find that it appears Bell-shaped. Therefore, we adopt the Gamma function to describe the correction lag. Based on this function, a new SRGM which includes the fault correction process is proposed. And the experimental results show that the new model gives better fit and prediction than other models.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128567744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Adaptive Covert Communication System","authors":"Fedor V. Yarochkin, Shih-Yao Dai, Chihhung Lin, Yennun Huang, S. Kuo","doi":"10.1109/PRDC.2008.26","DOIUrl":"https://doi.org/10.1109/PRDC.2008.26","url":null,"abstract":"Covert channels are secret communication paths, which existance is not expected in the original system design. Covert channels can be used as legimate tools of censorship resistance, anonimity and privacy preservation to address issues with \"national\" firewalls, citizen profiling and other \"unethical\" uses of information technology. Current steganographic methods that implement covert channels within network traffic, are highly dependent on particular media data or network protocol to hide data. In this paper we investigate the methods and an algorithm for implementing adaptive covert communication system that works on real-world Internet, capable of using multiple application-level protocols as its communication media and can be implemented as network application, therefore requires no system modifications of communicating nodes. The key difference from previous solutions is the use of adaptive redundant mechanism, which allows real-time underlying protocol switching and adaptation to the dynamic network configuration changes.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116653417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generic Design and Automatic Deployment of NMR Strategies on HW Cores","authors":"Juan-Carlos Ruiz-Garcia, D. Andrés, S. Blanc, P. Gil","doi":"10.1109/PRDC.2008.51","DOIUrl":"https://doi.org/10.1109/PRDC.2008.51","url":null,"abstract":"Hardware fault tolerance is a requirement even for noncritical applications, since unexpected failures may damage the reputation of manufacturers and limit the acceptance of their products. However, current practices for the design and deployment of hardware redundancy techniques remain in practice specific (defined on a case-per-case basis) and mostly manual. This paper addresses the challenging problems of (i) engineering NMR strategies in a generic way, and (ii) automating their deployment. This approach relies on metaprogramming to specify NMR mechanisms and open compilers to automatically deploy such mechanisms on the selected hardware core. Fault injection complements that approach by providing the means to (i) determine the best core for replication, and (ii) check the effectiveness of the deployed NMR strategy. A PIC microcontroller is used as case study to exemplify the approach and show its feasibility.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115355831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training Security Assurance Teams Using Vulnerability Injection","authors":"J. Fonseca, M. Vieira, H. Madeira","doi":"10.1109/PRDC.2008.43","DOIUrl":"https://doi.org/10.1109/PRDC.2008.43","url":null,"abstract":"Writing secure Web applications is a complex task. In fact, a vast majority of Web applications are likely to have security vulnerabilities that can be exploited using simple tools like a common Web browser. This represents a great danger as the attacks may have disastrous consequences to organizations, harming their assets and reputation. To mitigate these vulnerabilities, security code inspections and penetration tests must be conducted by well-trained teams during the development of the application. However, effective code inspections and testing takes time and cost a lot of money, even before any business revenue. Furthermore, software quality assurance teams typically lack the knowledge required to effectively detect security problems. In this paper we propose an approach to quickly and effectively train security assurance teams in the context of web application development. The approach combines a novel vulnerability injection technique with relevant guidance information about the most common security vulnerabilities to provide a realistic training scenario. Our experimental results show that a short training period is sufficient to clearly improve the ability of security assurance teams to detect vulnerabilities during both code inspections and penetration tests.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116069158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding the Optimal Configuration of a Cascading TMR System","authors":"Masashi Hamamatsu, Tatsuhiro Tsuchiya, T. Kikuno","doi":"10.1109/PRDC.2008.12","DOIUrl":"https://doi.org/10.1109/PRDC.2008.12","url":null,"abstract":"We consider systems comprised of multiple triple modular redundancy (TMR) units in series. Only recently have researchers found that even such simple systems can be configured into various structures. We develop an algorithm for finding a structure that maximizes reliability. Using this algorithm we show that new structures have optimal reliability within some ranges of voter and module reliability.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124092937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Availability Analysis of Robotic Swarm Systems","authors":"Yansheng Zhang, F. Bastani, I. Yen, Jicheng Fu, I. Chen","doi":"10.1109/PRDC.2008.37","DOIUrl":"https://doi.org/10.1109/PRDC.2008.37","url":null,"abstract":"Availability analysis is an important issue in robotic swarm systems. It can help the designer to construct a cost-effective system with high availability and fewer resources. For the model and analysis to be fully specified and practical, this paper systematically investigates the major issues that need to be addressed in the analysis of various robotic swarm applications. Four models are established to consider systems with dependent and independent robots, homogenous and non-homogenous systems, and the effect of various types of motions on the overall system failure pattern. Detailed analysis of each of these models is performed based on renewal theory and continuous Markov Chain techniques. Numerical availability evaluations for two applications are also presented.","PeriodicalId":369064,"journal":{"name":"2008 14th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121545388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}